AI Lab

The AI Lab collaborates with the Wireless and Video Labs to bring intelligence into real-world applications like streaming, connectivity, and device performance, while pioneering technologies that are only beginning to take shape, from machine-first video compression to data-efficient AI training. By starting early, we help lay the groundwork for the capabilities and experiences that will define the next decade.

Our AI Work in Action

Our research contributes both to real-world applications that people rely on today and to emerging technologies still taking shape.

Autonomous Security Systems

AI responds faster than traditional safety systems that rely on constant human oversight. We help develop protocols that compress and translate video into machine-interpretable formats, enabling cameras to detect urgent events—like a patient falling in a hospital or a break-in at a home—and trigger an immediate response.
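The detect-and-respond loop described above can be pictured as a minimal sketch: a scoring function (a stand-in for a real detector) examines compact per-frame features and fires an alert the moment an urgent event crosses a confidence threshold, with no human in the loop. All names, scores, and thresholds here are illustrative assumptions, not actual system parameters.

```python
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.9  # confidence above which we trigger a response


@dataclass
class FrameFeatures:
    """Compact machine-oriented representation of one video frame."""
    timestamp: float
    fall_score: float       # hypothetical detector outputs in [0, 1]
    intrusion_score: float


def detect_urgent_event(features: FrameFeatures) -> Optional[str]:
    """Return an event label if any score crosses the alert threshold."""
    if features.fall_score >= ALERT_THRESHOLD:
        return "patient_fall"
    if features.intrusion_score >= ALERT_THRESHOLD:
        return "break_in"
    return None


def monitor(stream):
    """Scan a feature stream and yield (timestamp, event) alerts immediately."""
    for f in stream:
        event = detect_urgent_event(f)
        if event is not None:
            yield (f.timestamp, event)


# Example: two quiet frames, then a fall detected at t=2.0
frames = [
    FrameFeatures(0.0, 0.05, 0.01),
    FrameFeatures(1.0, 0.10, 0.02),
    FrameFeatures(2.0, 0.97, 0.03),
]
alerts = list(monitor(frames))  # [(2.0, "patient_fall")]
```

Because the loop operates on small feature records rather than raw video, the decision can be made on-device, without waiting on a human or a cloud round trip.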

AI-Led Inspections and Quality Control

From inspecting bridges with drones to verifying quality control on a factory conveyor belt, our AI vision and compression technology is making it possible to complete real-time industrial checks without needing to stream energy-intensive video, which allows faster, more frequent, and greener maintenance.

Smarter Edge and IoT Devices

Imagine smartphones, smart watches, and industrial sensors that learn and adapt without constantly sending data to the cloud. By enabling training on small, "distilled" datasets, our work helps these devices train AI locally—improving privacy, battery life, and performance—while reducing the amount of information that must be exchanged in federated learning systems.
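One way to picture this: a device fits a tiny model on a handful of distilled data points, then shares only the resulting weight update with a coordinating server, as in federated learning. The one-parameter linear model and all numbers below are purely illustrative, not a representation of the actual training methods involved.

```python
def train_local(weights: float, dataset, lr: float = 0.1, epochs: int = 20) -> float:
    """Fit a 1-D linear model y = w*x on-device with plain SGD."""
    w = weights
    for _ in range(epochs):
        for x, y in dataset:
            grad = 2 * (w * x - y) * x  # gradient of squared error
            w -= lr * grad
    return w


# A "distilled" dataset: a handful of synthetic points standing in for
# thousands of raw samples (here drawn from the relation y = 3x).
distilled = [(0.5, 1.5), (1.0, 3.0), (2.0, 6.0)]

global_w = 0.0
local_w = train_local(global_w, distilled)

# Federated step: the device uploads only the weight delta (a single
# number), never the raw sensor data it collected.
update = local_w - global_w
```

Training on three points instead of thousands, and exchanging one scalar instead of the data itself, is the mechanism behind the privacy, battery, and bandwidth gains described above.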

AI-Enabled Services

Video dominates wireless network traffic, yet today's systems often treat video and wireless independently. Our research bridges them by creating intelligent networks that understand an application's specific goal and enabling applications that understand the network's real-time conditions. This allows for on-the-fly optimizations, delivering precisely the right quality video for any user, human or machine, while making the most of available wireless resources.
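A minimal sketch of such an on-the-fly optimization: a selector that knows both the consumer's goal (human or machine) and the network's measured bandwidth picks an encoding accordingly. The bitrate ladder, profile names, and the 500 kbps feature-stream budget are hypothetical values for illustration only.

```python
def select_encoding(bandwidth_kbps: float, consumer: str) -> dict:
    """Pick an encoding profile from current network conditions and the
    application's goal (profiles and numbers are illustrative)."""
    if consumer == "machine":
        # Machines need task-relevant features, not perceptual quality:
        # a compact feature stream fits a small bitrate budget.
        return {"mode": "feature_stream",
                "bitrate_kbps": min(bandwidth_kbps, 500)}
    # Humans need smooth, perceptually pleasing video: pick the highest
    # ladder rung that fits the measured bandwidth.
    ladder = [(4500, "1080p"), (2500, "720p"), (1000, "480p"), (400, "240p")]
    for bitrate, resolution in ladder:
        if bandwidth_kbps >= bitrate:
            return {"mode": "video", "bitrate_kbps": bitrate,
                    "resolution": resolution}
    return {"mode": "video", "bitrate_kbps": 400, "resolution": "240p"}


profile = select_encoding(3000, "human")    # -> 720p video
machine = select_encoding(3000, "machine")  # -> compact feature stream
```

The same measured conditions yield very different decisions depending on who is watching, which is the point of making network and application aware of each other.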

Bridge to 6G: Spotlight on 3GPP Release 20

“Bridge to 6G: Spotlight on 3GPP Release 20”, authored by ABI Research and commissioned by InterDigital, explores how 3GPP Release 20 serves as the pivotal link between 5G-Advanced and the next-generation 6G ecosystem. With 5G reaching maturity across global markets, the document highlights the industry’s transition toward 6G, emphasizing lessons learned from 5G’s fragmented rollout and the need for simplified, energy-efficient, and AI-native network architectures.

Explore Our AI Research Areas in Depth

See where we focus our AI research in the short and long term to enable emerging technologies:

Data Efficiency

Making advanced AI practical requires reducing its dependence on massive datasets. Our work in data efficiency enables powerful machine learning models to be trained on just a fraction of the data traditionally required. By distilling knowledge into compact, representative datasets, we make AI more accessible to even the most resource-constrained devices. These advances allow AI to operate where it once could not, accelerating innovation.
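As a toy illustration of the idea, the sketch below collapses a large labeled dataset into one representative point per class and shows that a simple classifier trained on those few points still separates the data. Real dataset distillation learns synthetic samples by optimization; class means are a deliberate simplification, and all data here is synthetic.

```python
from statistics import mean


def distill(dataset):
    """Collapse each class's samples into a single representative point
    (the class mean) -- a tiny stand-in for learned dataset distillation."""
    by_label = {}
    for x, label in dataset:
        by_label.setdefault(label, []).append(x)
    return [(mean(xs), label) for label, xs in by_label.items()]


def nearest_mean_classify(x, distilled):
    """Classify by the closest representative point."""
    return min(distilled, key=lambda p: abs(p[0] - x))[1]


# 1000 raw samples: class "low" clusters near 1.0, class "high" near 5.0
raw = [(1.0 + 0.001 * i, "low") for i in range(500)] + \
      [(5.0 + 0.001 * i, "high") for i in range(500)]

compact = distill(raw)  # just 2 points stand in for 1000 samples
```

Shrinking the training set by orders of magnitude while keeping it representative is what lets constrained devices train at all.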

Physics-Informed Models

Even when trained with massive datasets, AI typically cannot account for every rare yet critical event—like a self-driving car navigating a sudden blizzard—but our Physics-Informed Models are solving this problem. By embedding the fundamental laws of physics directly into our AI, we can generate endless streams of physically realistic data from just a small set of examples and train robust, reliable AI systems to respond safely and effectively to the unexpected.
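The data-generation side of this idea can be sketched simply: because the governing equations are known, every synthesized sample is physically consistent by construction. Below, ideal projectile motion stands in for the embedded physics, and a few seed launch conditions (hypothetical values) are expanded into many valid training trajectories.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def trajectory(v0: float, angle_deg: float, steps: int = 5):
    """Generate (t, x, y) samples of ideal projectile motion -- data that is
    physically consistent because it comes from the law itself, not from
    noisy measurements."""
    theta = math.radians(angle_deg)
    t_flight = 2 * v0 * math.sin(theta) / G
    samples = []
    for i in range(steps + 1):
        t = t_flight * i / steps
        x = v0 * math.cos(theta) * t
        y = v0 * math.sin(theta) * t - 0.5 * G * t * t
        samples.append((t, x, y))
    return samples


# From two observed launch conditions (hypothetical seeds), synthesize
# physically valid variants to enlarge the training set.
seeds = [(20.0, 45.0), (25.0, 30.0)]
synthetic = [trajectory(v0 + dv, ang)
             for v0, ang in seeds for dv in (-1.0, 0.0, 1.0)]
```

A model trained on such streams sees far more of the state space, including rare conditions, than the original handful of measurements could ever cover.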

Interoperability

AI models are often built in silos, yet today's applications increasingly demand that systems work together seamlessly. We enable this system-wide collaboration with our work on interoperability, creating a common "language" that allows models developed independently and deployed across different entities to work together seamlessly. This transforms AI from a collection of standalone solutions into a connected ecosystem that performs better, more reliably, and more efficiently.
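A concrete, if simplified, picture of such a common "language": an adapter that maps each vendor's detector output into one shared schema so downstream systems can consume either. The field names on both the vendor and common sides are hypothetical; a real interchange format would be standardized rather than hand-written.

```python
def to_common_format(detection: dict, source: str) -> dict:
    """Map vendor-specific detector outputs into one shared schema --
    a hand-written stand-in for a standardized interchange format."""
    if source == "model_a":   # model A reports corner coordinates
        x1, y1, x2, y2 = detection["bbox"]
        return {"label": detection["cls"], "score": detection["conf"],
                "box_xywh": (x1, y1, x2 - x1, y2 - y1)}
    if source == "model_b":   # model B already reports x, y, w, h
        return {"label": detection["category"], "score": detection["p"],
                "box_xywh": tuple(detection["xywh"])}
    raise ValueError(f"unknown source: {source}")


a = to_common_format({"cls": "car", "conf": 0.9, "bbox": (10, 10, 50, 30)},
                     "model_a")
b = to_common_format({"category": "car", "p": 0.8, "xywh": [10, 10, 40, 20]},
                     "model_b")
```

Once both detections land in the same schema, systems built by different entities can compare, fuse, or hand off results without bespoke glue for every pairing.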

Context-Aware Video Streaming

Autonomous systems with strict limits on size, weight, power, and cost (SWaP-C) rely on streams optimized for maximum AI inference accuracy, but they also need smooth, responsive video for humans—critical when an unexpected event occurs and teleoperators must take control. Our technology helps fulfill both needs by making the system context-aware, so it can re-optimize and pivot from machine-centric data integrity to human-centric perceptual quality in real time.
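The pivot described above can be reduced to a context-driven policy: under normal autonomy the stream is tuned for machine accuracy, and the moment a teleoperator takes over (or an anomaly is flagged) it switches to a low-latency, perceptually tuned profile. The profile fields and latency budgets below are illustrative assumptions.

```python
def choose_profile(teleoperator_active: bool, anomaly_detected: bool) -> dict:
    """Pick the streaming objective from the current operating context.
    Profile contents are illustrative, not actual product parameters."""
    if teleoperator_active or anomaly_detected:
        # Human takeover: prioritize low latency and perceptual quality.
        return {"objective": "human_perceptual", "max_latency_ms": 100,
                "rate_control": "low_delay"}
    # Normal autonomy: preserve the features inference depends on, even
    # at the cost of how the video looks to a person.
    return {"objective": "machine_accuracy", "max_latency_ms": 500,
            "rate_control": "feature_preserving"}


normal = choose_profile(teleoperator_active=False, anomaly_detected=False)
takeover = choose_profile(teleoperator_active=True, anomaly_detected=False)
```

Keeping the switch cheap and immediate is what makes the same stream serve both consumers safely.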

Feature Coding for Machines (FCM)

Thus far, compression efforts have focused on preserving high-quality videos for human vision rather than preserving video features critical for machine perception, but analysis suggests that machines will be the primary consumers of video content in the future. We are exploring how to preserve the information machines need for tasks like object detection, scene analysis, and autonomous navigation while remaining compact and efficient enough for practical use in robotics, transportation, and video analytics.
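To see why coding features instead of pixels pays off, consider this toy sketch: a feature vector is quantized to 8 bits per value and losslessly compressed, and the payload comes out orders of magnitude smaller than a raw video frame. This simple quantize-and-deflate scheme is a stand-in for a real feature-coding pipeline, and the frame and feature sizes are illustrative.

```python
import struct
import zlib


def encode_features(features: list) -> bytes:
    """Quantize a feature vector to 8 bits per value and entropy-code it --
    a toy stand-in for a feature-coding pipeline."""
    lo, hi = min(features), max(features)
    scale = (hi - lo) or 1.0
    quantized = bytes(int(255 * (f - lo) / scale) for f in features)
    header = struct.pack("ff", lo, scale)  # dequantization parameters
    return header + zlib.compress(quantized)


# A raw 1080p RGB frame is ~6.2 MB; a detection model may only need a few
# thousand feature values (sizes are illustrative).
raw_frame_bytes = 1920 * 1080 * 3
features = [0.01 * (i % 100) for i in range(4096)]
payload = encode_features(features)
```

Shipping kilobytes of task-relevant features instead of megabytes of pixels is what makes machine-first video analytics practical on constrained links.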

Custom AI Tools

Alongside work in standards and emerging technologies, InterDigital's AI Lab also applies AI expertise to the company itself. We evaluate existing platforms and build tailored tools designed to accelerate our research processes, improve productivity, and enable faster innovation. These internal projects also offer a proving ground to test new AI capabilities in real-world workflows before applying them to broader wireless and video research.

Where We Are Leading AI Research

The AI Lab's primary standards contributions go to the Moving Picture Experts Group (MPEG), including:

  • MPEG-FCM (Feature Coding for Machines)
  • MPEG-NNC (Neural Network Compression)
  • MPEG-VCM (Video Coding for Machines)
  • MPEG-I SD (Scene Description)
  • MPEG-3DGS (3D Gaussian Splats)

Our open-source software, CompressAI-Vision, has been adopted as an evaluation tool for MPEG-FCM.

Through 3GPP and the Joint Video Experts Team (JVET), run jointly by ISO/IEC and ITU-T, we contribute neural network-based video coding expertise, including to JVET AhG 11 and the JVET neural-network post-filter characteristics (NNPFC) SEI message.

We also drive the adoption of AI within wireless and video standards through leadership roles in MPEG, 3GPP, JVET, and IEEE.
