As AI moves beyond software, a new category — physical AI — is beginning to take shape. Unlike generative or agentic AI that operates in digital environments, physical AI embeds intelligence into robots, drones, vehicles and industrial systems, enabling them to perceive, reason and act autonomously in the physical world.
“GenAI and agentic AI systems operate in software environments to reason, generate content, and orchestrate workflows. Physical AI extends that intelligence into the real world. As models improve and edge compute becomes cheaper, we are moving from ‘AI that advises’ to ‘AI that acts’. This transition unlocks value across logistics, manufacturing, defence, agriculture and mobility,” Medha Kannapally, Associate, Endiya Partners, explained.
The firm has invested in Perceptyne Robotics, which is building AI-driven, dual-arm dexterous semi-humanoid robots that combine perception, control and adaptive intelligence, enabling factories to automate high-precision, high-variability tasks.
Traditional automation
Mrutyunjaya Nadiminti, Co-Founder, CBO & Co-CTO of Perceptyne Robots, said that unlike traditional automation — which relies on pre-programmed instructions and fixed environments — the company’s approach enables robots to adapt to real-world changes using data-driven learning, real-time sensory feedback, and closed-loop control.
Perceptyne’s physical AI automates dexterous, human-like manufacturing tasks, allowing robots to perform complex workflows such as varied pick-and-place, precision assembly, and fixture-free part handling, along with packaging, inspection, and testing. By combining vision, force and tactile sensing with AI-led control, the systems adjust to real-world variability rather than operating in rigid, predefined setups.
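The difference between pre-programmed automation and the closed-loop, feedback-driven approach described above can be sketched in a few lines. This is purely illustrative (the function names and the simulated force sensor are invented for this example, not Perceptyne's actual control stack): an open-loop placement replays fixed waypoints, while a closed-loop one keeps correcting from sensed contact force.

```python
# Illustrative contrast between pre-programmed and closed-loop placement.
# All names are hypothetical; this is not any vendor's real control code.

def open_loop_place(target_z: float, steps: int = 10) -> list[float]:
    """Pre-programmed descent: emits fixed waypoints, blind to the world."""
    return [target_z * (i + 1) / steps for i in range(steps)]

def closed_loop_place(read_force, target_force: float,
                      z: float = 0.0, gain: float = 0.001,
                      max_steps: int = 1000) -> float:
    """Descend until the sensed contact force reaches the target.

    `read_force(z)` stands in for a real force/torque sensor; the loop
    adapts to wherever the part actually is, not where it was expected.
    """
    for _ in range(max_steps):
        error = target_force - read_force(z)
        if abs(error) < 0.01:      # contact force within tolerance
            break
        z += gain * error          # proportional correction from feedback
    return z

# Simulated world: the part surface sits at z = 2.0 (unknown to the robot);
# contact force rises linearly once the tool presses past the surface.
surface = 2.0
force_at = lambda z: max(0.0, (z - surface) * 100.0)

z_final = closed_loop_place(force_at, target_force=5.0)   # settles near 2.05
```

The open-loop version would press to the same depth whether the part sat at 1.9 or 2.1; the closed-loop version finds the surface wherever it is, which is the adaptivity the quote describes.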
Physical AI can also be used to train drones to access landslide-hit zones, flood-affected areas, and remote terrain where conventional mobility is disrupted.
Gaurav Achha, Co-Founder & Co-CEO of BonV Aero, shared that its drones fly fully autonomous missions. Once mission coordinates are fed into the system, the UAV independently navigates mountainous terrain, manages altitude profiles above 13,000 ft AMSL, and covers distances of up to 5 km one way without manual intervention. This requires real-time sensor fusion, terrain awareness, adaptive flight control, propulsion efficiency management, and fail-safe redundancy — all hallmarks of physical AI.
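Two of the building blocks named above, sensor fusion and fail-safe redundancy, reduce to simple ideas. The sketch below is hypothetical (the weights, thresholds, and function names are invented for illustration, not BonV Aero's flight code): fusion blends two imperfect altitude sources and degrades gracefully when one drops out, while the fail-safe policy lets the aircraft decide its own recovery action.

```python
# Hypothetical sketches of sensor fusion and fail-safe logic for a UAV.
# Weights and thresholds are illustrative, not real flight parameters.

def fuse_altitude(baro_m: float, gps_m: float, gps_ok: bool,
                  gps_weight: float = 0.3) -> float:
    """Blend barometric and GPS altitude; fall back to baro if GPS drops."""
    if not gps_ok:
        return baro_m                      # redundancy: degrade gracefully
    return (1 - gps_weight) * baro_m + gps_weight * gps_m

def failsafe_action(battery_pct: float, link_ok: bool) -> str:
    """Minimal fail-safe policy: what the UAV does on its own."""
    if battery_pct < 15:
        return "land_now"                  # not enough energy to fly home
    if not link_ok:
        return "return_to_launch"          # telemetry link lost
    return "continue_mission"

alt = fuse_altitude(baro_m=4010.0, gps_m=4000.0, gps_ok=True)   # ~4007 m
```

Real flight stacks use far richer estimators (Kalman filters over many sensors), but the principle is the same: no single sensor or link is trusted alone.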
Digital decisions
“Where gen AI creates information, and agentic AI executes digital decisions, physical AI executes physical outcomes. Our physical AI is designed to perform mission-critical tasks that demand real-time decision-making in harsh, unpredictable environments, especially where human intervention is limited or impractical,” he said.
Meanwhile, AI-powered fleet safety and performance solutions provider Netradyne deploys physical AI directly on commercial vehicles. Its system uses on-device cameras and sensors to continuously capture real-world environments, fused perception pipelines that combine visual and sensor data into real-time situational awareness, and on-device reasoning with millisecond-level risk estimation.
“Our vision-based physical AI acts as an active safety partner inside the vehicle. It processes camera and sensor data in real time, continuously monitors driving conditions, understands risk, and provides immediate in-cab alerts to prevent accidents. Physical AI must work within milliseconds because it is directly connected to real-world safety,” Teja Gudena, Executive Vice-President, Engineering, at Netradyne, shared.
Instead of analysing single frames in isolation, Netradyne’s system maintains a persistent world model at the edge, allowing the AI to reason over time, detect objects, and interpret evolving risk and behavioural intent as situations unfold.
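What reasoning over a persistent model rather than single frames buys can be seen in a toy example. The code below is entirely illustrative (the `Track` class, thresholds, and geometry are invented, not Netradyne's pipeline): an object's track persists across frames, so its closing speed and time-to-contact become computable, something no single image can reveal.

```python
# Illustrative sketch of frame-to-frame track persistence for risk
# estimation. All names and numbers are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Track:
    x: float                      # lateral offset of the object (m)
    dist: float                   # distance to our vehicle (m)
    history: list = field(default_factory=list)

    def update(self, x: float, dist: float) -> None:
        self.history.append(self.dist)
        self.x, self.dist = x, dist

    def closing_speed(self, dt: float) -> float:
        """m/s at which the object is closing on us (needs >= 2 frames)."""
        if not self.history:
            return 0.0
        return (self.history[-1] - self.dist) / dt

def risk_alert(track: Track, dt: float, horizon_s: float = 2.0) -> bool:
    """Alert when estimated time-to-contact falls under the horizon."""
    v = track.closing_speed(dt)
    return v > 0 and track.dist / v < horizon_s

# Three frames of the same detected vehicle, 0.1 s apart:
t = Track(x=0.5, dist=20.0)
t.update(0.5, 18.5)
t.update(0.5, 17.0)          # closing at 15 m/s -> contact in ~1.1 s
alert = risk_alert(t, dt=0.1)   # True
```

A single frame at 17 m away looks identical whether the vehicle ahead is stationary or braking hard; only the persisted history distinguishes the two, which is the point of the world-model approach described above.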
Autonomous mobility
Meanwhile, Addverb, an industrial robotics and warehouse automation player, has systems built to work in changing environments. Physical AI allows robots to handle tasks that include autonomous mobility, material handling, adaptive manipulation, inspection processes, and the provision of assistance to human operators in warehouses and manufacturing plants.
The company is building platforms that can serve several functions as workflows change, instead of designing robots with a single purpose. This enables industries to scale automation without having to repeatedly redesign hardware.
“Our system relies on multimodal sensing combined with AI-driven interpretation. It integrates vision, spatial sensing, motion awareness, and force feedback to build a real-time understanding of its surroundings. This data is processed through a vision-language-action framework that connects what the robot sees, what it is instructed to do, and how it executes movement. The continuous perception–action loop enables the robot to interpret context, adapt to changes in objects or human movement, and operate safely,” Bir Singh, Co-Founder, Addverb, said.
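The loop Singh describes — sense, connect observation to instruction, act — has a simple schematic shape. In the sketch below, a stub function stands in for the vision-language-action model; every name here is invented for illustration and Addverb's framework is not public code.

```python
# Schematic perception-action loop with a stubbed "policy" standing in
# for a vision-language-action model. All names are hypothetical.

def perceive(world: dict) -> dict:
    """Fuse sensing modalities into one observation of the surroundings."""
    return {"sees": world["objects"],
            "human_near": world["human_dist_m"] < 1.0}

def policy(observation: dict, instruction: str) -> str:
    """Toy stand-in for a VLA model mapping (observation, text) -> action."""
    if observation["human_near"]:
        return "slow_and_yield"          # safety overrides the task
    target = instruction.split()[-1]     # "pick up the box" -> "box"
    return f"grasp:{target}" if target in observation["sees"] else "search"

def step(world: dict, instruction: str) -> str:
    """One pass of the continuous perception-action loop."""
    return policy(perceive(world), instruction)

world = {"objects": ["box", "tote"], "human_dist_m": 3.0}
action = step(world, "pick up the box")   # -> "grasp:box"
```

The same loop with a human a half-metre away yields "slow_and_yield" instead of a grasp, which is the context-dependent safety behaviour the quote highlights.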
While India is not yet at the frontier of advanced robotics platforms the way the US or China is, physical AI may be one of the first deep-tech cycles in which India can meaningfully compete, Kannapally noted.
“There are three reasons: software depth, cost innovation, and domestic problem density. Physical AI is still early globally, which gives India a credible opportunity to build category leaders, particularly in domain-specific applications. We may not lead in humanoid robotics tomorrow, but in industrial drones, logistics autonomy, and edge robotics platforms, we absolutely can.”
Published on March 1, 2026
