The rise of agentic AI in a world of physical devices

For decades, machine-to-machine (M2M)
communication has quietly powered the infrastructure of our digital world. From
automated meter readings to fleet management systems, these interactions have
been primarily rule-based: one machine detects a status and sends a message to
another, often triggering a predetermined response. These systems have been
foundational in logistics, utilities, and industrial automation, delivering
speed and consistency.

But what happens when the machines
involved are no longer just communicating—they are thinking, deciding, and
negotiating? As digital complexity scales, static scripts and centralized
control architectures often fall short. Enter agentic AI.

In the era of agentic AI, software agents
embedded in physical devices can pursue goals, learn from outcomes, and
interact with each other with increasing autonomy. These agents are capable of
interpreting context, adjusting behavior dynamically, and prioritizing
objectives. The shift from M2M to agent-to-agent (A2A, or more specifically
with hardware, MA2MA) communication represents a fundamental evolution in how
machines operate in the real world—less like code execution, more like
conversation and collaboration.

Agentic AI vs. Generative AI

To set a baseline, let’s remember that agentic
AI is quite different from the generative AI tools that have dominated recent
headlines. Generative AI (like ChatGPT and DALL-E) creates new content from
massive training sets spanning many kinds of data. That data is frequently
stale, with even the best tools still relying on training data that is more
than a year old. Retrieval-augmented generation (RAG) is getting better,
allowing generative AI tools to pull in more recent data from the web, but
these tools remain better suited to research and content creation than to
real-time engagement. Generative AI excels at pattern recognition, synthesis,
and expression, from writing stories to generating business plans or producing
realistic audio and visuals.

Agentic AI, on the other hand, is about
action. AI agents are trained on extremely narrow, deep, domain-specific data
sets and are often tied to real-time data sources such as IoT data streams and
dynamic database APIs. An agentic system senses its environment, makes
autonomous decisions, adapts its behavior, and works toward a goal. Where
generative AI outputs content, agentic AI outputs decisions and actions.
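
To make this concrete, here is a minimal sketch, in Python, of the sense-decide-act loop at the heart of an agentic system. The delivery-robot example, class names, and thresholds are illustrative assumptions, not any particular product’s API.

    import random

    class DeliveryAgent:
        """Toy agent that pursues a goal (finish deliveries) while adapting
        to what it senses (battery level), rather than running a fixed script."""

        def __init__(self, deliveries=3):
            self.deliveries_left = deliveries
            self.battery = 1.0

        def sense(self):
            # Stand-in for real telemetry from the device.
            return {"battery": self.battery, "deliveries_left": self.deliveries_left}

        def decide(self, state):
            # Autonomous choice between competing objectives.
            if state["battery"] < 0.25:
                return "recharge"
            return "deliver" if state["deliveries_left"] > 0 else "idle"

        def act(self, action):
            if action == "deliver":
                self.deliveries_left -= 1
                self.battery -= random.uniform(0.2, 0.4)   # delivering drains the battery
            elif action == "recharge":
                self.battery = 1.0
            print(f"action={action} battery={self.battery:.2f} left={self.deliveries_left}")

    agent = DeliveryAgent()
    for _ in range(6):
        agent.act(agent.decide(agent.sense()))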

Let’s look at some examples of how
agentic AI can be embedded into physical systems.

Industrial
automation: Imagine a warehouse robot that not only
picks products but decides when to recharge, avoids high-traffic zones based on
real-time updates, and coordinates with other robots to balance workload. If a
new shipment is delayed, the agents update their plan and prioritize other
tasks. This is not scripted automation—this is a hardware system with agency.
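
Here is a rough sketch of how that kind of coordination could work in code, with each robot bidding on a task based on battery level and distance; the cost formula, recharge threshold, and fleet values are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Robot:
        name: str
        battery: float            # 0.0 to 1.0
        distance_to_task: float   # meters

        def bid(self):
            # A robot that needs to recharge declines to bid (assumed threshold).
            if self.battery < 0.2:
                return None
            # Lower cost is better: nearby robots with healthy batteries win.
            return self.distance_to_task * (1.0 + (1.0 - self.battery))

    def assign_task(robots):
        bids = {}
        for robot in robots:
            cost = robot.bid()
            if cost is not None:
                bids[robot.name] = cost
        # Lowest bid wins; no bids means every robot is busy or charging.
        return min(bids, key=bids.get) if bids else None

    fleet = [Robot("R1", 0.15, 5.0), Robot("R2", 0.80, 12.0), Robot("R3", 0.60, 8.0)]
    print("task assigned to:", assign_task(fleet))   # R3 wins; R1 sits out to recharge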

Smart
energy: Consider a smart HVAC unit that doesn’t just
respond to a thermostat, but negotiates energy use with other appliances in a
home based on real-time electricity prices, personal preferences, and weather
forecasts. If a major storm is forecast, the HVAC might collaborate with a
solar battery system to store extra power in advance.
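
A hedged sketch of what that negotiation might reduce to in code; the price threshold, battery levels, and message fields are assumptions rather than a real appliance protocol.

    def hvac_plan(price_per_kwh, storm_forecast, battery_level):
        """Decide what the HVAC agent asks of the home battery agent.
        All thresholds below are illustrative assumptions."""
        if storm_forecast and battery_level < 0.9:
            return {"to": "battery", "request": "pre_charge", "reason": "storm_forecast"}
        if price_per_kwh > 0.30:
            return {"to": "battery", "request": "discharge_to_home", "reason": "peak_price"}
        return {"to": "battery", "request": "hold", "reason": "normal_operation"}

    print(hvac_plan(price_per_kwh=0.12, storm_forecast=True, battery_level=0.55))
    # -> asks the battery to pre-charge ahead of the storm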

Supply chain logistics: In supply chain management, agentic
systems can negotiate pricing and timing with each other across companies’
platforms. An AI-driven shipping container may decide to reroute itself if it
detects a bottleneck at the originally planned port. And once in port,
container cranes, autonomous trucks, and dock scheduling software can all
operate as AI agents. When a ship docks early, agents communicate in real time
to shuffle unloading schedules, reroute trucks, and reduce idle time. Each
agent understands both its local constraints and the broader system goals. The
result is reduced fuel usage, higher throughput, and increased resilience
against last-minute changes.
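
As a sketch under simplifying assumptions, the container’s reroute decision can be pictured as a comparison of expected delays; the port names and hour estimates below are invented.

    def choose_port(options):
        """Pick the port with the lowest expected total delay, in hours.
        Each option carries an estimated congestion delay plus extra sailing time."""
        return min(options, key=lambda p: p["congestion_hours"] + p["extra_sailing_hours"])

    options = [
        {"port": "Port A (planned)", "congestion_hours": 36, "extra_sailing_hours": 0},
        {"port": "Port B",           "congestion_hours": 4,  "extra_sailing_hours": 18},
    ]
    print("reroute decision:", choose_port(options)["port"])   # Port B: 22h beats 36h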

Agriculture: In precision agriculture, drone fleets equipped with agentic software
can work collaboratively. One drone might detect high weed density and alert
others to increase pesticide application in that area. Meanwhile, soil sensors
negotiate irrigation adjustments with the drones based on moisture levels and
upcoming weather. This eliminates the need for constant human oversight,
allowing farmers to focus on broader resource planning.
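
One way to picture that collaboration is as agents publishing alerts on a shared channel that others subscribe to. The sketch below is a minimal publish/subscribe bus; the topic names and payloads are assumptions, not a real agricultural standard.

    from collections import defaultdict

    class Channel:
        """A tiny publish/subscribe bus standing in for the farm's radio network."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self.subscribers[topic]:
                handler(message)

    bus = Channel()

    # Other drones react to a weed-density alert by adjusting their own plans.
    bus.subscribe("weed_alert", lambda m: print(f"drone-2: increasing spray rate in {m['zone']}"))
    # The irrigation controller reacts to soil-moisture reports.
    bus.subscribe("soil_moisture", lambda m: print(f"irrigation: watering {m['zone']} for {m['minutes']} min"))

    bus.publish("weed_alert", {"zone": "field-7", "density": "high"})
    bus.publish("soil_moisture", {"zone": "field-7", "minutes": 12})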

The shift to agent-to-agent communication

With agentic AI embedded in devices,
communication becomes semantic and context-driven. Agents aren’t just
exchanging sensor data; they’re negotiating plans, adapting priorities, and
collaborating across domains.
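
The difference shows up in the messages themselves: instead of a bare sensor value, an agent-to-agent message can carry intent, constraints, and priority. A sketch of what such a payload might contain follows; the field names are assumptions, not an established standard.

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class AgentMessage:
        sender: str
        intent: str                       # what the agent wants, not just what it measured
        constraints: dict = field(default_factory=dict)
        priority: int = 5                 # 1 (low) to 10 (urgent), an assumed scale
        context: dict = field(default_factory=dict)

    msg = AgentMessage(
        sender="hvac-agent",
        intent="reserve_energy",
        constraints={"kwh": 4.0, "by": "2025-06-01T17:00"},
        priority=7,
        context={"reason": "storm_forecast"},
    )
    print(json.dumps(asdict(msg), indent=2))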

It is important to note that AI agents
are ideal for edge applications. In the agriculture example above, the soil
sensors do not need the processing power or long-range connectivity to analyze
weather reports directly. They can carry just enough intelligence to monitor
local soil conditions and water based on those isolated measurements in the
absence of other data. But when a drone comes by to share additional
intelligence, the agent can shift from its original rule-based approach to a
smarter, system-level decision, all without a heavy demand on processing or
connectivity, which keeps the device simple and low cost.
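
That pattern, local rules by default and system-level decisions when richer context arrives, is easy to sketch; the moisture thresholds and the drone payload below are illustrative assumptions.

    def irrigation_decision(soil_moisture, drone_context=None):
        """Edge agent on a low-cost soil sensor.
        With no outside data it falls back to a simple local rule;
        when a drone shares context, it makes a system-level decision."""
        if drone_context is None:
            # Isolated, rule-based fallback.
            return "water" if soil_moisture < 0.30 else "hold"

        # Richer decision using shared intelligence.
        if drone_context.get("rain_expected_mm", 0) > 5:
            return "hold"                      # incoming rain will do the watering
        if drone_context.get("weed_density") == "high":
            return "water_lightly"             # assumed policy: avoid boosting weed growth
        return "water" if soil_moisture < 0.30 else "hold"

    print(irrigation_decision(0.22))                                # -> water
    print(irrigation_decision(0.22, {"rain_expected_mm": 12}))      # -> hold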

So how are hardware embedded AI agents
actually deployed today?

Real-world applications and implications

Manufacturing: Self-healing production lines

In smart factories, equipment embedded
with agentic AI can detect potential failure before it happens and autonomously
shift workflows to alternate machines. The goal isn’t just predictive
maintenance—it’s resilient operations where machines actively collaborate to
keep the line running. The biggest cost to a manufacturing facility is
downtime. Predictive maintenance was the best that old M2M techniques could
achieve. Embedded AI agents take us to the next level. Human operators can
supervise dozens of processes without needing to intervene in most issues.
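
A rough sketch of the rerouting step, assuming each machine’s agent publishes a failure probability from its own predictive model; the machine names, risk numbers, and threshold are invented.

    def reroute_jobs(machines, risk_threshold=0.7):
        """Move queued jobs off any machine whose predicted failure risk is too high.
        Jobs go to the healthy machine with the shortest queue."""
        healthy = [m for m in machines if m["failure_risk"] < risk_threshold]
        for m in machines:
            if m["failure_risk"] >= risk_threshold and healthy:
                target = min(healthy, key=lambda h: len(h["queue"]))
                target["queue"].extend(m["queue"])
                m["queue"] = []
                print(f"{m['name']}: jobs shifted to {target['name']} before failure")
        return machines

    line = [
        {"name": "press-1", "failure_risk": 0.85, "queue": ["job-7", "job-8"]},
        {"name": "press-2", "failure_risk": 0.10, "queue": ["job-9"]},
        {"name": "press-3", "failure_risk": 0.20, "queue": []},
    ]
    reroute_jobs(line)   # press-1's jobs move to press-3 (shortest healthy queue)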

Healthcare: Patient-centric agents

Wearable monitors like continuous
glucose sensors are being paired with insulin pumps that can automatically
adjust dosing. But more importantly, agentic systems can now integrate exercise
data, diet patterns, and patient behavior to make dynamic care adjustments. In
a recent clinical trial, patients using agentic closed-loop systems saw a more
than 11% improvement in glycemic control (Time in Range, the share of time
patients kept blood sugar at the proper level) over those using manual
devices. And as an unexpected side benefit, patients saw an average 3.3 lb
weight loss over the first month of AI-automated support. As healthcare shifts
toward personalized models, agents will play a crucial role in dynamic therapy
and diagnostics.
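
As a purely illustrative sketch, and emphatically not a medical algorithm, a closed-loop controller is at its core a bounded feedback rule: read the sensor, nudge the dose toward a target, and never step outside hard safety limits.

    def adjust_basal_rate(glucose_mg_dl, current_rate, target=110,
                          gain=0.005, min_rate=0.0, max_rate=2.0):
        """Toy proportional controller for illustration only.
        Real systems use clinically validated algorithms and multiple safety layers."""
        error = glucose_mg_dl - target
        proposed = current_rate + gain * error
        return max(min_rate, min(max_rate, proposed))   # clamp to hard safety bounds

    print(adjust_basal_rate(180, current_rate=0.8))   # rises modestly
    print(adjust_basal_rate(70,  current_rate=0.8))   # backs off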

Urban Infrastructure: Smart streets that adapt

In a pilot program in Helsinki, traffic
lights, electric buses, and street cameras operate as agents on a shared
protocol. If pedestrian density increases in one area, traffic signals
coordinate to prioritize foot traffic, while buses adjust routes to alleviate
congestion. During emergencies, agentic traffic systems can create
rapid-response corridors for first responders without requiring centralized
override.
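
A simplified sketch of signal-priority logic of this kind; the density threshold and timing values are assumptions, not the Helsinki pilot’s actual parameters.

    def next_signal_plan(pedestrian_density, emergency_corridor=False,
                         base_walk_s=20, base_green_s=40):
        """Return the next cycle's phase lengths in seconds for one intersection agent."""
        if emergency_corridor:
            # Clear a rapid-response corridor: hold green on the corridor axis.
            return {"walk_s": 0, "green_s": 90, "note": "emergency corridor"}
        if pedestrian_density > 0.6:        # assumed people-per-square-metre threshold
            # Prioritize foot traffic and tell bus agents to expect delay.
            return {"walk_s": base_walk_s + 15, "green_s": base_green_s - 10,
                    "notify": ["bus-agents"]}
        return {"walk_s": base_walk_s, "green_s": base_green_s}

    print(next_signal_plan(pedestrian_density=0.8))
    print(next_signal_plan(pedestrian_density=0.2, emergency_corridor=True))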

Energy: Autonomous microgrids

In emerging microgrid projects, smart
homes with solar panels and batteries act as agents that trade electricity with
neighbors or back into the grid. During peak hours, homes can reduce load
collectively. When power lines go down, these homes can isolate and operate in
peer-to-peer mode, autonomously maintaining power within the community.
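
In sketch form, each home agent might run a decision like the one below every few minutes, choosing whether to sell surplus, buy from a neighbor, or island itself when the grid goes down; the prices and thresholds are illustrative assumptions.

    def home_energy_action(solar_kw, load_kw, battery_soc, grid_up, neighbor_price):
        """One decision tick for a home microgrid agent. All thresholds are assumed."""
        surplus = solar_kw - load_kw
        if not grid_up:
            return {"mode": "island", "action": "serve_local_load_from_battery"}
        if surplus > 0.5 and battery_soc > 0.8:
            return {"mode": "grid", "action": "sell_to_neighbor", "kw": round(surplus, 2)}
        if surplus < -0.5 and neighbor_price < 0.15:    # cheaper than assumed utility rate
            return {"mode": "grid", "action": "buy_from_neighbor", "kw": round(-surplus, 2)}
        return {"mode": "grid", "action": "hold"}

    print(home_energy_action(solar_kw=4.2, load_kw=1.8, battery_soc=0.9,
                             grid_up=True, neighbor_price=0.12))
    print(home_energy_action(solar_kw=0.0, load_kw=2.0, battery_soc=0.6,
                             grid_up=False, neighbor_price=0.12))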

Are we ready to trust agentic AI hardware?

This MA2MA future is not without hurdles.
Interoperability between different manufacturers’ agents remains a significant
technical challenge. Without shared ontologies and communication protocols,
agents may talk past each other—or worse, make conflicting decisions. The
development of universal agent languages and agent-to-agent APIs is a growing
area of focus.
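
One practical piece of that work is simply agreeing on what a valid message looks like. The sketch below checks an incoming message against a minimal shared schema and refuses to act on anything it cannot interpret; the required fields and intents echo the message sketch earlier in this piece but are assumptions rather than any published standard.

    REQUIRED_FIELDS = {"sender": str, "intent": str, "priority": int}
    KNOWN_INTENTS = {"reserve_energy", "release_energy", "report_status"}

    def validate(message):
        """Return (ok, reason). An agent should ignore or query back on invalid
        messages rather than guess, to avoid the 'talking past each other' failure."""
        for field_name, field_type in REQUIRED_FIELDS.items():
            if not isinstance(message.get(field_name), field_type):
                return False, f"missing or mistyped field: {field_name}"
        if message["intent"] not in KNOWN_INTENTS:
            return False, f"unknown intent: {message['intent']}"
        return True, "ok"

    print(validate({"sender": "hvac-agent", "intent": "reserve_energy", "priority": 7}))
    print(validate({"sender": "vendor-x-bot", "intent": "optimize_stuff", "priority": 3}))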

Security is another concern. With agents
making autonomous decisions, a compromised agent could have outsized influence.
Who certifies the behavior of agents? How do we define acceptable ranges of
action? Can we detect if an agent is acting maliciously or incorrectly before
damage is done?

Ethically, the move toward machine agency
forces us to revisit accountability. If a self-driving delivery bot reroutes to
avoid danger and causes a delay, who is responsible? The designer? The owner?
The agent? These questions will require updates to legal and insurance
frameworks.

There is also the question of unintended
consequences. Agents that are rewarded for efficiency might ignore
human-centric considerations like fairness, accessibility, or long-term risk
unless explicitly coded to account for them. I think this may be the biggest
risk, as we look towards the future.

The “industry of business” has always
been predisposed towards maximizing profit. And the typical way that emerging
technologies enter industry is first through creation of efficiency gains. If
we program agents with heavy algorithmic weighting towards efficiency, we miss
the opportunity for other kinds of value creation to emerge.

How should the industry proceed?

As agentic AI becomes increasingly common
in physical devices, we must consider how to shape this future responsibly. A
few key focus areas include:

● Standardization: Initiatives like the IEEE
P7000 series are beginning to define ethical and functional standards for
autonomous systems. These frameworks help designers embed values into their
agents early in the development process. [Aside
– I will take a deep dive into this series of standards in next week’s article].

● Policy: Local governments and national
regulators will need frameworks that treat agentic systems as semi-autonomous
actors. In many ways, the policy discussion will mirror the evolution of cyber
policy—just with more unpredictable actors.

● Design: Entrepreneurs and engineers must think
not just about functionality, but about negotiation, cooperation, and alignment
of values across agent networks. Design tools must evolve to allow simulation
of agentic interactions before deployment. There will be a huge intersection
here with digital twin technologies, and areas that are further ahead in
developing digital twin models (manufacturing, smart cities) may have an
early-mover advantage.

● Education: A new generation of technologists
must be trained not just in machine learning, but in multi-agent coordination,
ethics, and socio-technical systems. This is a huge risk area. We have never
seen the tech sector proactively consider adding behavioral scientists,
anthropologists, or other experts in the humanities to their design teams. The
closest hires are UI/UX (user interface / user experience) experts, who tend to
focus on the efficiency metrics I described above. As we design technology
tools that are meant to make decisions like humans, we must have experts in
human behavior on the early product team.

The transition from machines that follow
commands to machines that form strategies represents a profound shift in how we
interact with technology. This new world of agentic AI won’t just automate—it
will negotiate, adapt, and in many cases, surprise us.

As we venture further into this world, we
must prepare for new forms of digital negotiation, cooperation, and even
competition. And in the spirit of agentic AI, the question isn’t just what
machines will do for us. It’s what they will choose to do—with us, and with
each other.

 
