ZDNET
Increasingly, we hear about AI agents being the new “digital workers” — a concept that arose in areas such as robotic process automation (RPA) before agentic or generative AI hit the mainstream. Digital workers are designed to serve with discipline and obedience, but just like human workers, they, too, have their quirks.
Also: 15 ways AI saved me time at work in 2024 – and how I plan to use it in 2025
The movement toward a digital workforce has been taking big leaps lately, marked recently by Salesforce’s unveiling of Agentforce 2.0, a digital labor platform for enterprises. The platform enables “a limitless workforce through AI agents for any department, assembled using a new library of pre-built skills, and that can take action across any system or workflow.” The platform also takes steps well beyond RPA, featuring “enhanced reasoning and data retrieval to deliver precise answers and orchestrate actions in response to complex, multi-step questions,” according to a statement from Salesforce. The agents even interact in Slack.
Augmenting teams with digital labor
Major organizations are leveraging the platform to augment their teams with digital labor, the vendor added.
Talent is scarce and expensive to train, so organizations are turning to AI to help with customer interactions and deal with workflow backlogs, but can no longer afford “inadequate solutions that provide generic responses,” Salesforce stated. “Existing solutions such as copilots struggle to provide accurate, trusted responses to complex requests — such as personalized guidance on a job application. They cannot take action on their own — like nurturing a lead with product recommendations.”
Autonomous digital workers can now perform such work at many levels, industry leaders agree. “The convergence of skilled innovators, rapidly deployable cloud tools, customer awareness and executive support has created an ideal environment for agentic AI to thrive in 2025,” Chris Bennett, director of AI transparency and education at Motorola Solutions, told ZDNET.
For example, Motorola Solutions has begun leveraging agentic AI “to improve public safety and enterprise security, with applications that analyze and surface data in real-time to provide crucial, immediate support to first responders and security personnel,” Bennett stated. “AI agents never get bored, tired, or distracted, automating repetitive tasks and freeing responders for critical responsibilities and community engagement. AI agents can accelerate tasks like reviewing historical video footage, helping investigators quickly find missing persons through natural language search.”
This works via AI agents intuiting processes to “create a series of steps, or a recipe to solve a problem,” said Viswesh Ananthakrishnan, co-founder and vice president of Aurascape. They can also “take actions to execute these steps and even collaborate with other agents to do so. When combined together, this data gives the agents a view of how the enterprise functions.”
Also: OpenAI’s o3 isn’t AGI yet but it just did something no other AI has done
The AI agents then “develop and execute complex processes, like viewing demand forecasts and taking proactive action to generate and submit order forms for more inventory before supplies run low,” he continued. “This type of automation saves workers significant time and frees them up from repetitive tasks.”
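The plan-and-execute pattern Ananthakrishnan describes — an agent decomposing a goal into a “recipe” of steps, then executing them — can be sketched roughly as follows. This is a minimal illustration only; the function names, thresholds, and figures are hypothetical and do not describe any vendor’s actual implementation.

```python
# Hypothetical sketch of a plan-and-execute agent loop for inventory restocking.
# All functions, thresholds, and data are illustrative, not a real vendor API.

def plan(goal: str) -> list[str]:
    # A real agent would ask an LLM to decompose the goal; here the
    # "recipe" is hard-coded to show the shape of the steps.
    return ["check_forecast", "check_stock", "submit_order"]

def execute(step: str, state: dict) -> dict:
    if step == "check_forecast":
        state["forecast"] = 120      # units expected to sell next period
    elif step == "check_stock":
        state["on_hand"] = 40        # units currently in the warehouse
    elif step == "submit_order":
        shortfall = state["forecast"] - state["on_hand"]
        if shortfall > 0:           # act proactively before supplies run low
            state["order"] = {"sku": "WIDGET-1", "qty": shortfall}
    return state

def run_agent(goal: str) -> dict:
    state: dict = {}
    for step in plan(goal):         # follow the recipe step by step
        state = execute(step, state)
    return state

result = run_agent("keep WIDGET-1 in stock")
print(result["order"])              # → {'sku': 'WIDGET-1', 'qty': 80}
```

In a production agent, each step would call out to real systems (a demand-forecasting service, an ERP), and the plan itself would be generated rather than hard-coded — which is exactly where the reliability questions discussed below come in.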
AI agents need to be thoughtfully managed
At the same time, AI agents need to be thoughtfully managed, just as is the case with human work, and there’s work to be done before an agentic AI-driven workforce can truly assume a broad range of tasks. “While the promise of agentic AI is evident, we are several years away from widespread agentic AI adoption at the enterprise level,” said Scott Beechuk, partner with Norwest Venture Partners. “Agents must be trustworthy given their potential role in automating mission-critical business processes.”
The traceability of AI agents’ actions is one issue. “Many tools have a hard time explaining how they arrived at their responses from users’ sensitive data, and models struggle to generalize beyond what they have learned,” said Ananthakrishnan.
Unpredictability is a related challenge, as LLMs “operate like black boxes,” said Beechuk. “It’s hard for users and engineers to know if the AI has successfully completed its task and if it did so correctly.” He also cautioned that AI agents remain unreliable: “In systems where AI creates its own steps to complete tasks, made-up details can lead to more errors as the task progresses, ultimately making the outputs unreliable.”
Also: Why ethics is becoming AI’s biggest challenge
Human workers can collaborate easily and routinely; for AI workers, it’s a different story. “Because agents will interact with multiple systems and data stores, achieving comprehensive visibility is no easy task,” said Ananthakrishnan. It’s important to have visibility to capture each action an agent takes. “This means deep visibility into activity on endpoint devices and the ability to process data in a vast variety of formats.” Then, it’s important to be able to “quickly combine this context from endpoints with network-level traffic to determine the data informing the agent’s actions,” as well as “recognize the type of AI agent interfacing with your data, whether it’s a trusted entity, or a brand-new agent.”
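The kind of visibility Ananthakrishnan describes — capturing each agent action, joining endpoint activity with network context, and checking whether the agent is a known entity — might look something like the audit-trail sketch below. Every name and field here is hypothetical, offered only to make the idea concrete.

```python
# Illustrative audit trail for agent actions: each record joins the endpoint
# event with network-level context and the agent's identity, so every action
# an agent takes can be traced. Field names and agent IDs are hypothetical.

from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []
TRUSTED_AGENTS = {"forecast-agent-v2"}   # allow-list of known agent identities

def record_action(agent_id: str, endpoint_event: dict, network_context: dict) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "trusted": agent_id in TRUSTED_AGENTS,   # known entity or brand-new agent?
        "endpoint": endpoint_event,              # what happened on the device
        "network": network_context,              # what data informed the action
    }
    AUDIT_LOG.append(entry)
    return entry

entry = record_action(
    "forecast-agent-v2",
    {"device": "wh-laptop-7", "action": "read_file", "path": "forecast.csv"},
    {"dest": "erp.internal", "bytes_sent": 2048},
)
print(entry["trusted"])   # True for an agent on the allow-list
```

A brand-new or unrecognized agent would land in the log with `trusted: False`, giving security teams a hook for review before its actions propagate further.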
The AI systems engineer
This may boost an emerging human-centered role — the AI systems engineer. “This new quality assurance and oversight role will become essential to enterprises as they manage and continuously optimize AI agents,” Beechuk said.
In multi-agent environments, “AI agents will be interacting and evolving constantly, consuming a steady diet of new data to perform their individual jobs,” he explained. “When one of them gets bad data — intentionally or unintentionally — and changes its behavior, it can start performing its job incorrectly or with less precision, even if it was doing it perfectly well just one day before. An error in one agent can then have a cascading effect that degrades the whole system. Enterprises will hire as many AI systems engineers as it takes to keep that from happening.”
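One way an AI systems engineer might catch the degradation Beechuk describes is to track each agent’s recent output quality and raise an alarm when it drifts, before errors cascade to other agents. The monitor below is a hypothetical sketch of that idea; the window size and threshold are arbitrary, not a description of any real tooling.

```python
# Hypothetical rolling-accuracy monitor: flag an agent whose recent
# precision drops below a threshold so bad data can't cascade unnoticed.

from collections import deque

class AgentMonitor:
    def __init__(self, window: int = 5, threshold: float = 0.8):
        self.results: deque = deque(maxlen=window)  # recent pass/fail checks
        self.threshold = threshold

    def record(self, passed: bool) -> None:
        self.results.append(passed)

    def degraded(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False            # not enough history yet: don't alarm
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.threshold

monitor = AgentMonitor()
for ok in [True, True, True, True, True]:
    monitor.record(ok)
print(monitor.degraded())   # False: agent performing its job well

# After ingesting bad data, quality drops and the monitor fires.
for ok in [False, False]:
    monitor.record(ok)
print(monitor.degraded())   # True: pull the agent before errors cascade
```

An agent that “was doing it perfectly well just one day before” trips the alarm as soon as its rolling accuracy falls under the threshold, which is the moment a human overseer would step in.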
Also: Generative AI is now a must-have tool for technology professionals
Companies and tech teams may be “well-positioned to support agentic AI, but we still need time and experience to strike the right balance between agentic and human workflows,” Bennett advised. “Our advice is to view AI as an augmentation to human experts, not a replacement.”