How Multi-Agent LLMs Can Enable AI Models to More Effectively Solve Complex Tasks

Most organizations today want to utilize large language models (LLMs), implementing proofs of concept and artificial intelligence (AI) agents to optimize costs within their business processes and deliver new and creative user experiences. However, the majority of these implementations are ‘one-offs,’ and as a result, businesses struggle to realize a return on investment (ROI) in many of these use cases.

Generative AI (GenAI) promises to go beyond co-pilot-style software. Rather than merely providing guidance and help to a subject matter expert (SME), these solutions could become the SME actors themselves, autonomously executing actions. For GenAI solutions to get to this point, organizations must provide them with additional knowledge and memory, the ability to plan and re-plan, and the ability to collaborate with other agents to perform actions.

While single models are suitable in some scenarios, acting as co-pilots, agentic architectures open the door for LLMs to become active components of business process automation. As such, enterprises should consider leveraging LLM-based multi-agent (LLM-MA) systems to streamline complex business processes and improve ROI.

What is an LLM-MA System?

In short, an LLM-MA system is a new paradigm in AI technology: an ecosystem of AI agents that work together cohesively, rather than as isolated entities, to solve complex challenges.

Complex decisions span a wide range of contexts, and, just as among humans, reliable decision-making requires specialization. LLM-MA systems build the same ‘collective intelligence’ that a group of humans enjoys: multiple specialized agents interact to achieve a common goal. In other words, just as a business brings together experts from various fields to solve one problem, so do LLM-MA systems.

Many business demands are too complex for a single LLM to handle alone. However, by distributing capabilities among specialized agents with unique skills and knowledge instead of having one LLM shoulder every burden, these agents can complete tasks more efficiently and effectively. Multi-agent LLMs can even ‘check’ each other’s work through cross-verification, cutting down on ‘hallucinations’ for maximum productivity and accuracy.
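The cross-verification idea can be sketched in a few lines. This is a minimal illustration, not a production pattern: the `stub_llm` function is a hypothetical stand-in for real model calls, returning canned answers so the example is runnable, and the majority vote is one simple way to discard a dissenting, possibly hallucinated, response.

```python
from collections import Counter

# Hypothetical stand-in for a real LLM call; a production system would
# invoke a model API here. Canned answers keep the sketch runnable.
def stub_llm(agent_role: str, question: str) -> str:
    canned = {
        "analyst": "Paris",
        "checker_1": "Paris",
        "checker_2": "Lyon",  # a dissenting (hallucinated) answer
    }
    return canned[agent_role]

def cross_verified_answer(question: str, agents: list[str]) -> str:
    """Ask several specialized agents the same question and keep the
    majority answer, discarding outliers that may be hallucinations."""
    answers = [stub_llm(role, question) for role in agents]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes <= len(answers) // 2:
        raise ValueError("No majority; escalate to a human reviewer")
    return winner

print(cross_verified_answer("Capital of France?",
                            ["analyst", "checker_1", "checker_2"]))
# prints "Paris"
```

In practice the verifier agents would be prompted differently from the generator (for example, asked to critique rather than answer), but the escalation path when agents disagree is the key design point.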

In particular, LLM-MA systems use a divide-and-conquer method to gain more refined control over the components of complex AI-empowered systems: fine-tuning individual agents to specific data sets; selecting methods (including pre-transformer AI) that offer better explainability, governance, security and reliability; and using non-AI tools as part of the overall solution. Within this divide-and-conquer approach, agents perform actions and receive feedback from other agents and from data, enabling them to adapt their execution strategy over time.
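The divide-and-conquer loop described above can be sketched as a planner, worker and critic handing work back and forth. All three functions here are hypothetical stubs standing in for LLM-backed agents; the point is the control flow, in which a critic's feedback triggers a retry rather than letting a bad result through.

```python
# Minimal sketch of the divide-and-conquer feedback loop; the agent
# functions are hypothetical stand-ins for real LLM-backed agents.

def plan(task: str) -> list[str]:
    # A planner agent would decompose the task; hard-coded for the sketch.
    return [f"{task}: step {i}" for i in (1, 2, 3)]

def execute(subtask: str, attempt: int) -> str:
    # A worker agent; the first attempt at step 2 "fails" on purpose
    # so the retry path below is exercised.
    if "step 2" in subtask and attempt == 0:
        return "error"
    return f"done({subtask})"

def critique(result: str) -> bool:
    # A critic agent reviews the worker's output.
    return result != "error"

def run(task: str, max_retries: int = 2) -> list[str]:
    results = []
    for subtask in plan(task):
        for attempt in range(max_retries + 1):
            result = execute(subtask, attempt)
            if critique(result):      # feedback from another agent
                results.append(result)
                break                 # strategy adapted, move on
        else:
            raise RuntimeError(f"Gave up on {subtask}")
    return results
```

A real system would carry richer state (memory, intermediate artifacts) between agents, but the execute/critique/retry cycle is the essence of agents adapting their strategy from feedback.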

Opportunities and Use Cases of LLM-MA Systems

LLM-MA systems can effectively automate business processes by searching through structured and unstructured documents, generating code to query data models and performing other content generation. Companies can use LLM-MA systems for several use cases, including software development, hardware simulation, game development (specifically, world development), scientific and pharmaceutical discovery, capital management, financial trading and more.

One noteworthy application of LLM-MA systems is call/service center automation. In this example, a combination of models and other programmatic actors utilizing pre-defined workflows and procedures could automate end-user interactions and perform request triage via text, voice or video. Moreover, these systems could navigate the most optimal resolution path by leveraging procedural and SME knowledge with personalization data and invoking Retrieval Augmented Generation (RAG)-type and non-LLM agents.
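The triage step in this scenario amounts to routing each request to the right kind of agent. The sketch below is illustrative only: the keyword classifier and the handler names (`rag_agent`, `non_llm_agent`) are hypothetical stand-ins, with an LLM classifier and real RAG or workflow agents taking their place in production, and unrecognized requests falling through to a human.

```python
# Illustrative triage router for the call/service-center scenario.
# Classifier and handlers are hypothetical stubs, not a real framework.

def classify_intent(message: str) -> str:
    # A production system would use an LLM classifier here.
    msg = message.lower()
    if "refund" in msg:
        return "billing"
    if "password" in msg:
        return "account"
    return "other"

def route(message: str) -> str:
    handlers = {
        # Pre-defined workflow (non-LLM agent) for billing requests.
        "billing": lambda m: "non_llm_agent: open refund workflow",
        # RAG-type agent grounded in procedural/SME documentation.
        "account": lambda m: "rag_agent: answer from password-reset docs",
    }
    handler = handlers.get(classify_intent(message))
    if handler is None:
        # Human-in-the-loop fallback for anything unrecognized.
        return "human_in_the_loop: escalate to service desk"
    return handler(message)
```

The fallback branch matters most: it is the concrete form of the "humans in the loop" requirement discussed below.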

In the short term, these systems will not be fully automated: mistakes will happen, and humans will need to remain in the loop. AI is not yet ready to replicate human-like experiences, given the difficulty of testing free-flowing conversation against responsible AI requirements, for example. However, AI can train on thousands of historical support tickets and feedback loops to automate significant parts of call/service center operations, boosting efficiency, reducing ticket resolution times and increasing customer satisfaction.

Another powerful application of multi-agent LLMs is creating human-AI collaboration interfaces for real-time conversations, solving tasks that were not possible before. Conversational swarm intelligence (CSI), for example, is a method that enables thousands of people to hold real-time conversations. Specifically, CSI allows small groups to dialogue with one another while different groups of agents simultaneously summarize conversation threads. CSI then propagates content across the larger body of people, empowering human coordination at an unprecedented scale.

Security, Responsible AI and Other Challenges of LLM-MA Systems

Despite the exciting opportunities of LLM-MA systems, some challenges to this approach arise as the number of agents and the size of their action spaces increase. For example, businesses will need to address the issue of plain old hallucinations, which will require humans in the loop – a designated party must be responsible for agentic systems, especially those with potential critical impact, such as automated drug discovery.

There will also be problems with data bias, which can snowball into interaction bias. Likewise, future LLM-MA systems running hundreds of agents will require more complex architectures while accounting for other LLM shortcomings, data and machine learning operations.

Additionally, organizations must address security concerns and promote responsible AI (RAI) practices. More LLMs and agents increase the attack surface for all AI threats. Companies must decompose different parts of their LLM-MA systems into specialized actors to provide more control over traditional LLM risks, including security and RAI elements.

Moreover, as solutions become more complex, so must AI governance frameworks, to ensure that AI products are reliable (i.e., robust, accountable, monitored and explainable), resilient (i.e., safe, secure, private and effective) and responsible (i.e., fair, ethical, inclusive, sustainable and purposeful). Escalating complexity will also invite tighter regulation, making it even more paramount that security and RAI be part of every business case and solution design from the start, alongside continuous policy updates, corporate training and education, and TEVV (testing, evaluation, verification and validation) strategies.

Extracting the Full Value from an LLM-MA System: Data Considerations

For businesses to extract the full value from an LLM-MA system, they must recognize that LLMs, on their own, only possess general domain knowledge. However, LLMs can become value-generating AI products when they rely on enterprise domain knowledge, which usually consists of differentiated data assets, corporate documentation, SME knowledge and information retrieved from public data sources.

Businesses must shift from a data-centric posture, where data supports reporting, to an AI-centric one, where data sources combine to empower AI as an actor within the enterprise ecosystem. As such, companies’ ability to curate and manage high-quality data assets must extend to these new data types. Likewise, organizations need to modernize their data and insight consumption approach, change their operating model and introduce governance that unites data, AI and RAI.

From a tooling perspective, GenAI can provide additional help regarding data. In particular, GenAI tools can generate ontologies, create metadata, extract data signals, make sense of complex data schema, automate data migration and perform data conversion. GenAI can also be used to enhance data quality and act as governance specialists as well as co-pilots or semi-autonomous agents. Already, many organizations use GenAI to help democratize data, as seen in ‘talk-to-your-data’ capabilities.
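One of the tooling tasks mentioned above, metadata creation, can be sketched as follows. The `fake_model` function is a hypothetical placeholder for a real GenAI call and returns a canned JSON response; the table and column names are invented for illustration.

```python
import json

# Sketch of using a GenAI model to draft data-catalog metadata.
# `fake_model` is a hypothetical stand-in for a real LLM call.
def fake_model(prompt: str) -> str:
    # Canned response so the sketch runs without a model API.
    return json.dumps({
        "cust_id": "Unique customer identifier",
        "txn_amt": "Transaction amount in the account currency",
    })

def draft_column_metadata(table: str, columns: list[str]) -> dict:
    """Build a prompt asking the model to describe each column,
    then parse the JSON it returns for the data catalog."""
    prompt = (
        f"Describe each column of table '{table}' for a data catalog: "
        + ", ".join(columns)
    )
    return json.loads(fake_model(prompt))
```

Drafts like these would still pass through a data steward for review, which is where the co-pilot or semi-autonomous framing from the paragraph above applies.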

Continuous Adoption in the Age of Rapid Change

An LLM does not add value or achieve positive ROI by itself; it does so as part of business outcome-focused applications. The challenge is that unlike in the past, when technological capabilities were relatively stable and well understood, new LLM capabilities now emerge weekly and sometimes daily, supporting new business opportunities. On top of this rapid change is an ever-evolving regulatory and compliance landscape, making the ability to adapt fast crucial for success.

The flexibility required to take advantage of these new opportunities necessitates that businesses undergo a mindset shift from silos to collaboration, promoting the highest level of adaptability across technology, processes and people while implementing robust data management and responsible innovation. Ultimately, the companies that embrace these new paradigms will lead the next wave of digital transformation.
