
Oracle’s AI Agent Studio is out

(Oracle’s Steve Miranda talking AI agents)

Yesterday, at Oracle CloudWorld London, Oracle announced its new AI Agent Studio. My London-based colleague Phil Wainewright was on the case: CloudWorld London – Oracle debuts its AI Agent Studio and pledges $5bn for the UK.  How does this news advance Oracle’s AI narrative? Wainewright: 

Until today, there had been no word of tooling to allow customers to build and customize their own agents. That changes as the CloudWorld caravan rolls into London, where Oracle is taking the wraps off its Agent Studio this morning.

This is the same type of Oracle agent tech, just exposed externally to customers (and partners). Wainewright adds:

The new AI Agent Studio, available at no extra cost to Oracle Fusion Apps customers, allows users to create new agents or modify pre-built agents, test them and then deploy and manage them across the enterprise. It’s based on the tooling already used by Oracle product teams to create the [50+] pre-built agents announced so far.

Why does this announcement matter? Steve Miranda’s take

Another notable detail: via the AI Agent Studio, Oracle’s customers (and partners) can not only build their own agents, they can do it with a choice of external LLMs (check Wainewright’s piece for more announcement info). This matters for AI agent governance as well as creation. As readers know, I have pretty strong views about what agents are good for, and what they are not – and how vendors should approach this with customers.

So how do those views square with this news? Time to put Steve Miranda, EVP of Oracle Applications Development, in the hot seat once again. Miranda told me this announcement marks another phase in Oracle's AI timeline:

A year and a half ago, at the 2023 Vegas CloudWorld, we announced our first set of generative AI-based features – and we announced 50 use cases that were coming. We subsequently delivered 100 and counting. Essentially every place within our applications where you generate text or could generate text, we have an ‘AI Assist’ button – everything from creating job posts to creating item descriptions, from draft emails to prospects within CX, to narrative reporting on financials. 

So that’s a handful of the 100+ use cases that are out there, and that will continue every time we have any kind of text-to-creation… If you go back to last year at CloudWorld, roughly six months ago, we announced that we’d be building out the next step, which is agents. So 50+ agents, going from a tactical generative AI use case to a step in a business process. 

So we demoed AI agents around benefits. We have AI agents around supply chain planning. We demoed the AI agent around Document IO for automation of finance and payables, a general ledger agent for inquiry and drill down and audit – a host of these agents. And then, on stage in London, Chris Leone announced the next step in that evolution, if you will, which is the Oracle AI agent studio.

With Oracle’s AI Agent Studio, you can build teams of smaller, discrete agents, adding human steps where needed – either as a precaution, or a more permanent workflow feature. Miranda explains: 

I think the term that's popular today is that you create teams of agents. Basically, these are the discrete agents, and you're allowed to put them into connected steps, either via workflow or with the step that's now called 'human in the middle,' where an agent does step one; a human does step two; an agent does step three. In addition to that, we have extensibility. 
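The "team of agents" pattern Miranda describes can be sketched in a few lines. This is a hypothetical illustration, not Oracle's API: the `Workflow` class, step functions, and field names are all invented for the example. The point is the shape of the pattern: discrete agents chained in sequence, with a human review step that can halt the run.

```python
# Hypothetical sketch of a team of discrete agents with a
# human-in-the-middle step. All names are illustrative, not Oracle APIs.
from dataclasses import dataclass, field
from typing import Callable

Step = Callable[[dict], dict]

@dataclass
class Workflow:
    steps: list = field(default_factory=list)

    def agent(self, fn: Step) -> "Workflow":
        self.steps.append(("agent", fn))
        return self

    def human(self, fn: Step) -> "Workflow":
        # A human-in-the-middle step: execution pauses for review/approval.
        self.steps.append(("human", fn))
        return self

    def run(self, context: dict) -> dict:
        for kind, fn in self.steps:
            context = fn(context)
            if not context.get("approved", True):
                break  # a reviewer rejected the intermediate result
        return context

# Step one: an agent drafts a job posting.
def draft_posting(ctx):
    ctx["posting"] = f"Hiring: {ctx['role']}"
    return ctx

# Step two: a human reviews the draft (simulated as auto-approval here).
def review(ctx):
    ctx["approved"] = len(ctx["posting"]) > 0
    return ctx

# Step three: an agent publishes only approved postings.
def publish(ctx):
    ctx["published"] = True
    return ctx

flow = Workflow().agent(draft_posting).human(review).agent(publish)
result = flow.run({"role": "Data Analyst"})
```

In a real deployment the human step would pause for an actual approval rather than auto-approve, but the control-flow shape is the same.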

That’s where customer choice in LLMs factors in:

This comes in two forms: the ability for our customers or partners to take an agent that we've delivered, and that we will continue to deliver, and modify it, meaning add prompts, or choose a different Large Language Model. There are sometimes settings within a Large Language Model for how aggressive or non-aggressive you want it to be. Or you can build a completely new agent.

For example?

Let’s take the case of a recruiting process. You can break down a recruiting process into several steps: source candidates, schedule interviews, schedule follow ups, negotiate an offer, do a background check, give an offer. Let’s suppose that background check is something outside of the Oracle system. In fact, it probably is. Now, our customers, or our partners, on behalf of our customers, can build an agent that connects to a third party system, does a background check, and comes back into the AI Agent Studio to orchestrate a workflow. 
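Miranda's recruiting example hinges on one custom step: an agent that calls out to a non-Oracle system and feeds its result back into the orchestrated workflow. Here is a minimal sketch of that shape; the service call is stubbed, and every function and field name is an assumption made up for the illustration, not a real Oracle or vendor API.

```python
# Hypothetical sketch: a custom agent wrapping an external background-check
# service, feeding its result back to downstream workflow steps.

def third_party_background_check(candidate: dict) -> dict:
    # Stand-in for an HTTP call to an external screening provider.
    flagged = candidate["name"] in {"Known Bad Actor"}
    return {"candidate": candidate["name"], "clear": not flagged}

def background_check_agent(ctx: dict) -> dict:
    # The custom agent normalizes the external result so later steps
    # (e.g. extending an offer) can act on it.
    report = third_party_background_check(ctx["candidate"])
    ctx["background_clear"] = report["clear"]
    return ctx

def offer_agent(ctx: dict) -> dict:
    # Only extend an offer when the check came back clear.
    ctx["offer_sent"] = bool(ctx.get("background_clear"))
    return ctx

ctx = {"candidate": {"name": "Alex Example"}}
for step in (background_check_agent, offer_agent):
    ctx = step(ctx)
```

The design point is that the external call is isolated in one small agent, so the rest of the workflow never needs to know which screening vendor sits behind it.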

Differentiating with AI agents – Oracle makes its case

Progress? Definitely. But how is Oracle’s approach to AI agents differentiated? Miranda makes the case.

First, rewinding to the basic message you're hearing. Lots of our competitors nowadays talk about the need for centralized information in order to run AI. You'll hear terms like data lakes, and a lot of different offerings to bring data together. Again, we've been quite proud of the fact that we're the only SaaS application that really gives you end-to-end insight into your data. We've always had it for business reporting. Now you have it for business process automation and, better still, AI, because all of your data is, in fact, in one place. 

Miranda says training the leading models on OCI infrastructure shouldn't be underestimated:

Also notable: we're the only application vendor that's also in the technology business. As the application vendor sitting on top of that, we have direct access – so we call and test against Llama, against ChatGPT, against Cohere, and then, depending on which Large Language Model is best at the time for the use case, we choose that. 

Miranda cites another differentiator for agent building: a proven/compliant stack.

This isn't agents (and Agent Studio) built on top of a generic layer. It's natively integrated with the Fusion Applications. So Agent Studio incorporates the agents that we build. It's native to the security model, the UI model, everything to help you best orchestrate your business processes. 

My take – advice to customers on getting AI agents right

Miranda says he has never seen an announcement energize Oracle's partner community as much as this one. If he is right, that should be an asset to customers, because – let's face it – customers have a big learning curve here. Even if agents are easy to build, deploying them brings a raft of new considerations, from risk management to agentic evaluation to outcome metrics. So, Mr. Miranda, how should your customers get started?

When I meet with customers today, what I tell them is: start using the 100 use cases that we've already given you now. Are those, like, tremendous business payback? I'm not saying that, but it allows them to get used to what the AI is like. It allows them to go through all of their internal questioning on: what's your AI security posture, what's your AI privacy posture, what's your AI ethics posture?

Pre-built tools push users into hands-on mode:

This allows the end users to see, 'Here's my use case' – and get used to that. 'Well, wait a minute, if I press the button again, even though it's the same thing, it's going to generate slightly different text for me, because it's a probabilistic model.' Getting their heads around that certainly helps that transition process. Frankly, the way we built it is improving those use cases, improving those agent use cases. Now we'll improve the end-to-end workflow. 

Fellow diginomica contributor Brian Sommer and I have an ongoing skirmish on practical versus imaginative AI. Sommer is looking for the truly compelling AI scenarios, well beyond out-of-the-box starters like job description generation. Sommer isn’t wrong, but with AI, customers do need to get used to a different approach to app building and use case design – not to mention risk management. Where does Miranda stand? 

I hear from customers: 'Hey, when am I going to have my close-your-books agent?' Let's say that's what we want to build. Let's say that's what we want to announce. How are we going to do that? Well, there's 1,000 steps in closing your books. You're paying invoices; you're matching the POs; you're creating journals; you're doing allocations; you're doing revaluations, currency adjustments. We're going to build that by building 1,000 discrete steps of these different agents, and then hooking them together. That's how it's going to come about.
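The decomposition Miranda sketches, a big "close the books" agent built from many small hooked-together steps, also makes each run auditable, since every discrete step can log what it did. A minimal sketch, with invented step names standing in for a handful of the 1,000:

```python
# Hypothetical sketch: "close the books" as many small, discrete steps
# hooked together, each leaving an audit trail entry. Names are invented.

def pay_invoices(ledger):
    ledger["invoices_paid"] = True
    return ledger

def match_pos(ledger):
    ledger["pos_matched"] = True  # match purchase orders to invoices
    return ledger

def create_journals(ledger):
    ledger["journals"] = ["J1"]
    return ledger

def run_close(ledger, steps):
    audit = []
    for step in steps:
        ledger = step(ledger)
        audit.append(step.__name__)  # record each step for later drill-down
    ledger["audit_trail"] = audit
    return ledger

closed = run_close({}, [pay_invoices, match_pos, create_journals])
```

Scaling this to 1,000 real steps is the hard part, but the composition itself stays this simple: a list of small agents and a loop.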

Oracle's inclusive AI pricing and 'OCI-powered' aspects differ from most enterprise software vendors' approaches. I also don't hear the typical over-emphasis on "autonomous agents." For now, building human supervision (and human steps) into agentic workflows is an important option. That said, customers should take risk profiles into account when getting started with agents. 

Recruitment is a good example of where the consequences of over-automation can be concerning, if not liability-inducing. Granted, many companies have already taken the plunge with algorithmic screening tools, perhaps beyond where they should, so in that sense agents don't necessarily introduce a new risk factor. Still, that's no excuse for moving too fast in an area with higher consequences. On the other hand, getting agents involved in sales and service support/interactions is an example of a lower-risk area where early lessons can be derived.

Oracle’s emphasis on smaller, specialized agent roles is smart for use case design and accuracy. However, even those specialized agents are not always going to do what the user expects. That’s life with probabilistic technologies, not to mention users getting a comfort level with prompts and agentic workflows. 

This is where the use case selection and design comes in, as well as building in verification and auditing steps (LLMs as auditors are one aspect of this). Oracle has a chance to detail more on how they help customers measure and improve LLM and agent accuracy. It’s a topic I will return to with just about every vendor, so Oracle hasn’t heard the last of my questions on that. 

Miranda quipped about how we've moved on from "hallucinations" to "probabilistic." I'm not sure if that's true in general, but it's certainly true for me. Hallucinations conjure up images of ChatGPT issuing a viral, outrageous response. Enterprise vendors can largely control that type of absurd output through narrower models, specialist agents, etc. Whereas "probabilistic" is a characteristic you can mitigate, but not eliminate. Ultimately, you line up the right use cases, and avoid ones where the tech doesn't fit or the data quality is low. LLM agents work quite well when fused with more deterministic workflows, the ones I'm constantly told are legacy forms of automation. That's a big reason why the supposed death of SaaS is on the wrong track, but that's an argument for another time.

Meanwhile, customers need advisory on the best business outcome metrics. Miranda shared several conversations with customers on how they arrived at their metrics, from financial services to HR/talent/recruitment. There isn’t one right way to go about this; the point is to have these in place before you roll out. Miranda pared it down: 

Is AI moving the ball forward on the business metric that you care about? 

He says to start there; you’ll get no argument from me on that.

