Agentic AI Needs Fewer, Not More, Guardrails

We typically think of agentic AI in the form of purpose-built agents designed to perform specific tasks in precisely the ways we ask them to. But that framing limits the transformative potential of agentic AI. When we unbind AI agents, we allow them to discover new ways of doing things that we may not have conceived of. I’ll explain.

Aptly named, agents have agency. But they also have memory, creativity and the ability to reason without emotion. If you feed an agent information from multiple sources and ask it for something from a company’s 10-K filing, it knows to pull that information from SEC databases, not from news reporting about the filing. It knows where to look, where not to look, and how to find what it needs. Ask it for the same information about another company and it will recall the swift, efficient process it used the first time and execute that process again.
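To make that pattern concrete, here is a minimal sketch – not Finch AI’s implementation. It resolves a 10-K request against SEC EDGAR’s public submissions endpoint (a real API; the plan-memory scheme and the class and method names are illustrative) and remembers the sequence of steps that worked, so the next request replays it:

```python
import requests

# EDGAR's public submissions endpoint is real; everything else in this
# sketch is illustrative, not Finch AI's actual architecture.
SEC_SUBMISSIONS = "https://data.sec.gov/submissions/CIK{cik:0>10}.json"
HEADERS = {"User-Agent": "research-demo contact@example.com"}  # SEC asks for a contact UA

class FilingAgent:
    """Toy agent that remembers the retrieval plan that worked last time."""

    def __init__(self):
        self.plan_memory: dict[str, list] = {}  # task type -> ordered steps

    def latest_10k(self, cik: str) -> dict:
        # Reuse the remembered plan for this task type, or fall back to a default.
        plan = self.plan_memory.get("10-K", [self._fetch_submissions,
                                             self._pick_latest_10k])
        state = {"cik": cik}
        for step in plan:
            state = step(state)
        self.plan_memory["10-K"] = plan  # the plan succeeded: keep it
        return state

    def _fetch_submissions(self, state: dict) -> dict:
        # Go to the primary source (EDGAR), not news coverage of the filing.
        resp = requests.get(SEC_SUBMISSIONS.format(cik=state["cik"]),
                            headers=HEADERS, timeout=30)
        resp.raise_for_status()
        state["recent"] = resp.json()["filings"]["recent"]
        return state

    def _pick_latest_10k(self, state: dict) -> dict:
        # EDGAR returns parallel arrays of recent filings, newest first.
        recent = state["recent"]
        for form, accession, date in zip(recent["form"],
                                         recent["accessionNumber"],
                                         recent["filingDate"]):
            if form == "10-K":
                state.update(accession=accession, filed=date)
                return state
        raise LookupError("no 10-K filed for CIK " + state["cik"])

# agent = FilingAgent()
# print(agent.latest_10k("320193"))  # Apple's CIK; a second call replays the plan
```

The point of the design is the write-back in latest_10k: once a route to the answer succeeds, it becomes reusable memory rather than a one-off.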

In this way, agents may be the very best designers of new agents. Agents making agents sounds like the stuff of science fiction novels, but it’s not a bad way to realize the full potential of agentic AI. Agents can surface things that we don’t see. An entire ecosystem of agents, each doing a specific thing well, could be remarkably powerful.

At Finch AI, we use agents to dynamically update our knowledgebase. Our technology recognizes entities in text and, when possible, automatically disambiguates them. We do this based on a proprietary approach to topic modeling, entity understanding and other analytics.

But when we cannot disambiguate an entity – when we recognize it as a discrete thing but nothing more – we task AI agents with finding information about the entity, collecting all of its attributes and metadata, and creating a new knowledgebase entry.

As an example, “street gang” was not an entity type or data attribute in our knowledgebase – at least, not until the term began appearing in the news with increasing frequency. Our extraction methods recognized the street gang “Tango Blast” as a thing, but that was it.

When something like this occurs, we task agents with gathering information and context about the newly discovered entity type and then ask them to create the corresponding knowledgebase entries and add them to our knowledgebase. This agent-driven process ensures that emerging entities of interest are captured, along with the information that describes them.
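Here is a minimal sketch of that fallback flow, under stated assumptions: the knowledgebase is a plain dictionary, and search_open_sources is a hypothetical tool stub standing in for whatever retrieval and LLM tooling the enrichment agent actually uses. None of this is Finch AI’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class KBEntry:
    name: str
    entity_type: str
    attributes: dict = field(default_factory=dict)
    sources: list = field(default_factory=list)

knowledgebase: dict[str, KBEntry] = {}  # name -> entry (illustrative store)

def search_open_sources(name: str) -> list[dict]:
    """Hypothetical tool stub; a real agent would fan out to search/LLM tools.
    Canned response here so the sketch runs end to end."""
    return [{"key": "entity_type", "value": "street gang", "source": "news"},
            {"key": "region", "value": "Texas, USA", "source": "open web"}]

def resolve_entity(name: str) -> KBEntry:
    # Normal path: the mention disambiguates to an existing entry.
    if name in knowledgebase:
        return knowledgebase[name]
    # Fallback path: we only know it is *a thing*, so dispatch an agent
    # to gather attributes and metadata and create a new entry.
    return enrichment_agent(name)

def enrichment_agent(name: str) -> KBEntry:
    entry = KBEntry(name=name, entity_type="unknown")
    for fact in search_open_sources(name):
        entry.attributes[fact["key"]] = fact["value"]
        entry.sources.append(fact["source"])
    # Promote a discovered type, e.g. "street gang", from attribute to schema.
    entry.entity_type = entry.attributes.pop("entity_type", "unknown")
    knowledgebase[name] = entry  # the knowledgebase grows itself
    return entry

print(resolve_entity("Tango Blast"))  # first call enriches; later calls hit the KB
```

The design choice worth noting is that the miss path writes back into the store, so the next mention of the same entity resolves without dispatching the agent at all.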

As another example, today’s watchlists and alerts are often simple queries run repeatedly against a single dataset. An agentic AI approach would instead search the web, common knowledge bases, graphs, Google Maps and more, constantly looking for new information. In effect, it would build an agentic knowledge asset that populates and retrieves information, persist a query or a prompt against it, and alert you when something changes. The agent could handle long, complex, multifaceted queries. Together, agents and prompts can allow us to do things we never thought of doing.
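A minimal sketch of that persist-and-alert loop, assuming a hypothetical collect tool in place of real web, knowledge-base and maps integrations:

```python
import hashlib
import time

def collect(query: str, sources: list[str]) -> str:
    """Hypothetical tool: return the current best answer to `query`,
    aggregated across `sources` (web, knowledge bases, graphs, maps...)."""
    return ""  # stub; wire in real retrieval here

def watch(query: str, sources: list[str], interval_s: int = 3600) -> None:
    # Persist the prompt, re-run it on a schedule, alert only on change.
    last_digest = None
    while True:
        snapshot = collect(query, sources)
        digest = hashlib.sha256(snapshot.encode()).hexdigest()
        if last_digest is not None and digest != last_digest:
            print(f"ALERT: new information for {query!r}")  # or email/webhook
        last_digest = digest
        time.sleep(interval_s)

# watch("new facilities linked to <entity> near the port", ["web", "kb", "maps"])
```

Hashing a snapshot of the agent’s answer is just one cheap way to detect “something changed”; the persisted prompt itself can be as long and multifaceted as the agent can handle.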

There is a danger in unbounded agentic AI, of course. But there is also a danger in getting too prescriptive in what we tell our agents to do: it stifles their ability to transform tasks, because we cannot prescribe solutions we haven’t yet imagined – we don’t know what we don’t know. We should adopt an exploratory mindset and allow agents to do what they do best – because that’s where true discovery happens.

And that’s exactly what we’re doing at Finch AI. Get in touch with us to learn more: sales@finchai.com.