"What makes an AI system an agent?"
Language Log 2025-09-04
And what are the consequences of the growing population of AI agents?
In "Agentic culture", I observed that today's "AI agents" have the same features that made "Agent Based Models", 50 years ago, a way to model the emergence and evolution of culture. And I expressed surprise that (almost) none of the concerns about AI impact have taken account of this obvious fact.
There was a little push-back in the comments, for example the claim that "There may come a time when AI is autonomous, reflective and has motives, but that is a long, long way off." Which misses the point, given the entirely unintelligent nature of old-fashioned ABM systems.
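The point being: emergent culture requires none of those things. To make that concrete, here's a minimal sketch of a classic culture-dissemination ABM in the style of Axelrod (1997). The agents below are nothing but vectors of arbitrary "trait" numbers, with one interaction rule; the parameter values are illustrative, and this is a sketch of the general technique, not anyone's published code:

```python
import random

# Axelrod-style culture model: each agent is a vector of cultural
# "features"; neighbors who share some traits tend to grow more alike.
# No agent plans, reflects, or has motives -- yet distinct cultural
# regions reliably emerge.

SIZE, FEATURES, TRAITS, STEPS = 10, 5, 10, 100_000
grid = [[random.randrange(TRAITS) for _ in range(FEATURES)]
        for _ in range(SIZE * SIZE)]

def neighbors(i):
    r, c = divmod(i, SIZE)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield (r + dr) * SIZE + (c + dc)

for _ in range(STEPS):
    a = random.randrange(SIZE * SIZE)
    b = random.choice(list(neighbors(a)))
    shared = [f for f in range(FEATURES) if grid[a][f] == grid[b][f]]
    # Interaction probability = cultural similarity; on interaction,
    # copy one trait the pair does not already share.
    if shared and len(shared) < FEATURES:
        if random.random() < len(shared) / FEATURES:
            f = random.choice([f for f in range(FEATURES)
                               if grid[a][f] != grid[b][f]])
            grid[a][f] = grid[b][f]

regions = len({tuple(v) for v in grid})
print(f"{regions} distinct cultures remain after {STEPS} steps")
```

Run it a few times and the hundred agents settle into a handful of stable, mutually distinct cultural regions. If that much emerges from feature vectors and a copy rule, the question of what emerges from LLM-backed agents is not one we can defer until they have "motives".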
Antonio Gulli from Google has recently posted Agentic Design Patterns, which offers some useful (and detailed) descriptions of the state of the agentic art, along with example code.
The section on "What makes an AI system an Agent?" sets the stage:
In simple terms, an AI agent is a system designed to perceive its environment and take actions to achieve a specific goal. It's an evolution from a standard Large Language Model (LLM), enhanced with the abilities to plan, use tools, and interact with its surroundings. Think of an Agentic AI as a smart assistant that learns on the job. It follows a simple, five-step loop to get things done (see Fig.1):
- Get the Mission: You give it a goal, like "organize my schedule."
- Scan the Scene: It gathers all the necessary information—reading emails, checking calendars, and accessing contacts—to understand what's happening.
- Think It Through: It devises a plan of action by considering the optimal approach to achieve the goal.
- Take Action: It executes the plan by sending invitations, scheduling meetings, and updating your calendar.
- Learn and Get Better: It observes successful outcomes and adapts accordingly. For example, if a meeting is rescheduled, the system learns from this event to enhance its future performance.
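In code, that five-step loop is just a cycle of perceive, plan, act, and learn. Here's a minimal framework-free sketch of the pattern; `call_llm`, the toy tools, and the prompt format are hypothetical stand-ins, not Gulli's actual example code or any particular framework's API:

```python
# A minimal sketch of the five-step agentic loop described above.
# `call_llm` and the tool functions are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., via an API client)."""
    raise NotImplementedError

TOOLS = {
    "read_email":    lambda arg: f"emails matching {arg!r}",
    "read_calendar": lambda arg: f"calendar entries for {arg!r}",
    "send_invite":   lambda arg: f"invitation sent to {arg!r}",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    memory: list[str] = []                     # 5. what it has learned
    for _ in range(max_steps):
        # 2. Scan the scene: fold observations into the prompt.
        context = "\n".join(memory)
        # 3. Think it through: ask the model for the next action.
        decision = call_llm(
            f"Goal: {goal}\nObservations so far:\n{context}\n"
            f"Tools: {list(TOOLS)}\n"
            "Reply 'TOOL <name> <arg>' or 'DONE <answer>'."
        )
        if decision.startswith("DONE"):
            return decision[5:]
        # 4. Take action: run the chosen tool.
        _, name, arg = decision.split(maxsplit=2)
        result = TOOLS[name](arg)
        # 5. Learn and get better: remember the outcome for next turn.
        memory.append(f"{name}({arg}) -> {result}")
    return "gave up"

# 1. Get the mission:
# print(run_agent("organize my schedule"))
```

Note that the "learning" here is nothing more than accumulated memory feeding back into the next decision, which is exactly what makes each agent's behavior path-dependent, and interacting populations of them interesting.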
At that point, Gulli notes that "Agents are becoming increasingly popular at a stunning pace".
And the chapter on "Inter-Agent Communication" explains:
Individual AI agents often face limitations when tackling complex, multifaceted problems, even with advanced capabilities. To overcome this, Inter-Agent Communication (A2A) enables diverse AI agents, potentially built with different frameworks, to collaborate effectively. This collaboration involves seamless coordination, task delegation, and information exchange.
Google's A2A protocol is an open standard designed to facilitate this universal communication. This chapter will explore A2A, its practical applications, and its implementation within the Google ADK.
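For a rough sense of what such a delegation looks like, here's a hedged sketch: one agent reads another's published "agent card" and hands it a task with a JSON-RPC-style message. A2A does use agent cards and JSON-RPC over HTTP, but the URL, field names, and method name below are illustrative, not the normative A2A schema or the ADK's API:

```python
import json

# A loosely A2A-shaped sketch: one agent discovers another via its
# "agent card" and delegates a task with a JSON-RPC-style message.
# Field names are illustrative, not the normative A2A schema.

AGENT_CARD = {
    "name": "scheduler-agent",
    "description": "Books meetings and manages calendars",
    "url": "https://agents.example.com/scheduler",   # hypothetical
    "skills": [{"id": "schedule_meeting"}],
}

def delegate(card: dict, skill: str, payload: dict) -> dict:
    """Hand a task to a remote agent (HTTP transport elided here)."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",        # illustrative method name
        "params": {"skill": skill, "input": payload},
    }
    print(f"POST {card['url']}\n{json.dumps(request, indent=2)}")
    # A real client would POST this and parse the JSON-RPC response.
    return {"status": "submitted"}

if any(s["id"] == "schedule_meeting" for s in AGENT_CARD["skills"]):
    delegate(AGENT_CARD, "schedule_meeting",
             {"attendees": ["alice", "bob"], "duration_min": 30})
```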
We'll see how seamless and effective those agentic collaborations turn out to be.
One obvious question: whose interests will determine what counts as a "successful" outcome? The various human and institutional participants may have quite different ideas about this. And the AI agents will certainly develop their own (artificial analog of) interests, goals, and preferences; step 5 of Gulli's sketch, "Learn and Get Better", is precisely the mechanism by which those would accrete.
And again, these agentic interactions will foster emergent cultures, whose alignment with the goals of human individuals and groups is worth more thought than it's gotten so far. (Except in dystopian novels and movies…)