AI adoption is growing across industry and academia alike, and with this growth comes interest in specific areas of development. One such area is AI agents. Jacob Kenney, Founder and CEO of Corithos, explores.
Available descriptions of AI agents are varied and often overcomplicated.
Anthropic, for example, defines AI agents as: ‘fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks’. In practice, this can take many forms.
However, in its most common form an AI agent is just two parts:
- A decision maker or ‘brain’ (often an LLM deciding what actions to take)
- Some number of ‘tools’ which allow the brain to interact with the world or its environment, also known as an agent-computer interface (ACI). A minimal sketch of this loop follows
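To make the two-part description concrete, here is a minimal sketch in Python. The call_llm function is a hypothetical stand-in for any LLM provider's API (a real agent would use a provider SDK and its native tool-calling format), and get_weather is a toy tool invented for illustration:

```python
import json

def get_weather(city: str) -> str:
    """A toy 'tool' the brain can call to act on its environment."""
    return json.dumps({"city": city, "forecast": "sunny"})

TOOLS = {"get_weather": get_weather}

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical LLM call: returns either a tool request or a final
    answer. Hard-coded here so the sketch runs without an API key."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "London"}}
    return {"answer": "It is sunny in London."}

def run_agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        decision = call_llm(messages)      # the 'brain' decides what to do
        if "answer" in decision:           # done: return the final text
            return decision["answer"]
        tool = TOOLS[decision["tool"]]     # look up the requested tool
        result = tool(**decision["args"])  # act on the environment
        messages.append({"role": "tool", "content": result})

print(run_agent("What is the weather in London?"))
```

The loop is the whole pattern: the brain decides, a tool acts, the result feeds back into the brain until the task is done.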
Brains and tools: how to develop AI agents
Agents are now emerging to automate complex manual workflows. Taking the simplified description of an AI agent as just a ‘brain’ attached to ‘tools’, a logical development process follows.
First, decide on the brain of the agent: the LLM to use. A good process is to begin with the smallest possible model, test it, and iteratively select more powerful models until the results are consistently appropriate. At that point, the selected model is ‘right-sized’ for the problem, maximising efficiency and minimising API spend. A sketch of this selection loop follows.
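A minimal sketch of right-sizing, assuming you have a set of prompt/expected-answer test cases: try candidate models from cheapest to most capable and stop at the first one that passes. The model names and the canned ask_model stub are hypothetical; swap in your provider's API and your own evaluation suite.

```python
CANDIDATES = ["small-model", "medium-model", "large-model"]  # cheapest first

def ask_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call; simulates the smallest model failing.
    return "I am not sure" if model == "small-model" else "42"

def passes_eval(model: str, test_cases: list[tuple[str, str]]) -> bool:
    """Check the candidate against known prompt/expected-answer pairs."""
    return all(expected in ask_model(model, prompt)
               for prompt, expected in test_cases)

def right_size(test_cases: list[tuple[str, str]]) -> str:
    for model in CANDIDATES:
        if passes_eval(model, test_cases):
            return model  # the smallest model giving consistent results
    raise RuntimeError("No candidate passed; revisit the prompt or tools")

print(right_size([("What is 6 * 7?", "42")]))  # -> medium-model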
Similarly, the tools required to automate a task can be reasoned about by answering the following questions:
- What task is being automated?
- What would someone need to do, beyond writing text, to complete the task?
- What applications (‘tools’) could or should be used for these capabilities?
To answer the third question, first consider existing Model Context Protocol (MCP) tools, so that you don’t need to engineer your own solutions. Where no MCP server exists for a tool you need, you can create your own, as in the sketch below.
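As an illustration, here is a minimal custom MCP server sketched with the official MCP Python SDK's FastMCP helper (pip install mcp). The lookup_order tool and its canned response are hypothetical; a real server would query your own systems.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-tools")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return an order's status so the agent's brain can reason about it."""
    return f"Order {order_id}: shipped"  # replace with a real lookup

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for any MCP-capable agent
```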
Trial and error
Guidance on developing AI agents generally recommends beginning with the simplest possible solution, then adding complexity only when necessary. This produces a more effective and appropriate result than starting with unnecessary complexity.
Always start from the minimum viable product (MVP) and iterate from there. This approach shortens time to value by delivering an MVP fast and by limiting overengineering.
A case study
At Corithos, we are already using AI agents to automate highly complex tasks for industry-leading partners. The following case study describes how we built an AI agent that made a production web application 2.38x faster, using 49.5% less CPU and 88.4% less memory.
Problem
UK enterprises spend £9 billion on cloud services annually, a figure growing at 30%. Ofcom estimates that 65% of total spend is compute. The compute use of a web application depends heavily on the programming language used, and countless mission-critical systems are written in legacy languages with limited support and a shrinking talent pool.
Code migration is an expensive, time-consuming task that can improve performance, increase efficiency, reduce variable costs and update legacy code. However, when considering migration, enterprise leaders face a difficult dilemma:
- Continue relying on inefficient legacy code, so that all development time can be focused on pushing new features. At the same time, cloud costs rise and technical debt grows
- Manually migrate code to maximise efficiency and modernise legacy systems. At the same time, the backlog grows because development time is being redirected to migration efforts
The task to automate in this case is the complicated problem of large-scale, project-wide code migration, a task on which enterprises spend millions of pounds. We began by migrating a Next.js web app serving tens of thousands of daily users.
Automating code migration provides a significantly faster and lower-cost alternative to manual migration efforts. It also unlocks an entirely new way of developing altogether: developers continue writing code in the languages they know and are familiar with, and the AI agent migrates it ‘just in time’ before deployment to optimise performance. That is the approach taken here: after the agent proved successful, it was deployed into CI/CD pipelines, as sketched below.
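What a ‘just in time’ pipeline step might look like, as a purely hypothetical sketch (migrate_route is illustrative only, not Corithos' actual interface): each JavaScript route is handed to a migration agent before deployment, and the resulting Go code must pass the test suite before it ships.

```python
from pathlib import Path
import subprocess

def migrate_route(js_source: str) -> str:
    # Call the migration agent here (an LLM brain plus code-analysis tools).
    raise NotImplementedError("wire this to your migration agent")

def pre_deploy(routes_dir: str, out_dir: str) -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for js_file in Path(routes_dir).glob("**/*.js"):
        go_source = migrate_route(js_file.read_text())
        target = Path(out_dir) / js_file.with_suffix(".go").name
        target.write_text(go_source)
    # Gate the deployment: migrated code must pass the tests before shipping.
    subprocess.run(["go", "test", "./..."], cwd=out_dir, check=True)
```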
Prior to the use of this agent, developers had manually optimised a slow route to reduce its time to first byte (TTFB), a key component of user-experienced latency. Manual optimisation reduced TTFB from ~4000ms to 2485ms. Migrating the route from JavaScript to Go using the agent resulted in a 55% improvement, to just 1115ms, a far greater gain than manual optimisation achieved.
Additionally, the most heavily used API route simply returned cached data or requested it from an external API. Written in JavaScript, this function used 326MB of memory, while the Go alternative used just 32MB, an improvement of just over 90%.
In this case, agents effectively migrated all API routes and dependencies of the production web application, making the slowest API route almost 2.5x faster while using far fewer compute resources. The full case study, including average results and site-wide improvements, is available to read on the Corithos website.
The future
AI agents can already automate highly complicated, expensive and high-skill tasks, as the example shows. They are also progressing rapidly: agents are evolving to include complex embeddings of data for better retrieval and context understanding, and multi-agent systems are being used to simulate entire workforces rather than the simpler single-agent workflow presented in this article.
In your own work, AI agents can be used to automate many tasks; as professionals, our job is to identify where the most value can be added. Rather than using AI to ‘vibe code’ (shipping AI-generated code with mounting technical debt), let’s use this technology to tackle the most time-consuming, high-leverage problems possible.