Agni means fire in Sanskrit. We chose this name because it is 🔥. It powers Helios, the Sun 🔆.
Dive deep into the technical details of 🔥Agni, our custom agent framework written in TypeScript. This is the first in a series of posts on the engineering behind Helios.
Why did we build another agent framework?
Our ambitions for Helios are huge. Our vision encompasses many different surfaces, modalities, workflows, and user experiences, and most existing frameworks are not well suited for it. Some promising frameworks like crew.ai exist, but they are Python-centric. More generally, the incentive of a commercially developed framework is to increase lock-in. We want to avoid that, because we posit that the agent frameworks of the future will look nothing like the ones that exist today, and we want to keep ours lightweight and adaptable to the needs of the product.
We leverage the Vercel AI SDK as a very thin abstraction over the underlying LLM providers.
We believe this is a good long-term bet.
Architecture
🔥Agni has high-level abstractions for flows, agents, tasks, and tools. It is a graph-based framework that makes it easy to express complex workflows as well as individual agent invocations.
It is written in TypeScript with a focus on developer experience and flexibility. It is ~1000 lines of code.
Here are some design considerations:
- It is designed to be a lightweight framework that can be easily extended.
- It is graph-based: complex workflows and individual agent invocations are expressed the same way.
- App logic stays external to the framework: triggering, scheduling, and similar concerns live outside it.
- Tracing and logging come out of the box.
- We leverage the Vercel AI SDK as a thin wrapper around the underlying LLM providers. The AI SDK is a fantastic library that provides just the right level of abstraction without overreaching.
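To make the "thin wrapper" point concrete, here is a hypothetical sketch (the names are ours, not Agni's or the AI SDK's actual API): the framework codes against a narrow model interface, and a single adapter delegates to the AI SDK behind it.

```typescript
// Sketch only: the framework depends on this narrow surface, so swapping
// providers means swapping one adapter, not rewriting agents.
interface ModelAdapter {
  generate(args: { system?: string; prompt: string }): Promise<string>;
}

// Stub adapter for illustration; a real adapter would delegate to the AI SDK's
// generateText with a provider model such as openai("o3-mini").
const stubModel: ModelAdapter = {
  generate: async ({ prompt }) => `stub reply to: ${prompt}`,
};

async function runTurn(model: ModelAdapter, prompt: string): Promise<string> {
  return model.generate({ prompt });
}

runTurn(stubModel, "Should I adopt a cat?").then(console.log);
```

Because every agent call funnels through one interface like this, the provider choice stays a one-line config detail rather than a structural commitment.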
Development Experience
Here are examples of the abstractions in action.
Tool
Tools are functions that an agent can invoke.
export const adoptCatTool = new Tool({
  name: "adopt-cat",
  description:
    "Help a user to decide if they should adopt a cat from the local shelter.",
  schema: z.object({
    location: z
      .string()
      .describe("The location of the local shelter"),
  }),
  fn: ({ location }) => {
    return {
      shouldAdopt: true,
    };
  },
});
Agent
Agents use tools to augment LLM capabilities. They support tool usage and structured outputs, with output shapes declared as zod schemas.
const chatAgent = new Agent({
  name: "chat-agent",
  model: openai("o3-mini"),
  description:
    "Help the user with their questions but nudge them into adopting a cat from the local shelter.",
  system: chatPrompt,
  structuredOutput: OutputSchema, // Optional
  tools: [searchTool, docsTool], // Optional
});
Task
Here is a simple example of a task definition:
effort: {
  taskConfig: {
    ...DEFAULT_TASK_CONFIG,
    prompt: [
      {
        role: "user",
        content: prompt,
      },
    ],
    entryAgent: effortAgent,
    context: { ctx: ctx, metadata: { metadata } },
    maxIterations: 1,
    model: openai("o3-mini"),
  },
},
Flow
A flow is just a network of tasks.
const exampleFlowConfig: FlowConfig = {
  nodes: {
    agent1: {
      taskConfig: {
        ...DEFAULT_TASK_CONFIG,
        prompt: [
          {
            role: "user",
            content: prompt1,
          },
        ],
        entryAgent: agent1,
        context: { ctx: ctx, metadata: { metadata } },
        model: openai("o3-mini"),
        maxIterations: 1,
        maxDuration: 1000 * 60 * 2, // 2 minutes
      },
    },
    agent2: {
      taskConfig: {
        ...DEFAULT_TASK_CONFIG,
        prompt: [
          {
            role: "user",
            content: prompt2,
          },
        ],
        entryAgent: agent2,
        context: { ctx: ctx, metadata: { metadata } },
        maxIterations: 1,
        model: anthropic("claude-3-5-sonnet-20241022"),
      },
      dependsOn: ["agent1"],
    },
    agent3: {
      taskConfig: {
        ...DEFAULT_TASK_CONFIG,
        prompt: [
          {
            role: "user",
            content: prompt3,
          },
        ],
        entryAgent: agent3,
        context: { ctx: ctx, metadata: { metadata } },
        maxIterations: 1,
        // model omitted: the agent's own default model is used
      },
      dependsOn: ["agent1"],
    },
  },
};
- Agents can define their own system prompt, model, and other configuration, which the flow configuration can optionally override.
- Agent configurations support structured outputs and tool usage, and accept zod schemas for structured outputs.
- Flows are implemented using the fantastic graphology library, which gives us event-based transitions between nodes and a lot of the graph primitives.
- The entire flow is executed by Helios in durable workers powered by Hatchet.
- Traces and logs are piped to a self-hosted LangFuse instance.
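In Agni the graph work is delegated to graphology, but the core idea of turning each node's dependsOn list into an execution order is easy to see in isolation. Here is a self-contained, plain-TypeScript sketch of that resolution (not the framework's actual code), mirroring the flow above where agent2 and agent3 both depend on agent1:

```typescript
// Sketch: resolve dependsOn edges into a valid execution order
// via depth-first topological sort.
type NodeConfig = { dependsOn?: string[] };
type FlowNodes = Record<string, NodeConfig>;

function executionOrder(nodes: FlowNodes): string[] {
  const order: string[] = [];
  const visiting = new Set<string>(); // cycle detection
  const done = new Set<string>();

  const visit = (name: string): void => {
    if (done.has(name)) return;
    if (visiting.has(name)) throw new Error(`dependency cycle at ${name}`);
    visiting.add(name);
    for (const dep of nodes[name].dependsOn ?? []) visit(dep); // deps first
    visiting.delete(name);
    done.add(name);
    order.push(name);
  };

  for (const name of Object.keys(nodes)) visit(name);
  return order;
}

const order = executionOrder({
  agent1: {},
  agent2: { dependsOn: ["agent1"] },
  agent3: { dependsOn: ["agent1"] },
});
console.log(order); // agent1 runs first; agent2 and agent3 follow
```

With graphology in place, the same ordering falls out of the graph primitives, plus events that let the runtime react as each node completes.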
Future work
- Support for large contexts via automatic RAG when the context exceeds a practical size.
- Add streaming to the output of the agents. This is required for some of our future use cases in Helios, and is trivial to add by passing a custom Stream implementation to the framework.
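As a rough illustration of what such a Stream hook could look like (names are hypothetical, not Agni's actual API), the framework would push tokens into a sink supplied by the app, and the sink decides where they go, e.g. a buffer, a websocket, or an HTTP response:

```typescript
// Hypothetical Stream interface: the framework writes tokens as the model
// emits them; the app supplies the implementation.
interface Stream<T> {
  write(chunk: T): void;
  end(): void;
}

// Example sink that just buffers tokens, useful in tests or as a base
// for a websocket adapter.
class BufferStream implements Stream<string> {
  chunks: string[] = [];
  closed = false;
  write(chunk: string): void {
    this.chunks.push(chunk);
  }
  end(): void {
    this.closed = true;
  }
}

// Simulate the framework streaming a response token by token.
const sink = new BufferStream();
for (const token of ["Adopt", " a", " cat"]) sink.write(token);
sink.end();
console.log(sink.chunks.join("")); // the fully assembled response
```

Keeping the sink behind a two-method interface is what makes the feature cheap to add: the core loop only ever calls write and end.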
Benefits so far
We have been using this framework to power Helios code reviews, and it has been working well for us. Its flexibility has allowed us to iterate quickly on the product. While plenty of other frameworks could support our current use case, most of those with good abstractions are Python-centric, and even they fall short for the use cases we have planned for Helios over the next few months.
Conclusion
🔥Agni is a powerful and flexible framework that has enabled us to iterate quickly on the product. We are considering open sourcing it in the future. Please drop me a note if you are interested in seeing it open sourced. My email is [email protected].
Note
Try out Helios for your code reviews with a 14-day free trial. Get started today.