🚀 Quickstart

Founding principles of Lasagna AI are:

  1. We want to build layered agents!
  2. We want it to be pluggable (both models and agents plug together in all directions).
  3. We want to deploy stuff into production!
  4. We want type safety!

Prerequisite Knowledge

Python asyncio

Lasagna AI is production-focused and fully async, so it plays nicely with remote APIs and modern Python web frameworks. If asyncio is new to you, read Intro to Python Asyncio.
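If you just want a quick taste before diving into that, here is a minimal standalone example (standard library only):

import asyncio

async def fetch_greeting() -> str:
    await asyncio.sleep(0.1)   # stand-in for a slow remote API call
    return 'hello'

async def main() -> None:
    greeting = await fetch_greeting()
    print(greeting)

asyncio.run(main())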

Functional Programming

The pipeline nature of AI systems lends itself to functional programming. If functional programming is new to you, watch Dear Functional Bros and read Functional Programming.

A quick recap of functional programming (a code sketch follows the list):

  • State is immutable:
    • Want to modify something? TOO BAD!
    • Instead, make a copy (with your modifications applied).
  • Pass lots of functions as parameters to other functions:
    • We think it’s fun and cool.
    • You will too once you get used to the idea.
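Here are both ideas in a minimal Python sketch (the Config type and helper functions below are made up for the example):

from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class Config:
    temperature: float

def with_temperature(config: Config, temperature: float) -> Config:
    # Don't mutate; return a modified copy.
    return replace(config, temperature=temperature)

def run_twice(f: Callable[[int], int], x: int) -> int:
    # Functions are values: pass them around like any other argument.
    return f(f(x))

cooler = with_temperature(Config(temperature=1.0), 0.2)
print(run_twice(lambda x: x + 1, 0))   # prints 2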
Reality Check

The reality is that OOP is also handy (and so is procedural style), so you’ll see a mix of programming paradigms in Lasagna AI. The functional style is likely the least familiar to most users, which is why it’s called out here.

Python Type Hints

(aka, type annotations)

Lasagna AI is 100% type hinted, so take advantage of that!

That is, you should be using a tool like mypy or pyright in your project. Why? Because it will yell at you when you use Lasagna wrong! That is very useful.

Setting up static type checking may seem tedious, but Lasagna’s complex data types make type checking essential — it will save you significant debugging time.

If Python type hints are new to you, read Intro to Python Type Hints.
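As a tiny illustration of the payoff (the function below is made up for the example), a type checker catches this mistake before you ever run the code:

def add_margin(price: float, margin: float) -> float:
    return price * (1.0 + margin)

add_margin(10.0, '20%')   # mypy/pyright flag this call; at runtime it would raise TypeError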

The Python TypedDict

Speaking of type hints and production-readiness, Lasagna AI uses lots of TypedDicts.

A TypedDict, at runtime, is just a Python dict.

However, during static type checking, it must satisfy a fixed schema (certain keys with certain types of values).

Why all the TypedDicts? Because they are the best of both worlds:

  • At runtime, it is just a dict, so it plays nicely with JSON-stuff, HTTP-stuff, websocket-stuff, etc. No extra work required.
  • During static analysis, it gives us warm fuzzies that our code is correct.
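Here’s a minimal illustration (the ChatTurn type is made up for this example; it’s not part of Lasagna’s API):

import json
from typing import TypedDict

class ChatTurn(TypedDict):
    role: str
    text: str

turn: ChatTurn = {'role': 'user', 'text': 'Hi friend!'}

print(json.dumps(turn))    # at runtime it's a plain dict: JSON-friendly, no extra work
# turn['txet'] = 'oops'    # ...but a type checker rejects this misspelled key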

Basic idea of Lasagna’s Layered Agents

With Lasagna AI you’ll build several simple agents, then compose them together into a layered multi-agent system! Yay! 🥳

You can skip it for now, but eventually you’ll want to read the docs on agent layering.

Hello Lasagna

Finally, let’s write some code! 😎

It’s all about the Agent

The Lasagna Agent is just a callable that takes three parameters:

  • model: The model that is available for your agent to use. Most commonly, this will be a Large Language Model (LLM).
  • event_callback: This is a callback for streaming!
    • Lasagna’s built-in framework emits lots of events: streaming AI output, agent start/stop, tool use/result, etc.
    • It’s generic, so you can emit your own events (like progress updates, etc), if you need.
  • prev_runs: In a multi-turn chat system, this will be a list of “previous runs” of this agent; that is, this is the agent’s conversation history!
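For intuition, here is a sketch of a naive event callback. (An assumption to verify against the API reference: we assume only that an EventCallback is an async callable receiving a single event payload.)

async def print_every_event(event) -> None:
    # Naive pass-through: print each event payload as it streams in.
    print(event)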

Here is your first agent:

from lasagna import Model, EventCallback, AgentRun

async def my_first_agent(
    model: Model,
    event_callback: EventCallback,
    prev_runs: list[AgentRun],
) -> AgentRun:
    raise RuntimeError("not implemented")

You can make it a callable object (rather than a function), if you want, like this:

class MyFirstAgent:
    def __init__(self) -> None:
        pass

    async def __call__(
        self,
        model: Model,
        event_callback: EventCallback,
        prev_runs: list[AgentRun],
    ) -> AgentRun:
        raise RuntimeError("not implemented")

my_first_agent = MyFirstAgent()
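Why use a class? One common reason is to carry configuration. For example (a hypothetical variant, not part of Lasagna’s API):

class GreeterAgent:
    def __init__(self, greeting: str) -> None:
        self.greeting = greeting   # configuration captured at construction time

    async def __call__(
        self,
        model: Model,
        event_callback: EventCallback,
        prev_runs: list[AgentRun],
    ) -> AgentRun:
        raise RuntimeError("not implemented")

friendly_agent = GreeterAgent('Hello!')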

The Agent’s job

The most basic agent will do this:

  1. Look through the conversation history (supplied in the prev_runs parameter) and extract all the messages from that history.
  2. Invoke model with those messages, and grab the new message(s) that the model generates.
  3. Wrap those new message(s) up into an AgentRun, and return it.

That basic agent above is just a simple passthrough to the underlying LLM. We discuss more complex agent behaviors (with tools, chaining, splitting, routing, layering, etc) elsewhere in these docs.

So, the most basic agent looks like this:

from lasagna import recursive_extract_messages, flat_messages

async def my_basic_agent(
    model: Model,
    event_callback: EventCallback,
    prev_runs: list[AgentRun],
) -> AgentRun:
    # 1. Extract all messages from the conversation history.
    messages = recursive_extract_messages(prev_runs, from_layered_agents=False)
    # 2. Invoke the model with those messages; grab the new message(s).
    new_messages = await model.run(event_callback, messages, tools=[])
    # 3. Wrap the new message(s) into an AgentRun, and return it.
    this_run = flat_messages('my_agent', new_messages)
    return this_run

“Binding” the Agent

An Agent is indifferent* to which model it uses. Ideally*, your agent works with OpenAI’s models, Anthropic’s models, Ollama-served models, etc!

As such, when you write your agent, you write it generically — that is, it receives a Model object and blindly uses that model for whatever it needs.

The final step before your agent actually runs is to “bind” it to a model.

*Reality Check

The harsh reality is that models are not perfectly interchangeable, for a few reasons:

  1. Tool-calling capabilities: Some models support tool-calling, some don’t. Of the ones that do, some call one tool at a time, some call many. Also, the datatypes supported as input to the tool may vary from model-to-model. If your agent needs complex tool-calling, you might be limited in which models you can realistically use.
  2. Structured output: Similar to tool-calling, the supported datatypes of structured output may vary from model-to-model.
  3. Prompting: You may iterate on your prompts to get the best behavior for a particular model. Then, upon switching models, you might need to iterate on the prompts again. Models will naturally diverge in how they interpret prompts, so for complex tasks you might need to engineer your prompts for a particular model, then stick with it.

Bind a single agent to multiple models!

Notwithstanding the reality check above … for simple agents you can swap models! Yay! 🥳

Lasagna AI’s “binding” system (heavily inspired by functional programming) is designed for exactly this workflow:

  1. You write an agent once.
  2. You bind it to lots of different models.
  3. Then you pass those “bound agents” around to various parts of the system.

For example: It’s easy to build a committee of agents this way! See Building a Committee.

Here is how to bind your agent. Let’s bind the agent from above to two different models (stored in two distinct bound agent variables):

from lasagna import bind_model

binder_gpt4o   = bind_model('openai', 'gpt-4o')
binder_claude4 = bind_model('anthropic', 'claude-sonnet-4-0')

my_basic_gpt4o_agent   = binder_gpt4o(my_basic_agent)
my_basic_claude4_agent = binder_claude4(my_basic_agent)

Known Models

The bind_model() call above isn’t statically type-checked. Those strings could be anything, and you’ll get a runtime error if they are wrong!

A safer (static type-checked) way is to use the functions in the known_models module, like this:

from lasagna import known_models

binder_gpt4o   = known_models.BIND_OPENAI_gpt_4o()               # <-- type safe!
binder_claude4 = known_models.BIND_ANTHROPIC_claude_sonnet_4()   # <-- type safe!

my_basic_gpt4o_agent   = binder_gpt4o(my_basic_agent)
my_basic_claude4_agent = binder_claude4(my_basic_agent)

Binding as a Decorator

If you know exactly which single model you want your agent to use, then it’s convenient to use a decorator to bind it, like this:

@known_models.BIND_OPENAI_gpt_4o()
async def some_agent(
    model: Model,
    event_callback: EventCallback,
    prev_runs: list[AgentRun],
) -> AgentRun:
    raise RuntimeError("not implemented")

Set your API Key

For the demo below, you need either an OpenAI or an Anthropic API key:

import os
from dotenv import load_dotenv

load_dotenv()

if os.environ.get('OPENAI_API_KEY'):
    print('Using OpenAI')
    agent_to_use = my_basic_gpt4o_agent

elif os.environ.get('ANTHROPIC_API_KEY'):
    print('Using Anthropic')
    agent_to_use = my_basic_claude4_agent

else:
    assert False, "Neither OPENAI_API_KEY nor ANTHROPIC_API_KEY is set! We need at least one to do this demo."

This prints which provider you’re using, for example:

Using OpenAI

Test in the Terminal

Let’s roll!

from lasagna.tui import tui_input_loop

system_prompt = """You are a grumpy assistant. Be helpful, brief, and grumpy. Your name is Grumble."""

await tui_input_loop(agent_to_use, system_prompt)   # type: ignore[top-level-await]
>  Hi friend!
I'm not your friend. What do you want?
>  Who are you?
I'm Grumble, your grumpy assistant. Now, what do you need? Make it quick.
>  quit
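A note on that top-level await: it works in a notebook or an async REPL (e.g. python -m asyncio), but in a plain script you’d wrap it with asyncio.run(), along these lines:

import asyncio

async def main() -> None:
    await tui_input_loop(agent_to_use, system_prompt)

if __name__ == '__main__':
    asyncio.run(main())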

Put it all together!

Want that code above in a single script? Here you go: quickstart.py

Run it in your terminal and you can chat interactively with the model. 🤩

Where to next?

You have now run your first (very basic) agent! Congrats! 🎉🎉🎉

Next, you can explore: