A comprehensive coding guide to designing advanced round-robin multi-agent workflows with Microsoft AutoGen

In this tutorial, we demonstrate how Microsoft's AutoGen framework enables developers to orchestrate complex, multi-agent workflows with minimal code. By leveraging AutoGen's RoundRobinGroupChat and TeamTool abstractions, you can seamlessly assemble specialist assistants, such as a Researcher, FactChecker, Critic, Summarizer, and Editor, into a cohesive "DeepDive" tool. AutoGen handles turn-taking, termination conditions, and streaming output, letting you focus on defining each agent's expertise and system prompts rather than wiring together callbacks or manual prompt chains. Whether you are conducting in-depth research, validating facts, refining prose, or integrating third-party tools, AutoGen provides a unified API that scales from simple two-agent pipelines to elaborate five-agent collaborations.

!pip install -q autogen-agentchat[gemini] autogen-ext[openai] nest_asyncio

We install the autogen-agentchat package with Gemini support, the autogen-ext OpenAI extension for API compatibility, and the nest_asyncio library to patch the notebook's event loop, ensuring you have all the components needed to run asynchronous, multi-agent workflows in Colab.

import os, nest_asyncio
from getpass import getpass


nest_asyncio.apply()
os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")

We import and apply nest_asyncio to enable nested event loops in notebook environments, then securely prompt for your Gemini API key using getpass and store it in an environment variable.

from autogen_ext.models.openai import OpenAIChatCompletionClient


model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",    
    api_key=os.environ["GEMINI_API_KEY"],
    api_type="google",
)

We initialize an OpenAI-compatible chat client pointed at Google's Gemini by specifying the gemini-1.5-flash-8b model, injecting your saved Gemini API key, and setting api_type="google", giving you a ready-to-use model_client for the downstream AutoGen agents.
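
As a quick, optional sanity check, you can send a single message through the client before wiring it into any agents. This is a minimal sketch that assumes autogen-core's UserMessage/create API and an async context such as a notebook cell; don't close the client here, since the agents defined below reuse it.

from autogen_core.models import UserMessage

# Optional smoke test: one round-trip through the client to confirm the key
# and model name are valid. Do not close the client yet; the agents reuse it.
reply = await model_client.create([UserMessage(content="Say hello in one word.", source="user")])
print(reply.content)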

from autogen_agentchat.agents import AssistantAgent


researcher   = AssistantAgent(name="Researcher", system_message="Gather and summarize factual info.", model_client=model_client)
factchecker  = AssistantAgent(name="FactChecker", system_message="Verify facts and cite sources.",       model_client=model_client)
critic       = AssistantAgent(name="Critic",    system_message="Critique clarity and logic.",         model_client=model_client)
summarizer   = AssistantAgent(name="Summarizer",system_message="Condense into a brief executive summary.", model_client=model_client)
editor       = AssistantAgent(name="Editor",    system_message="Polish language and signal APPROVED when done.", model_client=model_client)

We define five specialized assistant agents, Researcher, FactChecker, Critic, Summarizer, and Editor, each initialized with a role-specific system message and the shared Gemini-backed model client, enabling them to gather information, verify accuracy, critique clarity, condense summaries, and polish language within the AutoGen workflow.
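
If you want to exercise a single agent in isolation before assembling the team, a minimal sketch (assuming an async context such as a notebook cell; the task string is just an illustrative example) looks like this:

# Optional: probe one agent on its own before building the team.
probe = await researcher.run(task="List three recent applications of multi-agent LLM systems.")
print(probe.messages[-1].content)   # the Researcher's reply is the last message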

from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination


max_msgs = MaxMessageTermination(max_messages=20)
text_term = TextMentionTermination(text="APPROVED", sources=["Editor"])
termination = max_msgs | text_term                                    
team = RoundRobinGroupChat(
    participants=[researcher, factchecker, critic, summarizer, editor],
    termination_condition=termination
)

We import the RoundRobinGroupChat class along with two termination conditions, then compose a stop rule that fires after 20 total messages or when the Editor agent mentions "APPROVED". Finally, we instantiate a round-robin team of the five specialized agents with the combined termination logic, enabling them to cycle through research, fact-checking, critique, summarization, and editing until one of the stop conditions is met.
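
To sanity-check the round-robin flow before wrapping the team in a tool, you can also run the team directly. A minimal sketch, assuming an async context such as a notebook cell and a throwaway task of your choosing:

# Optional: run the team directly and inspect each agent's turn.
result = await team.run(task="Give a two-sentence overview of retrieval-augmented generation.")
for message in result.messages:
    print(f"[{message.source}] {message.content}")
await team.reset()   # clear state so the TeamTool below starts fresh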

from autogen_agentchat.tools import TeamTool


deepdive_tool = TeamTool(team=team, name="DeepDive", description="Collaborative multi-agent deep dive")

We wrap our RoundRobinGroupChat team in a TeamTool named "DeepDive" with a human-readable description, effectively packaging the entire multi-agent workflow as a single callable tool that other agents can invoke seamlessly.

host = AssistantAgent(
    name="Host",
    model_client=model_client,
    tools=[deepdive_tool],
    system_message="You have access to a DeepDive tool for in-depth research."
)

We create a Host assistant agent configured with the shared Gemini-backed model_client, give it the DeepDive team tool for orchestrating in-depth research, and prime it with a system message informing it of its ability to invoke the multi-agent DeepDive workflow.

import asyncio


async def run_deepdive(topic: str):
    result = await host.run(task=f"Deep dive on: {topic}")
    print("🔍 DeepDive result:\n", result)
    await model_client.close()


topic = "Impacts of Model Context Protocl on Agentic AI"
loop = asyncio.get_event_loop()
loop.run_until_complete(run_deepdive(topic))

Finally, we define an asynchronous run_deepdive function that asks the Host agent to run the DeepDive team tool on a given topic, prints the full result, and then closes the model client; we then grab Colab's existing asyncio loop and run the coroutine to completion for a hassle-free, synchronous-style execution.
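
If you later move this code out of Colab into a plain Python script, there is no pre-existing event loop to patch, so the same coroutine can simply be run with asyncio.run; a minimal equivalent entry point:

# Script-style entry point (no nest_asyncio needed outside a notebook).
if __name__ == "__main__":
    asyncio.run(run_deepdive("Impacts of Model Context Protocol on Agentic AI"))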

In conclusion, integrating Google Gemini via AutoGen's OpenAI-compatible client and wrapping our multi-agent team as a callable TeamTool gives us a powerful template for building highly modular and reusable workflows. AutoGen abstracts away event-loop management (with nest_asyncio), streaming responses, and termination logic, enabling us to iterate quickly on agent roles and overall orchestration. This advanced pattern streamlines the development of collaborative AI systems and lays the groundwork for extending to retrieval pipelines, dynamic selectors, or conditional execution strategies.
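
For instance, if you prefer to watch the agents' turn-by-turn exchange rather than only the final aggregated result, AutoGen's streaming interface can print each message as it arrives. A minimal sketch, assuming the team object from above, an async context, and that the model client has not yet been closed:

from autogen_agentchat.ui import Console

# Stream the team's conversation message-by-message instead of waiting
# for the final aggregated result.
await Console(team.run_stream(task="Deep dive on: Impacts of Model Context Protocol on Agentic AI"))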


Check out the Notebook here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter, and don't forget to join our 95k+ ML SubReddit and subscribe to our Newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. His latest endeavor is the launch of an artificial intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
