A Coding Guide to Anemoi-Style Semi-Centralized Agentic Systems Using Peer-to-Peer Critic Loops in LangGraph
In this tutorial, we demonstrate how a semi-centralized Anemoi-style multi-agent system works by letting two peer agents negotiate directly without a manager or supervisor. We show how a Drafter and a Critic iteratively refine an output through peer-to-peer feedback, reducing coordination overhead while preserving quality. We implement this pattern end-to-end in Colab using LangGraph, focusing on clarity, control flow, and practical execution rather than abstract orchestration theory. Check out the FULL CODES here.
import os
import json
from getpass import getpass
from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")

MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
llm = ChatOpenAI(model=MODEL, temperature=0.2)
We set up the Colab environment by installing the required LangGraph and LangChain packages and securely collecting the OpenAI API key as a hidden input. We initialize the language model that will be shared by all agents, keeping the configuration minimal and reproducible. Check out the FULL CODES here.
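Before running the imports above, the notebook needs the packages installed; a minimal install cell of the kind this setup assumes (package names only, versions left unpinned) looks like this:
# Minimal Colab install cell (assumed; pin versions for reproducibility)
!pip install -q langgraph langchain-openai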
class AnemoiState(TypedDict):
    task: str
    max_rounds: int
    round: int
    draft: str
    critique: str
    agreed: bool
    final: str
    trace: bool
We define a typed state that acts as the shared communication surface between agents during negotiation. We explicitly track the task, draft, critique, agreement flag, and iteration count to keep the flow transparent and debuggable. This state obviates the need for a central manager or for implicit memory. Check out the FULL CODES here.
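As a side note, LangGraph merges whatever dictionary a node returns into this shared state, so a node may also return only the keys it changes instead of spreading the full state; the hypothetical bump_round node below (not part of the tutorial graph) sketches that partial-update style:
# Hypothetical illustration only: LangGraph merges the returned keys into
# the existing AnemoiState, so untouched fields are carried forward.
def bump_round(state: AnemoiState) -> dict:
    return {"round": state.get("round", 0) + 1}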
DRAFTER_SYSTEM = """You write a high-quality solution to the user's task.
If you receive critique, you revise decisively and incorporate it.
Return only the improved draft text."""
def drafter_node(state: AnemoiState) -> AnemoiState:
    task = state["task"]
    critique = state.get("critique", "").strip()
    r = state.get("round", 0) + 1
    if critique:
        user_msg = f"""TASK:
{task}
CRITIQUE:
{critique}
Revise the draft."""
    else:
        user_msg = f"""TASK:
{task}
Write the first draft."""
    draft = llm.invoke(
        [
            {"role": "system", "content": DRAFTER_SYSTEM},
            {"role": "user", "content": user_msg},
        ]
    ).content.strip()
    if state.get("trace", False):
        print(f"\n— Drafter Round {r} —\n{draft}\n")
    return {**state, "round": r, "draft": draft, "agreed": False}
We implement the Drafter agent, which produces the initial response and revises it whenever peer feedback is available. We keep the Drafter focused purely on improving the user-facing draft, without awareness of control logic or termination conditions. It mirrors the Anemoi idea of agents optimizing locally while observing peer signals. Check out the FULL CODES here.
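If you want to sanity-check the Drafter before wiring the graph, you can call the node directly as a plain function; the optional smoke test below spends one model call and assumes the API key from the setup cell is already configured:
# Optional smoke test: run the Drafter once outside the graph (one model call).
test_state: AnemoiState = {
    "task": "Summarize what a peer critic loop is in two sentences.",
    "max_rounds": 1,
    "round": 0,
    "draft": "",
    "critique": "",
    "agreed": False,
    "final": "",
    "trace": True,  # print the draft as it is produced
}
_ = drafter_node(test_state)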
CRITIC_SYSTEM = """You are the Critic. You evaluate the draft against the task and decide whether it is ready to ship or needs revision.
Return strict JSON:
{"agree": true/false, "critique": "…"}"""
def critic_node(state: AnemoiState) -> AnemoiState:
    task = state["task"]
    draft = state.get("draft", "")
    raw = llm.invoke(
        [
            {"role": "system", "content": CRITIC_SYSTEM},
            {
                "role": "user",
                "content": f"TASK:\n{task}\n\nDRAFT:\n{draft}",
            },
        ]
    ).content.strip()
    cleaned = raw.strip("`").replace("json", "").strip()
    try:
        data = json.loads(cleaned)
        agree = bool(data.get("agree", False))
        critique = str(data.get("critique", "")).strip()
    except Exception:
        agree = False
        critique = raw
    if state.get("trace", False):
        print(f"— Critic Decision —\nAGREE: {agree}\n{critique}\n")
    final = draft if agree else state.get("final", "")
    return {**state, "agreed": agree, "critique": critique, "final": final}
We implement the Critic agent, which evaluates the draft and decides whether it is ready to ship or needs revision. We enforce a strict agree-or-revise decision to avoid vague feedback and ensure fast convergence. This peer evaluation step allows quality control without introducing a supervisory agent. Check out the FULL CODES here.
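Because the Critic's JSON sometimes comes back wrapped in a Markdown fence, it helps to see the cleaning-and-parsing step in isolation; the snippet below replays that exact logic on a hard-coded sample response, with no model call involved:
import json

# Replay the Critic's parsing logic on a fenced sample response (no LLM call).
sample_raw = '```json\n{"agree": false, "critique": "Add a concrete example."}\n```'
cleaned = sample_raw.strip("`").replace("json", "").strip()
data = json.loads(cleaned)
print(data["agree"], "-", data["critique"])  # prints: False - Add a concrete example.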
def continue_or_end(state: AnemoiState) -> str:
    if state.get("agreed", False):
        return "end"
    if state.get("round", 0) >= state.get("max_rounds", 3):
        return "force_ship"
    return "loop"
def force_ship_node(state: AnemoiState) -> AnemoiState:
    return {**state, "final": state.get("final") or state.get("draft", "")}

graph = StateGraph(AnemoiState)
graph.add_node("drafter", drafter_node)
graph.add_node("critic", critic_node)
graph.add_node("force_ship", force_ship_node)

graph.set_entry_point("drafter")
graph.add_edge("drafter", "critic")
graph.add_conditional_edges(
    "critic",
    continue_or_end,
    {"loop": "drafter", "force_ship": "force_ship", "end": END},
)
graph.add_edge("force_ship", END)

anemoi_critic_loop = graph.compile()
demo_task = """Explain the Anemoi semi-centralized agent pattern and why peer-to-peer critic loops reduce bottlenecks."""

result = anemoi_critic_loop.invoke(
    {
        "task": demo_task,
        "max_rounds": 3,
        "round": 0,
        "draft": "",
        "critique": "",
        "agreed": False,
        "final": "",
        "trace": False,
    }
)

print("\n====================")
print("✅ FINAL OUTPUT")
print("====================\n")
print(result["final"])
We assemble the LangGraph workflow that routes control between the Drafter and Critic until the Critic agrees or the maximum number of rounds is exhausted. We rely on simple conditional routing rather than centralized planning, thereby preserving the system's semi-centralized nature. Finally, we execute the graph and return the best available output to the user.
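To watch the negotiation unfold round by round, you can rerun the compiled graph with trace enabled and, if you like, a tighter round cap; this is simply a variation on the invocation above:
# Rerun the compiled loop with tracing on to print each draft and critic verdict.
traced = anemoi_critic_loop.invoke(
    {
        "task": "Explain why a round cap is needed in a peer critic loop.",
        "max_rounds": 2,   # tighter cap keeps the traced run short
        "round": 0,
        "draft": "",
        "critique": "",
        "agreed": False,
        "final": "",
        "trace": True,     # drafter and critic nodes print their outputs
    }
)
print(traced["final"])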
In conclusion, we demonstrated that Anemoi-style peer negotiation is a practical alternative to manager-worker architectures, offering lower latency, reduced context bloat, and simpler agent coordination. By allowing agents to monitor and correct each other directly, we achieved convergence with fewer tokens and less orchestration complexity. In this tutorial, we provided a reusable blueprint for building scalable, semi-centralized agent systems. It lays the foundation for extending the pattern to multi-peer meshes, red-team loops, or protocol-based agent interoperability.
Check out the FULL CODES here.