autogen-scaling makes your existing Microsoft AutoGen applications scale transparently across a CantorAI distributed cluster — no code changes required.
- **Zero Code Change**: Run your AutoGen apps as-is. Simply switch to the CantorAI runtime and your agents automatically scale to thousands of sessions.
- **Massive Distributed Scaling**: CantorAI handles agent distribution across processes and cluster nodes, using its DataFrame bus and scheduler.
- **Multi-Session Support**: Seamlessly run and manage large numbers of concurrent conversations and workflows.
- **Cluster Aware**: Agents can be scheduled on any node in a CantorAI cluster. Failover and resource balancing are automatic.
- **Unified Observability**: Monitor all your AutoGen sessions from the CantorAI dashboard with built-in metrics (CPU, GPU, memory, custom app stats).
Install the package:

```shell
pip install cantorai-autogen
```

Then swap in the runtime:

```python
# your existing AutoGen app
from autogen import AssistantAgent, UserProxyAgent

# just swap in CantorRuntime (no other changes)
from cantor_runtime import CantorRuntime

runtime = CantorRuntime()
assistant = AssistantAgent("assistant", runtime=runtime)
user = UserProxyAgent("user", runtime=runtime)

user.initiate_chat(assistant, message="Scale me out!")
```

That's it: CantorAI will distribute sessions across the cluster.
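Driving many sessions is the same pattern fanned out. A minimal sketch of that fan-out using only the standard library; here `run_session` is a hypothetical stand-in for the per-session agent code (creating agents with `runtime=CantorRuntime()` and calling `user.initiate_chat`), so the pattern is runnable anywhere:

```python
from concurrent.futures import ThreadPoolExecutor

def run_session(session_id: int) -> str:
    # In a real app this body would build the agents and call
    # user.initiate_chat(assistant, message=...); this stand-in just
    # returns a marker so the fan-out itself can be demonstrated.
    return f"session-{session_id}: done"

# Fan out many concurrent sessions; under the CantorAI runtime, each
# session can be scheduled on any node in the cluster.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_session, range(1000)))

print(len(results))  # prints 1000
```

The same structure applies whether the sessions run in one process or across a cluster; only the runtime behind the agents changes.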
This repo includes end-to-end examples:
- 01_local_single_agent – Run on your laptop
- 02_cluster_scaling – Run across a Cantor cluster
- 03_massive_sessions – Simulate thousands of concurrent chats
Each example works with the same AutoGen code — only the runtime changes.
Clone and install in editable mode:
```shell
git clone https://github.com/CantorAI/autogen-scaling.git
cd autogen-scaling
pip install -e .[dev]
```

Run the tests:

```shell
pytest -q
```

Code in this repo is licensed under the Apache License 2.0. The Cantor core binaries are subject to the CantorAI EULA.
We welcome PRs for:
- new examples
- bug fixes
- docs improvements
See CONTRIBUTING.md for details.