Kondoo is not just a chatbot; it is a framework for building autonomous digital minds. Its name is inspired by the word “condominium,” a system of independent dwellings that share the same structure. Similarly, Kondoo allows multiple bots to operate independently, each with its own personality and knowledge base, but sharing the same robust, containerized framework.
This project was born with a “self-hosted first” philosophy, giving you complete control over your data and the models you use, from a local tinyllama to cloud APIs such as Gemini.
Kondoo: Your knowledge, your rules, your assistants.
- **Framework Agnostic**: Not tied to a specific provider. Use `ANSWER_LLM_PROVIDER` to choose your answer engine (Gemini, OpenAI, Ollama) and `KNOWLEDGE_PROVIDER` for your embeddings (Ollama, local, OpenAI).
- **Containerized by Design**: Built on Podman and `compose`, ensuring maximum portability and clean, repeatable deployments.
- **Self-Hosted First**: Designed to run 100% locally, using Ollama for both embeddings and response generation, giving you full control and privacy.
- **Flexible**: Easily configure each bot's personality through a simple `personality.txt` file.
- **Extensible**: The `src/` structure makes it an installable Python package, ready to be imported into larger projects.
Kondoo is structured as a Python framework, separating reusable code from implementation examples:
- `src/kondoo/`: The source code for the `kondoo` framework (installable via `pip`).
- `example/example_bot/`: A complete and functional example bot that shows how to use the framework. This is your starting point.
- `pyproject.toml`: Defines the project and all its dependencies.
- `.env.example`: A universal template with all available environment variables.
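At a glance, that layout looks like this (a sketch showing only the paths named above):

```text
kondoo/
├── src/
│   └── kondoo/          # framework source code
├── example/
│   └── example_bot/     # complete, functional sample bot
├── pyproject.toml       # project definition and dependencies
└── .env.example         # universal template of environment variables
```

Because `pyproject.toml` defines the package, running `pip install .` from the repository root installs the framework into your own environment.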
Try Kondoo in 5 minutes using the sample bot.
- Podman and `podman-compose`.
- Python 3.9+.
- Your own Ollama service (local or remote) or an API key (e.g., Google Gemini).
- SynapsIA, to create the knowledge base.
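Before you start, you can sanity-check the toolchain; the model pulls below match the models used later in this guide and assume a standard Ollama install:

```bash
# Verify the container toolchain and Python are available
podman --version
podman-compose --version
python3 --version

# For a fully local run, pull the models referenced in this guide
ollama pull tinyllama
ollama pull mxbai-embed-large
```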
Clone the repository:

```bash
git clone https://github.com/sysadminctl-services/kondoo.git
cd kondoo
```

Navigate to the example directory:

```bash
cd example/example_bot
```

Create your personal configuration file from the root template:

```bash
cp ../../.env.example .env
```

Edit the `.env` file and fill in the variables. For a 100% local test with Ollama:
```env
# example/example_bot/.env
ANSWER_LLM_PROVIDER=ollama_compatible
KNOWLEDGE_PROVIDER=ollama
LLM_MODEL_NAME="tinyllama"
LLM_BASE_URL="http://host.containers.internal:11434/v1"
LLM_API_KEY="ollama"
EMBEDDING_MODEL_NAME="mxbai-embed-large"
OLLAMA_BASE_URL="http://host.containers.internal:11434"
```

Create the directories for the documents and the knowledge base:
```bash
mkdir docs
mkdir knowledge
echo "Kondoo is a RAG chatbot framework." > docs/info.txt
```

Use SynapsIA to process your documents:
```bash
python synapsia.py --docs ../Kondoo/example/example_bot/docs/ --knowledge ../Kondoo/example/example_bot/knowledge/
```

Return to the bot directory and run `podman-compose`:
```bash
# While in example/example_bot/
podman-compose up --build
```

Open a new terminal and send a query using `curl`:
```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Kondoo?"}' \
  http://localhost:5000/query
```

You should receive a JSON response generated by your local `tinyllama`.
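If you prefer to call the bot from Python, here is a minimal client sketch; it assumes nothing beyond the endpoint and JSON body shown in the `curl` example above:

```python
import requests

# POST the same JSON body as the curl example to the bot's query endpoint.
response = requests.post(
    "http://localhost:5000/query",
    json={"query": "What is Kondoo?"},
    timeout=60,
)
response.raise_for_status()
print(response.json())  # the exact response shape is defined by the bot
```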
All configuration variables are documented in the `.env.example` file. Variables are loaded from `.env` in your bot's directory (e.g., `example/example_bot/.env`).
These variables act as "switches" to choose which services to use.
`ANSWER_LLM_PROVIDER`: Choose your response (LLM) engine.

- `gemini`: (Cloud) Google Gemini (requires `LLM_API_KEY`).
- `openai`: (Cloud) OpenAI (requires `LLM_API_KEY`).
- `ollama_compatible`: (Self-Hosted) Any OpenAI-compatible API, like Ollama (requires `LLM_BASE_URL` and `LLM_MODEL_NAME`).
`KNOWLEDGE_PROVIDER`: Choose your embeddings (knowledge) engine.

- `ollama`: (Self-Hosted) Use an Ollama service (requires `OLLAMA_BASE_URL` and `EMBEDDING_MODEL_NAME`).
- `local`: (Local) Use a HuggingFace model on the CPU/GPU (requires `EMBEDDING_MODEL_NAME`).
- `openai`: (Cloud) Use OpenAI's embeddings API (requires `LLM_API_KEY`).
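As an illustration of mixing these switches, the following sketch answers with Gemini in the cloud while keeping embeddings on a local Ollama service; the API key is a placeholder, and the model names are simply the examples used elsewhere in this document:

```env
# .env — hypothetical mixed setup: cloud answers, self-hosted embeddings
ANSWER_LLM_PROVIDER=gemini
LLM_API_KEY="your-google-api-key"          # placeholder
LLM_MODEL_NAME="models/gemini-1.5-flash"

KNOWLEDGE_PROVIDER=ollama
OLLAMA_BASE_URL="http://host.containers.internal:11434"
EMBEDDING_MODEL_NAME="mxbai-embed-large"
```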
These are the "control knobs" required by the providers you selected above.
`LLM_API_KEY`:

- Required by: `gemini`, `openai`.
- Description: Your secret API key for the chosen cloud service.

`LLM_MODEL_NAME`:

- Required by: `gemini`, `openai`, `ollama_compatible`.
- Description: The specific model name to use for generating answers.
- Examples: `models/gemini-1.5-flash`, `gpt-4o`, `tinyllama`.

`LLM_BASE_URL`:

- Required by: `ollama_compatible`.
- Description: The full base URL of your self-hosted LLM's OpenAI-compatible API.
- Example (Ollama): `http://host.containers.internal:11434/v1`

`EMBEDDING_MODEL_NAME`:

- Required by: `ollama`, `local`, `openai`.
- Description: The specific model name to use for embeddings.
- Examples: `mxbai-embed-large`, `nomic-embed-text`.

`OLLAMA_BASE_URL`:

- Required by: `ollama` (provider).
- Description: The base URL of your Ollama service (the non-`/v1` endpoint).
- Example: `http://host.containers.internal:11434`
These variables control the bot's identity and data paths.
`BOT_PERSONALITY_FILE`:

- Description: The path inside the container to the text file that defines the bot's personality.
- Default: `/app/personality.txt` (as set by the `Containerfile`).

`KNOWLEDGE_DIR`:

- Description: The path inside the container where the bot will load its knowledge base from.
- Default: `/app/knowledge` (as set by the `compose.yaml` volume).
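For orientation, the wiring described above corresponds roughly to the following `compose.yaml` sketch; the service name and port mapping are assumptions (the port matches the `curl` example earlier), and the actual file shipped in `example/example_bot/` is authoritative:

```yaml
# compose.yaml (sketch) — hypothetical service wiring for the example bot
services:
  example_bot:            # assumed service name
    build: .
    env_file: .env        # loads the variables documented above
    ports:
      - "5000:5000"       # matches the curl example's http://localhost:5000
    volumes:
      - ./knowledge:/app/knowledge   # provides KNOWLEDGE_DIR's default path
```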
This project is licensed under the MIT License. See the LICENSE file for more details.