
Cogenix - AI Chat Interface with Memory & Context

A modern, feature-rich AI chat application built with Next.js 15, featuring persistent conversations, intelligent memory context, and seamless Ollama integration.



📋 Table of Contents

  • About The Project
  • Features
  • Tech Stack
  • Prerequisites
  • Installation
  • Configuration
  • Running the Application
  • Project Structure
  • API Documentation
  • Contributing
  • Troubleshooting
  • Deployment
  • License
  • Contact
  • Acknowledgments
  • Project Status

🎯 About The Project

Cogenix is an intelligent conversational AI platform designed to provide seamless interactions with local AI models through Ollama. Built with modern web technologies, it offers a production-ready solution for deploying AI chat interfaces with enterprise-grade features.

Purpose

The primary goals of Cogenix are to:

  • Democratize AI Access: Provide an intuitive interface for interacting with locally-run AI models via Ollama
  • Preserve Context: Maintain conversation history and context across sessions using MongoDB persistence
  • Enable Organization: Offer thread-based conversation management for better workflow organization
  • Enhance User Experience: Deliver real-time streaming responses with a beautiful, responsive UI
  • Ensure Privacy: Keep all conversations local and secure with self-hosted infrastructure

Use Cases

  • Personal AI Assistant: Daily tasks, brainstorming, and research
  • Development Tool: Code assistance, debugging, and technical documentation
  • Learning Platform: Educational conversations and knowledge exploration
  • Enterprise Solution: Internal AI-powered support and documentation systems

✨ Features

Core Functionality

  • 💬 Real-time Streaming Chat: Experience AI responses as they're generated with server-sent events
  • 🧠 Intelligent Memory Context: AI remembers and references previous conversations
  • 🗂️ Thread Management: Organize conversations into separate threads with persistent storage
  • 🎨 Advanced Theme System (see the sketch after this list):
    • Light Mode - Clean, professional interface
    • Dark Mode - Eye-friendly, reduced strain
    • System Mode - Automatically syncs with OS preferences
  • 📊 Token Statistics: Track prompt and completion tokens for usage monitoring
  • 🔄 Data Persistence: All conversations securely stored in MongoDB
  • 🎯 Model Selection: Switch between different Ollama models on the fly
  • ⚡ Performance Optimized: Built with Next.js 15 and Turbopack for fast development builds
  • 🎭 Type-Safe: Full TypeScript implementation for robust code quality
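
As a concrete illustration of the System Mode behavior, here is a minimal sketch of a theme context along the lines of src/contexts/ThemeContext.tsx. The names and structure are assumptions for illustration; the actual implementation may differ.

// Illustrative sketch only -- the real ThemeContext.tsx may differ.
'use client';

import { createContext, useContext, useEffect, useState, type ReactNode } from 'react';

type Theme = 'light' | 'dark' | 'system';

const ThemeContext = createContext<{ theme: Theme; setTheme: (t: Theme) => void }>({
  theme: 'system',
  setTheme: () => {},
});

export function ThemeProvider({ children }: { children: ReactNode }) {
  const [theme, setTheme] = useState<Theme>('system');

  useEffect(() => {
    const media = window.matchMedia('(prefers-color-scheme: dark)');
    const apply = () => {
      // "system" resolves to the OS preference; otherwise use the explicit choice
      const resolved = theme === 'system' ? (media.matches ? 'dark' : 'light') : theme;
      document.documentElement.classList.toggle('dark', resolved === 'dark');
    };
    apply();
    media.addEventListener('change', apply); // re-apply when the OS preference flips
    return () => media.removeEventListener('change', apply);
  }, [theme]);

  return <ThemeContext.Provider value={{ theme, setTheme }}>{children}</ThemeContext.Provider>;
}

export const useTheme = () => useContext(ThemeContext);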

Technical Features

  • Server-Sent Events (SSE) for real-time streaming
  • React Query for efficient state management and caching
  • Feature-based architecture for scalability
  • Responsive design with Tailwind CSS 4
  • MongoDB with Mongoose ODM for data modeling
  • RESTful API architecture
  • Error handling and retry logic
  • Connection pooling and optimization
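
The "connection pooling and optimization" point typically means caching a single Mongoose connection across requests and hot reloads. A minimal sketch of what src/config/database.ts could look like (the actual file may differ):

// Hedged sketch of a cached Mongoose connection -- not the verbatim source.
import mongoose from 'mongoose';

const MONGODB_URI = process.env.MONGODB_URI!;

// Cache the connection on the global object so development hot reloads
// don't open a new connection pool on every request.
const cached = (global as any).mongoose ?? { conn: null, promise: null };
(global as any).mongoose = cached;

export async function connectToDatabase() {
  if (cached.conn) return cached.conn;
  if (!cached.promise) {
    cached.promise = mongoose.connect(MONGODB_URI, { maxPoolSize: 10 });
  }
  cached.conn = await cached.promise;
  return cached.conn;
}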

πŸ› οΈ Tech Stack

Frontend

  • Framework: Next.js 15 (App Router) - React framework
  • Language: TypeScript - Type-safe development
  • Styling: Tailwind CSS 4 - Utility-first styling
  • Data Fetching: React Query - Server-state management and caching
  • HTTP Client: Axios - Promise-based API requests

Backend

  • Database: MongoDB - NoSQL document database
  • ODM: Mongoose 8.19.1 - MongoDB object modeling
  • AI Backend: Ollama - Local AI model runtime
  • API: Next.js API Routes - Serverless API endpoints

Development Tools

  • Build Tool: Turbopack - Next.js native bundler
  • Package Manager: npm/yarn/pnpm - Dependency management
  • Code Quality: ESLint, Prettier - Code formatting and linting

📋 Prerequisites

Before you begin, ensure you have the following installed:

Required Software

  1. Node.js (v20.x or higher)

    # Check version
    node --version
    # Should output: v20.x.x or higher
  2. npm (v10.x or higher) or yarn or pnpm

    npm --version
  3. MongoDB (v6.0 or higher)

  4. Ollama (Latest version)

    # Verify Ollama installation
    ollama --version
    
    # Pull a model (e.g., llama3)
    ollama pull llama3
    
    # Verify model is available
    ollama list

Optional but Recommended

  • Git: For version control
  • VS Code: Recommended IDE with TypeScript support
  • MongoDB Compass: GUI for MongoDB database management

🚀 Installation

Step 1: Clone the Repository

# Using HTTPS
git clone https://github.com/codewithashim/Cogenix.git

# Or using SSH
git clone git@github.com:codewithashim/Cogenix.git

# Navigate to project directory
cd Cogenix

Step 2: Install Dependencies

# Using npm
npm install

# Using yarn
yarn install

# Using pnpm
pnpm install

This will install all required dependencies listed in package.json.


βš™οΈ Configuration

Step 1: Environment Variables

Create your environment configuration file:

# Copy the template
cp env.template .env.local

Step 2: Configure Environment Variables

Edit .env.local with your actual values:

# =================================
# Backend Configuration
# =================================

# Ollama Backend URL
OLLAMA_URL=http://localhost:11434

# =================================
# Model Configuration
# =================================

# Default AI model (must be pulled in Ollama)
DEFAULT_MODEL=llama3

# =================================
# Database Configuration
# =================================

# MongoDB Connection String
# Local MongoDB:
MONGODB_URI=mongodb://localhost:27017/cogenix

# Or MongoDB Atlas (recommended for production):
# MONGODB_URI=mongodb+srv://username:password@cluster.mongodb.net/cogenix?retryWrites=true&w=majority

# =================================
# Application Settings
# =================================

# Node environment
NODE_ENV=development

# =================================
# Optional: Additional Configuration
# =================================

# Uncomment if using external AI services
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Uncomment if implementing authentication
# JWT_SECRET=your-super-secret-key-here

# Public variables (exposed to browser)
# NEXT_PUBLIC_APP_URL=http://localhost:3000
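
The app likely reads these values through src/config/env.ts. A hedged sketch of such a module (the defaults mirror the template above; names are assumptions):

// Illustrative sketch of src/config/env.ts -- the actual module may differ.
export const env = {
  ollamaUrl: process.env.OLLAMA_URL ?? 'http://localhost:11434',
  defaultModel: process.env.DEFAULT_MODEL ?? 'llama3',
  mongodbUri: process.env.MONGODB_URI ?? 'mongodb://localhost:27017/cogenix',
} as const;

// Fail fast in production if a required variable is missing.
if (process.env.NODE_ENV === 'production' && !process.env.MONGODB_URI) {
  throw new Error('MONGODB_URI must be set in production');
}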

Step 3: Verify Ollama Setup

Ensure Ollama is running and has models available:

# Start Ollama (if not already running)
ollama serve

# In another terminal, verify connection
curl http://localhost:11434/api/tags

# Pull required models
ollama pull llama3      # Recommended default
ollama pull mistral     # Alternative option
ollama pull codellama   # For code-focused tasks

Step 4: Verify MongoDB Connection

# If using local MongoDB, ensure it's running
mongosh

# Or test MongoDB Atlas connection
mongosh "your_mongodb_connection_string"

πŸƒ Running the Application

Development Mode

Start the development server with hot-reload:

# Using npm
npm run dev

# Using yarn
yarn dev

# Using pnpm
pnpm dev

The application will be available at http://localhost:3000.

Production Mode

Build and start the production server:

# Build the application
npm run build

# Start production server
npm run start

Verify Everything is Working

  1. Open the application: Navigate to http://localhost:3000
  2. Check Ollama connection: You should see available models in the UI
  3. Send a test message: Type "Hello" and verify you get a response
  4. Check thread persistence: Refresh the page and verify your conversation is saved

πŸ“ Project Structure

cogenix/
├── public/                      # Static assets
│   ├── file.svg
│   ├── globe.svg
│   └── ...
├── src/
│   ├── app/                     # Next.js App Router
│   │   ├── api/                 # API Routes
│   │   │   ├── chat/            # Chat endpoints
│   │   │   │   └── route.ts     # POST /api/chat
│   │   │   ├── memory/          # Memory management
│   │   │   │   └── clear/
│   │   │   │       └── route.ts
│   │   │   ├── models/          # Model listing
│   │   │   │   └── route.ts
│   │   │   └── threads/         # Thread management
│   │   │       ├── route.ts     # GET/POST threads
│   │   │       └── [id]/
│   │   │           ├── route.ts
│   │   │           └── messages/
│   │   │               └── route.ts
│   │   ├── chat/                # Chat pages
│   │   │   └── [threadId]/
│   │   │       └── page.tsx
│   │   ├── layout.tsx           # Root layout
│   │   ├── page.tsx             # Home page
│   │   └── providers.tsx        # Context providers
│   │
│   ├── components/              # Shared components
│   │   ├── index.ts
│   │   └── SettingsModal.tsx
│   │
│   ├── config/                  # Configuration
│   │   ├── database.ts          # MongoDB connection
│   │   ├── env.ts               # Environment variables
│   │   └── index.ts
│   │
│   ├── constants/               # App constants
│   │   ├── api.ts               # API endpoints
│   │   └── index.ts
│   │
│   ├── contexts/                # React contexts
│   │   ├── ThemeContext.tsx     # Theme management
│   │   └── index.ts
│   │
│   ├── features/                # Feature modules
│   │   └── chat/                # Chat feature
│   │       ├── components/      # Chat components
│   │       │   ├── controls/
│   │       │   │   └── ModelSelector.tsx
│   │       │   ├── layout/
│   │       │   │   ├── ChatContainer.tsx
│   │       │   │   └── ChatContainerWithPersistence.tsx
│   │       │   ├── messages/
│   │       │   │   ├── ChatInput.tsx
│   │       │   │   ├── MemoryDisplay.tsx
│   │       │   │   └── MessageBubble.tsx
│   │       │   └── sidebar/
│   │       │       └── ThreadSidebar.tsx
│   │       ├── hooks/           # Custom React hooks
│   │       │   ├── useChat.ts
│   │       │   ├── useMemory.ts
│   │       │   ├── useModels.ts
│   │       │   ├── useThread.ts
│   │       │   └── useThreads.ts
│   │       ├── models/          # Data models
│   │       │   └── Thread.ts    # MongoDB Thread model
│   │       ├── services/        # API services
│   │       │   ├── chat.service.ts
│   │       │   ├── memory.service.ts
│   │       │   ├── model.service.ts
│   │       │   └── thread.service.ts
│   │       └── types/           # TypeScript types
│   │           ├── database.ts
│   │           └── index.ts
│   │
│   ├── lib/                     # Utility libraries
│   │   ├── axios.ts             # Axios configuration
│   │   └── index.ts
│   │
│   ├── styles/                  # Global styles
│   │   └── globals.css
│   │
│   └── types/                   # Global TypeScript types
│       └── api.ts
│
├── .env.local                   # Environment variables (create from template)
├── env.template                 # Environment template
├── next.config.ts               # Next.js configuration
├── package.json                 # Dependencies and scripts
├── postcss.config.mjs           # PostCSS configuration
├── tailwind.config.js           # Tailwind CSS configuration
├── tsconfig.json                # TypeScript configuration
└── README.md                    # This file

Architecture Overview

Feature-Based Architecture

The project uses a feature-based architecture where each feature (like chat) contains:

  • Components: UI components specific to the feature
  • Hooks: Custom React hooks for business logic
  • Services: API communication layer
  • Types: TypeScript type definitions
  • Models: Database schemas and models

This structure provides:

  • Modularity: Easy to add/remove features
  • Scalability: Clear separation of concerns
  • Maintainability: Related code stays together
  • Testability: Isolated units for testing
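
For example, the Thread model in src/features/chat/models/Thread.ts plausibly resembles the sketch below. The field names follow the API responses later in this README; the actual schema may differ.

// Hypothetical sketch of the Thread model -- not the verbatim source.
import mongoose, { Schema } from 'mongoose';

const MessageSchema = new Schema(
  {
    role: { type: String, enum: ['user', 'assistant'], required: true },
    content: { type: String, required: true },
  },
  { timestamps: true }
);

const ThreadSchema = new Schema(
  {
    title: { type: String, required: true },
    aiModel: { type: String, required: true },
    messages: [MessageSchema],
  },
  { timestamps: true } // adds the createdAt / updatedAt seen in GET /api/threads
);

// Reuse the compiled model across hot reloads.
export const Thread =
  mongoose.models.Thread ?? mongoose.model('Thread', ThreadSchema);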

📡 API Documentation

Base URL

http://localhost:3000/api

Endpoints

1. Chat Endpoints

Send Message
POST /api/chat
Content-Type: application/json

{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "model": "llama3",
  "stream": true,
  "threadId": "optional-thread-id"
}

Response (Streaming):

data: {"content": "Hello", "done": false}
data: {"content": "! I'm", "done": false}
data: {"content": " doing well", "done": false}
data: [DONE]
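
A client can consume this stream with fetch and a ReadableStream reader. The sketch below is illustrative, not the project's actual useChat implementation:

// Minimal streaming consumer for POST /api/chat -- illustrative only.
async function streamChat(messages: { role: string; content: string }[]) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages, model: 'llama3', stream: true }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let reply = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk holds one or more "data: {...}" lines. A production client
    // should also buffer partial lines that straddle chunk boundaries.
    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6);
      if (payload === '[DONE]') return reply;
      reply += JSON.parse(payload).content;
    }
  }
  return reply;
}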

Response (Non-Streaming):

{
  "content": "Hello! I'm doing well, thank you for asking.",
  "model": "llama3",
  "context": [],
  "tokens": {
    "prompt": 10,
    "completion": 20,
    "total": 30
  }
}

2. Model Endpoints

List Available Models
GET /api/models

Response:

{
  "models": [
    {
      "name": "llama3",
      "size": "7B",
      "modified_at": "2024-01-15T10:30:00Z"
    }
  ]
}
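
On the client, src/features/chat/hooks/useModels.ts presumably wraps this endpoint with React Query. A hedged sketch, assuming @tanstack/react-query v5:

// Illustrative sketch of a useModels hook -- the actual hook may differ.
import { useQuery } from '@tanstack/react-query';

interface OllamaModel {
  name: string;
  size: string;
  modified_at: string;
}

export function useModels() {
  return useQuery({
    queryKey: ['models'],
    queryFn: async (): Promise<OllamaModel[]> => {
      const res = await fetch('/api/models');
      if (!res.ok) throw new Error('Failed to load models');
      return (await res.json()).models;
    },
    staleTime: 60_000, // the model list changes rarely; cache it briefly
  });
}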

3. Thread Endpoints

List Threads
GET /api/threads

Response:

{
  "threads": [
    {
      "_id": "thread-id",
      "title": "Conversation Title",
      "aiModel": "llama3",
      "messages": [...],
      "createdAt": "2024-01-15T10:30:00Z",
      "updatedAt": "2024-01-15T11:00:00Z"
    }
  ]
}

Create Thread
POST /api/threads
Content-Type: application/json

{
  "title": "New Conversation",
  "aiModel": "llama3"
}

Get Thread Messages
GET /api/threads/:id/messages

Delete Thread
DELETE /api/threads/:id
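
Taken together, these endpoints map naturally onto a small service module such as src/features/chat/services/thread.service.ts. A hypothetical sketch using the project's Axios setup:

// Hypothetical sketch of thread.service.ts -- not the verbatim source.
import axios from 'axios';

const api = axios.create({ baseURL: '/api' });

export const threadService = {
  list: async () => (await api.get('/threads')).data.threads,
  create: async (title: string, aiModel: string) =>
    (await api.post('/threads', { title, aiModel })).data,
  messages: async (threadId: string) =>
    (await api.get(`/threads/${threadId}/messages`)).data,
  remove: async (threadId: string) => {
    await api.delete(`/threads/${threadId}`);
  },
};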

4. Memory Endpoints

Clear Memory
POST /api/memory/clear

🤝 Contributing

We welcome contributions from the community! Here's how you can help make Cogenix better.

Development Workflow

  1. Fork the Repository

    # Click the 'Fork' button on GitHub
  2. Clone Your Fork

    git clone https://github.com/your-username/cogenix.git
    cd cogenix
  3. Create a Feature Branch

    git checkout -b feature/your-feature-name
    # or
    git checkout -b fix/your-bug-fix
  4. Make Your Changes

    • Write clean, maintainable code
    • Follow the existing code style
    • Add comments for complex logic
    • Update documentation if needed
  5. Test Your Changes

    # Ensure the application runs without errors
    npm run dev
    
    # Test all affected features manually
    # TODO: Add automated tests when available
  6. Commit Your Changes

    git add .
    git commit -m "feat: add amazing new feature"

    Commit Message Convention:

    • feat: - New feature
    • fix: - Bug fix
    • docs: - Documentation changes
    • style: - Code style changes (formatting)
    • refactor: - Code refactoring
    • test: - Adding tests
    • chore: - Maintenance tasks
  7. Push to Your Fork

    git push origin feature/your-feature-name
  8. Create a Pull Request

    • Go to the original repository
    • Click "New Pull Request"
    • Select your fork and branch
    • Fill in the PR template
    • Wait for review

Code Style Guidelines

TypeScript/JavaScript

// ✅ Good
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
  timestamp: Date;
}

// ✅ Use descriptive names
const fetchUserMessages = async (userId: string) => {
  // Implementation
};

// ❌ Avoid
const a = async (b: string) => {
  // Implementation
};

React Components

// ✅ Good - Use functional components with TypeScript
import React from 'react';

interface ButtonProps {
  label: string;
  onClick: () => void;
  variant?: 'primary' | 'secondary';
}

export const Button: React.FC<ButtonProps> = ({ 
  label, 
  onClick, 
  variant = 'primary' 
}) => {
  return (
    <button onClick={onClick} className={`btn-${variant}`}>
      {label}
    </button>
  );
};

File Organization

  • One component per file
  • Co-locate related files (component + styles + tests)
  • Use index.ts for clean exports
  • Keep files under 300 lines when possible

Areas Where We Need Help

  • 🧪 Testing: Setting up Jest and React Testing Library
  • 📱 Mobile Optimization: Improving mobile responsiveness
  • 🌐 Internationalization: Adding multi-language support
  • ♿ Accessibility: Improving ARIA labels and keyboard navigation
  • 📚 Documentation: Improving inline code documentation
  • 🐛 Bug Fixes: Check the Issues page
  • ✨ Features: See Feature Requests

Reporting Bugs

When reporting bugs, please include:

  1. Description: Clear description of the issue
  2. Steps to Reproduce: Detailed steps to reproduce the bug
  3. Expected Behavior: What should happen
  4. Actual Behavior: What actually happens
  5. Environment:
    • OS: (e.g., Windows 11, macOS 14, Ubuntu 22.04)
    • Node version: (e.g., v20.10.0)
    • Browser: (e.g., Chrome 120, Firefox 121)
    • Ollama version: (e.g., 0.1.20)
  6. Screenshots: If applicable
  7. Logs: Console errors or server logs

Feature Requests

We love new ideas! When suggesting features:

  1. Use Case: Explain why this feature is needed
  2. Description: Detailed description of the feature
  3. Mockups: Wireframes or mockups if applicable
  4. Alternatives: Alternative solutions you've considered

Code Review Process

All PRs go through review:

  1. Automated Checks: Linting and build checks (coming soon)
  2. Code Review: At least one maintainer reviews the code
  3. Testing: Manual testing of the feature
  4. Approval: PR is merged after approval

πŸ› Troubleshooting

Common Issues

Issue: "Cannot connect to Ollama"

Solution:

# 1. Verify Ollama is running
curl http://localhost:11434/api/tags

# 2. If not running, start Ollama
ollama serve

# 3. Check if the port is correct in .env.local
OLLAMA_URL=http://localhost:11434

Issue: "MongoDB connection error"

Solution:

# 1. Check if MongoDB is running
mongosh

# 2. Verify connection string in .env.local
MONGODB_URI=mongodb://localhost:27017/cogenix

# 3. If using MongoDB Atlas, check:
#    - IP whitelist includes your IP
#    - Username and password are correct
#    - Database name is correct

Issue: "Module not found" errors

Solution:

# 1. Delete node_modules and package-lock.json
rm -rf node_modules package-lock.json

# 2. Clear npm cache
npm cache clean --force

# 3. Reinstall dependencies
npm install

Issue: "Port 3000 already in use"

Solution:

# Option 1: Kill the process using port 3000
# On macOS/Linux:
lsof -ti:3000 | xargs kill -9

# On Windows:
netstat -ano | findstr :3000
taskkill /PID <PID> /F

# Option 2: Use a different port
PORT=3001 npm run dev

Issue: Model not responding or very slow

Solution:

  • Check system resources (RAM, CPU)
  • Ensure you're using an appropriate model size for your hardware
  • Try a smaller model: ollama pull llama2:7b
  • Check the Ollama server logs (e.g., the terminal running ollama serve)

Getting Help

If you're still experiencing issues:

  1. Check existing issues: GitHub Issues
  2. Create a new issue: Use the bug report template
  3. Join discussions: GitHub Discussions

🚢 Deployment

Deploy to Vercel

  1. Go to Vercel
  2. Import your repository
  3. Configure environment variables in Vercel dashboard:
    • MONGODB_URI: Your MongoDB Atlas connection string
    • OLLAMA_URL: Your hosted Ollama instance (or use a cloud provider)
    • DEFAULT_MODEL: Your default AI model
  4. Deploy

Note: For production, you'll need:

  • MongoDB Atlas (free tier available)
  • Hosted Ollama instance or alternative AI backend

Deploy to Other Platforms

The application can be deployed to any platform that supports Next.js:

  • Netlify
  • Railway
  • DigitalOcean App Platform
  • AWS Amplify
  • Google Cloud Run

See Next.js deployment documentation for platform-specific guides.


📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


👥 Contact

Project Maintainer: Ashim

Project Link: https://github.com/codewithashim/Cogenix

Issues: https://github.com/codewithashim/Cogenix/issues


πŸ™ Acknowledgments


📊 Project Status

Current Version: 0.1.0

Status: Active Development

Roadmap:

  • Add automated testing (Jest, React Testing Library)
  • Implement user authentication
  • Add support for multiple AI backends (OpenAI, Anthropic)
  • Create Docker deployment configuration
  • Add conversation export functionality
  • Implement voice input/output
  • Add plugin system for extensibility
  • Create mobile applications (React Native)

Made with ❤️ by the Cogenix Team

⭐ Star us on GitHub - it motivates us to keep improving!
