Myra is a private AI assistant built using Ollama, FastAPI, and Tailwind CSS. It provides a chat interface for users to interact with an AI model.
- Backend: FastAPI
- Frontend: HTML, JavaScript, Tailwind CSS
- AI Model: Ollama (llama2)
 
- Install dependencies: `pip install fastapi uvicorn httpx python-dotenv loguru`
- Install Ollama (follow the installation instructions at https://ollama.ai/), then run the llama2 model: `ollama run llama2`
- Run the FastAPI server: `uvicorn src.voice_assistant.app:app --reload`
- Open a web browser and navigate to `http://localhost:8000` to interact with Myra.
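Once the server is running, a quick way to confirm it is reachable is to call the health check endpoint from Python with `httpx` (installed above). This optional snippet assumes the default development address and the `/api/health` route described later in this README:

```python
import httpx

# Query Myra's health check endpoint (assumes the default dev address).
response = httpx.get("http://localhost:8000/api/health", timeout=5.0)
response.raise_for_status()

# The endpoint returns {"status": ..., "message": ...} as documented below.
payload = response.json()
print(payload["status"], "-", payload["message"])
```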
- `src/voice_assistant/app.py`: FastAPI backend
- `static/index.html`: Frontend HTML and JavaScript
- `README.md`: This file
- Chat interface with AI-powered responses
- Error handling and loading states
- Responsive design using Tailwind CSS
- Integration with Ollama for AI model processing
- FastAPI backend for efficient API handling
- CORS middleware for cross-origin requests
- Health check endpoint for monitoring (see the sketch after this list)
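For illustration, here is a minimal sketch of how the CORS middleware and health check endpoint might be wired up. This is not the actual contents of `src/voice_assistant/app.py`; in particular, the permissive `allow_origins` setting is an assumption suitable only for local development:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow cross-origin requests from the static frontend during development.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],   # assumption: permissive, local development only
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/api/health")
async def health():
    # Shape matches the documented response: {"status": ..., "message": ...}
    return {"status": "ok", "message": "Myra is running"}
```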
 
- Type your message in the input field at the bottom of the page.
- Press "Send" or hit Enter to send your message.
- Myra will process your message and provide a response.
- The conversation history is displayed in the chat window.
 
The application consists of a frontend (static HTML/JS) and a backend (FastAPI), with the following main components and flows:
- `GET /` - Serves the main `index.html` file for the chat interface.
  - Response: `FileResponse`
- `POST /chat` - Handles incoming chat messages (a sketch of this handler follows the route list).
  - Request Body: `{ "message": string }`
  - Response: `StreamingResponse` (Server-Sent Events) - Streams AI-generated responses back to the client.
- `GET /api/health` - Performs a health check on the API.
  - Response: `{ "status": string, "message": string }`
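As a concrete illustration of the routes above, here is a minimal sketch of the `GET /` and `POST /chat` handlers. It is not the project's actual `app.py`; in particular, the Ollama endpoint URL (`http://localhost:11434/api/generate`), the llama2 request payload, and the exact SSE framing are assumptions about a typical Ollama streaming integration:

```python
import json

import httpx
from fastapi import FastAPI
from fastapi.responses import FileResponse, StreamingResponse
from pydantic import BaseModel

app = FastAPI()

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumption: Ollama's default local API


class ChatRequest(BaseModel):
    message: str


@app.get("/")
async def index():
    # Serve the chat interface.
    return FileResponse("static/index.html")


@app.post("/chat")
async def chat(request: ChatRequest):
    async def event_stream():
        # Forward the user's message to Ollama and relay the streamed chunks
        # to the client as Server-Sent Events.
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream(
                "POST",
                OLLAMA_URL,
                json={"model": "llama2", "prompt": request.message, "stream": True},
            ) as response:
                async for line in response.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)
                    # Each Ollama chunk carries a partial "response" string.
                    yield f"data: {json.dumps({'text': chunk.get('response', '')})}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```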
 
- Frontend loads:
  - Client requests the root URL (`/`).
  - Backend serves the `index.html` file.
- User sends a chat message (see the client example after this list):
  - Frontend sends a POST request to `/chat` with the message.
  - Backend receives the message and forwards it to Ollama.
  - Ollama processes the message and generates a response.
  - Backend streams the response back to the frontend using Server-Sent Events.
  - Frontend displays the streamed response to the user.
- Health check:
  - Client or monitoring service requests `/api/health`.
  - Backend responds with the current health status.
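For reference, the chat flow can also be exercised without the browser UI. The following rough `httpx` client sketch assumes the SSE framing used in the handler sketch above, which may differ from Myra's exact wire format:

```python
import httpx


def stream_chat(message: str, base_url: str = "http://localhost:8000") -> None:
    # POST the message to /chat and print the Server-Sent Events as they arrive.
    with httpx.Client(timeout=None) as client:
        with client.stream("POST", f"{base_url}/chat", json={"message": message}) as response:
            response.raise_for_status()
            for line in response.iter_lines():
                if line.startswith("data: "):
                    print(line[len("data: "):])


if __name__ == "__main__":
    stream_chat("Hello, Myra!")
```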
 
- FastAPI
- Ollama
- httpx
- pydantic
- loguru
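If you prefer tracking these in a file instead of the one-line `pip install` above, an equivalent unpinned `requirements.txt` might look like this (version pins omitted, as none are specified in this README):

```
fastapi
uvicorn
httpx
python-dotenv
loguru
pydantic
```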
 
The following diagrams illustrate the system architecture, API routes, and request/response flow of the Myra AI Assistant:
```mermaid
graph TD
    subgraph Frontend
        A[Static HTML/JS]
    end
    subgraph Backend
        B[FastAPI App]
        C[Ollama API]
    end
    A -->|GET /| B
    B -->|FileResponse index.html| A
    A -->|POST /chat| B
    B -->|Forward message| C
    C -->|Generate response| B
    B -->|StreamingResponse SSE| A
    A -->|GET /api/health| B
    B -->|Health status| A
    classDef frontend fill:#f9f,stroke:#333,stroke-width:2px;
    classDef backend fill:#bbf,stroke:#333,stroke-width:2px;
    class A frontend;
    class B,C backend;
```
```
+-------------------+        +-------------------+
|     Frontend      |        |      Backend      |
| (Static HTML/JS)  |        |    (FastAPI App)  |
+-------------------+        +-------------------+
         |                             |
         |  GET /                      |
         | ------------------------>   |
         |                             |
         |  FileResponse(index.html)   |
         | <------------------------   |
         |                             |
         |  POST /chat                 |
         |  {message: string}          |
         | ------------------------>   |
         |                             |
         |                     +----------------+
         |                     |  Ollama API    |
         |                     |  (LLM Service) |
         |                     +----------------+
         |                             |
         |  StreamingResponse          |
         |  (Server-Sent Events)       |
         | <------------------------   |
         |                             |
         |  GET /api/health            |
         | ------------------------>   |
         |                             |
         |  {status: string,           |
         |   message: string}          |
         | <------------------------   |
         |                             |
```
This is a development version and should not be used in production without proper security measures and optimizations.