# Typelets API

The backend API for the Typelets Application - a secure, encrypted notes management system built with TypeScript, Hono, and PostgreSQL. Features end-to-end encryption support, file attachments, and folder organization.

## Table of Contents

- Features
- Tech Stack
- Prerequisites
- Local Development Setup
- Alternative Installation Methods
- Available Scripts
- API Endpoints
- Database Schema
- Security Features
- Environment Variables
- Monitoring with Sentry.io
- Development
- Docker Support
- Production Deployment
- Contributing
- License
- Acknowledgments
## Features

- 🔐 Secure Authentication via Clerk
- 📝 Encrypted Notes with client-side encryption support
- 📁 Folder Organization with nested folder support
- 📎 File Attachments with encrypted storage
- 🏷️ Tags & Search for easy note discovery
- 🗑️ Trash & Archive functionality
- 🔄 Real-time Sync via WebSockets for multi-device support
- ⚡ Fast & Type-Safe with TypeScript and Hono
- 🐘 PostgreSQL with Drizzle ORM
- 🚀 Valkey/Redis Caching for high-performance data access with cluster support
- 📊 Error Tracking & Monitoring with Sentry.io for observability and performance monitoring
- 💻 Code Execution via secure Judge0 API proxy
- 🛡️ Comprehensive Rate Limiting for HTTP, WebSocket, file uploads, and code execution
- 🏥 Health Checks with detailed system status and readiness probes
- 📈 Structured Logging with automatic event tracking and error capture
## Tech Stack

- Runtime: Node.js 22+ (LTS recommended)
- Framework: Hono - Fast, lightweight web framework
- Database: PostgreSQL with Drizzle ORM
- Cache: Valkey/Redis Cluster for high-performance caching
- Authentication: Clerk
- Validation: Zod
- Monitoring: Sentry.io for error tracking and performance monitoring
- Logging: Structured JSON logging with automatic error capture
- TypeScript: Strict mode enabled for type safety
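For orientation, the sketch below shows what a minimal route looks like in this stack. It is illustrative only - not the project's actual entry point - and assumes the `@hono/node-server` adapter for Node.js:

```typescript
// Minimal Hono server sketch - illustrative, not the application code.
import { serve } from "@hono/node-server";
import { Hono } from "hono";

const app = new Hono();

// Public root endpoint, analogous to the GET / endpoint documented below
app.get("/", (c) => c.json({ name: "typelets-api", status: "ok" }));

serve({ fetch: app.fetch, port: 3000 });
console.log("Listening on http://localhost:3000");
```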
## Prerequisites

- Node.js 22+ (LTS recommended)
- pnpm 9.15.0+
- PostgreSQL database (local installation or Docker)
- Clerk account for authentication (sign up at clerk.com)
- Valkey/Redis cluster for caching (optional - improves performance)
- Sentry.io account for monitoring (optional - sign up at sentry.io)
- Judge0 API key for code execution (optional - get from RapidAPI)
## Local Development Setup

Recommended approach for development: PostgreSQL in Docker, with the API run via pnpm for hot reload and easy debugging.
- Clone and install dependencies:

```bash
git clone https://github.com/typelets/typelets-api.git
cd typelets-api
pnpm install
```
- Start PostgreSQL with Docker:

```bash
# Start PostgreSQL database for local development
docker run --name typelets-postgres \
  -e POSTGRES_PASSWORD=devpassword \
  -e POSTGRES_DB=typelets_local \
  -p 5432:5432 -d postgres:15
```
- Set up environment variables:

```bash
cp .env.example .env
```
- Configure environment variables:
  - Create a free account at the Clerk Dashboard
  - Create a new application
  - (Optional) Get a Judge0 API key from RapidAPI
  - Update `.env` with your settings:

```env
CLERK_SECRET_KEY=sk_test_your_actual_clerk_secret_key_from_dashboard
CORS_ORIGINS=http://localhost:5173,http://localhost:3000

# Optional: For code execution features
JUDGE0_API_KEY=your_judge0_rapidapi_key_here
```
- Set up database schema:

```bash
pnpm run db:push
```
- Start development server:

```bash
pnpm run dev
```

🎉 Your API is now running at http://localhost:3000

WebSocket connection available at: ws://localhost:3000

The development server will automatically restart when you make changes to any TypeScript files.
```bash
# Start/stop database
docker start typelets-postgres   # Start existing container
docker stop typelets-postgres    # Stop when done

# API development
pnpm run dev     # Auto-restart development server
pnpm run build   # Test production build
pnpm run lint    # Check code quality
```
Development Features:
- ⚡ Auto-restart: Server automatically restarts when you save TypeScript files
- 📝 Terminal history preserved: See all your logs and errors
- 🚀 Fast compilation: Uses tsx with esbuild for quick rebuilds
## Alternative Installation Methods

To run both the database and the API in Docker:

```bash
# 1. Start PostgreSQL
docker run --name typelets-postgres -e POSTGRES_PASSWORD=devpassword -e POSTGRES_DB=typelets_local -p 5432:5432 -d postgres:15

# 2. Build and run API in Docker
docker build -t typelets-api .
docker run -p 3000:3000 --env-file .env typelets-api
```
If you prefer to install PostgreSQL locally instead of Docker:

- Install PostgreSQL on your machine
- Create the database:

```bash
createdb typelets_local
```

- Update `.env`:

```env
DATABASE_URL=postgresql://postgres:your_password@localhost:5432/typelets_local
```
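Whichever way PostgreSQL is provisioned, the API consumes `DATABASE_URL` through Drizzle. A minimal sketch of that wiring, assuming the postgres-js driver (the actual setup lives in `src/db/index.ts` and may differ):

```typescript
// Sketch of a Drizzle connection built from DATABASE_URL - illustrative only.
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

if (!process.env.DATABASE_URL) {
  throw new Error("DATABASE_URL is required");
}

const client = postgres(process.env.DATABASE_URL);
export const db = drizzle(client);
```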
## Available Scripts

- `pnpm run dev` - Start development server with auto-restart
- `pnpm run build` - Build for production
- `pnpm start` - Start production server
- `pnpm run lint` - Run ESLint
- `pnpm run format` - Format code with Prettier
- `pnpm run db:generate` - Generate database migrations
- `pnpm run db:push` - Apply database schema changes
- `pnpm run db:studio` - Open Drizzle Studio for database management
## API Endpoints

📚 Complete API documentation with interactive examples: https://api.typelets.com/docs (Swagger/OpenAPI)
The API provides comprehensive REST endpoints for:
- Users - Profile management and account deletion
- Folders - Hierarchical folder organization with nested support
- Notes - Full CRUD with encryption support, pagination, filtering, and search
- File Attachments - Encrypted file uploads and downloads
- Code Execution - Secure Judge0 API proxy for running code in multiple languages
- Health Checks - System health checks and status monitoring
| Endpoint | Description |
|---|---|
| `GET /` | API information and version |
| `GET /health` | Enhanced health check with system status |
| `GET /websocket/status` | WebSocket server statistics |
All `/api/*` endpoints require authentication via a Bearer token:

```
Authorization: Bearer <clerk_jwt_token>
```
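As an example, a client call to a protected endpoint might look like the sketch below. The `/api/notes` path and response handling are assumptions - consult the Swagger docs for the real routes:

```typescript
// Sketch of calling a protected endpoint with a Clerk JWT. Obtaining the
// token happens client-side via Clerk's SDK; here it is passed in directly.
async function listNotes(token: string): Promise<unknown> {
  const res = await fetch("http://localhost:3000/api/notes", {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```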
Visit the Swagger UI at /docs for:
- Complete endpoint reference with request/response schemas
- Interactive "Try it out" functionality
- Example requests and responses
- Schema definitions and validation rules
Connect to `ws://localhost:3000` (or your deployment URL) for real-time synchronization.
Features:
- JWT authentication required
- Real-time note and folder updates
- Rate limiting (300 msg/min per connection)
- Connection limits (20 connections per user)
Message types: `auth`, `ping`/`pong`, `join_note`/`leave_note`, `note_update`, `note_created`/`note_deleted`, `folder_created`/`folder_updated`/`folder_deleted`
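A minimal client sketch using these message types is shown below. The payload fields are assumptions - the authoritative shapes live in `src/websocket/types.ts`:

```typescript
// Minimal WebSocket client sketch - field names are illustrative.
const ws = new WebSocket("ws://localhost:3000");

ws.addEventListener("open", () => {
  // Authenticate before the server's auth timeout (30 seconds by default)
  ws.send(JSON.stringify({ type: "auth", token: "<clerk_jwt_token>" }));
});

ws.addEventListener("message", (event) => {
  const msg = JSON.parse(String(event.data));
  switch (msg.type) {
    case "note_created":
    case "note_update":
    case "note_deleted":
      console.log("note event:", msg);
      break;
    case "pong":
      break; // keep-alive reply to a ping
  }
});

// Subscribing to a note's updates (payload shape is illustrative):
// ws.send(JSON.stringify({ type: "join_note", noteId: "..." }));
```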
## Database Schema

The application uses the following main tables:

- `users` - User profiles synced from Clerk
- `folders` - Hierarchical folder organization
- `notes` - Encrypted notes with metadata
- `file_attachments` - Encrypted file attachments
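As a rough illustration of how such tables are declared with Drizzle, here is a sketch of two of them. Column names and types are assumptions - the real definitions are in `src/db/schema.ts`:

```typescript
// Illustrative Drizzle schema sketch, not the actual project schema.
import { boolean, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: text("id").primaryKey(), // Clerk user ID
  email: text("email").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

export const notes = pgTable("notes", {
  id: uuid("id").defaultRandom().primaryKey(),
  userId: text("user_id").references(() => users.id).notNull(),
  title: text("title").notNull(), // may hold client-side-encrypted text
  content: text("content"),       // ciphertext when encryption is enabled
  deleted: boolean("deleted").default(false).notNull(), // trash support
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```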
## Security Features

- Authentication: All endpoints protected with Clerk JWT verification
- Encryption Ready: Schema supports client-side encryption for notes and files
- Input Validation: Comprehensive Zod schemas for all inputs
- SQL Injection Protection: Parameterized queries via Drizzle ORM
- CORS Configuration: Configurable allowed origins
- File Size Limits: Configurable limits (default: 50MB per file, 1GB total per note)
- WebSocket Security: JWT authentication, rate limiting, and connection limits
- Real-time Authorization: Database-level ownership validation for all WebSocket operations
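The validation and SQL injection protections above combine into one common pattern: Zod parses the input, and Drizzle binds all values as parameters while an ownership predicate enforces authorization. A sketch under assumed schema fields and hypothetical import paths:

```typescript
// Zod validation + parameterized Drizzle query with an ownership check.
import { z } from "zod";
import { and, eq } from "drizzle-orm";
import { db } from "./db";          // hypothetical import path
import { notes } from "./db/schema"; // hypothetical import path

const updateNoteSchema = z.object({
  title: z.string().min(1).max(500).optional(),
  content: z.string().optional(),
});

export async function updateNote(userId: string, noteId: string, input: unknown) {
  const data = updateNoteSchema.parse(input); // rejects malformed input

  // Values are bound as parameters, never interpolated into SQL, and the
  // ownership predicate restricts the update to the caller's own note.
  return db
    .update(notes)
    .set(data)
    .where(and(eq(notes.id, noteId), eq(notes.userId, userId)))
    .returning();
}
```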
## Environment Variables

| Variable | Description | Required | Default |
|---|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Yes | - |
| `CLERK_SECRET_KEY` | Clerk secret key for JWT verification | Yes | - |
| `CORS_ORIGINS` | Comma-separated list of allowed CORS origins | Yes | - |
| `PORT` | Server port | No | 3000 |
| `NODE_ENV` | Environment (development/production) | No | development |
| **Caching (Optional)** | | | |
| `VALKEY_HOST` | Valkey/Redis cluster hostname | No | - |
| `VALKEY_PORT` | Valkey/Redis cluster port | No | 6379 |
| **Monitoring (Optional)** | | | |
| `SENTRY_DSN` | Sentry.io DSN for error tracking | No | - |
| **Rate Limiting** | | | |
| `HTTP_RATE_LIMIT_WINDOW_MS` | HTTP rate limit window in milliseconds | No | 900000 (15 min) |
| `HTTP_RATE_LIMIT_MAX_REQUESTS` | Max HTTP requests per window | No | 1000 |
| `HTTP_FILE_RATE_LIMIT_MAX` | Max file operations per window | No | 100 |
| `WS_RATE_LIMIT_WINDOW_MS` | WebSocket rate limit window in milliseconds | No | 60000 (1 min) |
| `WS_RATE_LIMIT_MAX_MESSAGES` | Max WebSocket messages per window | No | 300 |
| `WS_MAX_CONNECTIONS_PER_USER` | Max WebSocket connections per user | No | 20 |
| `WS_AUTH_TIMEOUT_MS` | WebSocket authentication timeout in milliseconds | No | 30000 (30 sec) |
| `CODE_EXEC_RATE_LIMIT_MAX` | Max code executions per window | No | 100 (dev), 50 (prod) |
| `CODE_EXEC_RATE_WINDOW_MS` | Code execution rate limit window in milliseconds | No | 900000 (15 min) |
| **File & Storage** | | | |
| `MAX_FILE_SIZE_MB` | Maximum size per file in MB | No | 50 |
| `MAX_NOTE_SIZE_MB` | Maximum total attachments per note in MB | No | 1024 (1 GB) |
| `FREE_TIER_STORAGE_GB` | Free tier storage limit in GB | No | 1 |
| `FREE_TIER_NOTE_LIMIT` | Free tier note count limit | No | 100 |
| **Code Execution (Optional)** | | | |
| `JUDGE0_API_KEY` | Judge0 API key for code execution | No* | - |
| `JUDGE0_API_URL` | Judge0 API base URL | No | https://judge0-ce.p.rapidapi.com |
| `JUDGE0_API_HOST` | Judge0 API host header | No | judge0-ce.p.rapidapi.com |

*Required only for code execution features
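Optional variables fall back to the defaults above. A sketch of how such defaults might be read, with hypothetical helper and variable names:

```typescript
// Illustrative env parsing with fallbacks - names are hypothetical.
function envInt(name: string, fallback: number): number {
  const parsed = Number.parseInt(process.env[name] ?? "", 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

const isProd = process.env.NODE_ENV === "production";

export const rateLimits = {
  httpWindowMs: envInt("HTTP_RATE_LIMIT_WINDOW_MS", 900_000), // 15 min
  httpMaxRequests: envInt("HTTP_RATE_LIMIT_MAX_REQUESTS", 1000),
  wsMaxMessages: envInt("WS_RATE_LIMIT_MAX_MESSAGES", 300),
  codeExecMax: envInt("CODE_EXEC_RATE_LIMIT_MAX", isProd ? 50 : 100),
};
```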
## Monitoring with Sentry.io

The API integrates with Sentry.io for comprehensive error tracking, performance monitoring, and logging.
- Error Tracking: Automatic exception capture with full stack traces and context
- Source Maps: Production builds automatically upload source maps for readable stack traces
- Performance Monitoring: 100% transaction sampling for performance analysis
- Database Monitoring: Automatic PostgreSQL query tracking and performance analysis
- Profiling: CPU and memory profiling during active traces
- Structured Logging: Automatic capture of console.log, console.warn, and console.error
- User Context: Errors are automatically associated with authenticated users
- Environment Tracking: Separate error tracking for development and production
- Release Tracking: Errors automatically linked to code releases via GitHub Actions
Sentry is configured in the application with:
- Profiling integration enabled
- Console logging integration
- 100% trace sampling rate
- PII data collection for better debugging
- Environment-based configuration
Setup: Add your Sentry DSN to `.env`:

```env
SENTRY_DSN=https://your-key@your-org-id.ingest.us.sentry.io/your-project-id
```
Once configured, all errors are automatically captured and sent to Sentry with contextual information including:
- Error ID for tracking
- User ID (if authenticated)
- Request URL and method
- Stack traces
If `SENTRY_DSN` is not set, the application will run normally with error tracking disabled.
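Putting the above together, initialization might be gated on the DSN roughly as follows (a sketch assuming `@sentry/node` and `@sentry/profiling-node` v8+ APIs, not the application's exact setup):

```typescript
// Environment-gated Sentry initialization - illustrative sketch.
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

if (process.env.SENTRY_DSN) {
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    environment: process.env.NODE_ENV ?? "development",
    tracesSampleRate: 1.0,   // 100% transaction sampling, as noted above
    profilesSampleRate: 1.0, // profile while traces are active
    sendDefaultPii: true,    // PII collection for better debugging
    integrations: [nodeProfilingIntegration()],
  });
} else {
  console.log("SENTRY_DSN not set - error tracking disabled");
}
```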
Source maps are automatically generated during builds and uploaded to Sentry in production:
Development builds:
- Source maps are generated locally for debugging
- Not uploaded to Sentry (saves bandwidth and quota)
Production builds:
- Source maps are generated and uploaded to Sentry
- Requires the `SENTRY_AUTH_TOKEN` environment variable
- Stack traces in Sentry show your original TypeScript code, not bundled JavaScript
- Source maps are deleted after upload to keep deployments clean
Setup:

- Create a Sentry Auth Token: Sentry Settings → Auth Tokens
- Required scopes: `project:releases`, `project:write`
- Add to your environment and build:

```bash
export SENTRY_AUTH_TOKEN=your-token-here
NODE_ENV=production pnpm run build
```
The build will automatically upload source maps when both `NODE_ENV=production` and `SENTRY_AUTH_TOKEN` are set.
The repository includes automated Sentry release tracking via GitHub Actions. When a new release is published:
- Automatic Release Creation: A Sentry release is created with the version tag
- Commit Association: All commits are automatically associated with the release
- Error Attribution: Errors can be traced back to specific releases
Setup Required (One-time):
To enable automated release tracking and source map uploads, add your Sentry Auth Token as a GitHub secret:
- Go to Sentry.io → Settings → Auth Tokens
- Create a new token with `project:releases` and `project:write` scopes
- In GitHub, go to Settings → Secrets and variables → Actions
- Create a new secret named `SENTRY_AUTH_TOKEN` with your token value
Note: The same token is used for both release tracking and source map uploads during CI/CD builds.
The workflow automatically triggers on every release and:
- Creates a new Sentry release with the version tag
- Associates all commits since the last release
- Finalizes the release for tracking
## Development

```
src/
├── db/
│   ├── index.ts                  # Database connection
│   └── schema.ts                 # Database schema definitions
├── lib/
│   ├── cache.ts                  # Valkey/Redis cluster caching layer
│   ├── cache-keys.ts             # Centralized cache key patterns and TTL values
│   ├── logger.ts                 # Structured logging with automatic error capture
│   └── validation.ts             # Zod validation schemas
├── middleware/
│   ├── auth.ts                   # Authentication middleware
│   ├── rate-limit.ts             # Rate limiting middleware
│   ├── security.ts               # Security headers middleware
│   └── usage.ts                  # Storage and usage limit enforcement
├── routes/
│   ├── code.ts                   # Code execution routes (Judge0 proxy)
│   ├── files.ts                  # File attachment routes
│   ├── folders.ts                # Folder management routes with caching
│   ├── notes.ts                  # Note management routes
│   └── users.ts                  # User profile routes
├── types/
│   └── index.ts                  # TypeScript type definitions
├── websocket/
│   ├── auth/
│   │   └── handler.ts            # JWT authentication and HMAC verification
│   ├── handlers/
│   │   ├── base.ts               # Base handler for resource operations
│   │   ├── notes.ts              # Note sync operations
│   │   └── folders.ts            # Folder sync operations
│   ├── middleware/
│   │   ├── connection-manager.ts # Connection tracking and cleanup
│   │   └── rate-limiter.ts       # WebSocket rate limiting
│   ├── types.ts                  # WebSocket message types
│   └── index.ts                  # Main WebSocket server manager
└── server.ts                     # Application entry point
```
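The structured logging in `src/lib/logger.ts` pairs JSON log lines with automatic error capture. A minimal sketch of that pattern - field names and the Sentry hand-off are assumptions, not the actual implementation:

```typescript
// Illustrative structured-logging sketch with automatic error capture.
import * as Sentry from "@sentry/node";

type Level = "info" | "warn" | "error";

function log(level: Level, message: string, context: Record<string, unknown> = {}) {
  // One machine-parseable JSON line per event, with timestamp and context
  const line = JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...context,
  });
  console[level === "info" ? "log" : level](line);

  // Forward errors to Sentry when an Error object is attached
  if (level === "error" && context.error instanceof Error) {
    Sentry.captureException(context.error);
  }
}

log("info", "websocket.connected", { userId: "user_123" });
```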
This project uses TypeScript in strict mode with comprehensive type definitions. All database operations, API inputs, and responses are fully typed.
## Docker Support

The API can be run in Docker containers for local testing. The architecture separates the API from the database:
```bash
# 1. Start PostgreSQL container for local testing
docker run --name typelets-postgres -e POSTGRES_PASSWORD=devpassword -e POSTGRES_DB=typelets_local -p 5432:5432 -d postgres:15

# 2. Build your API container
docker build -t typelets-api .

# 3. Run API container for local testing
docker run -p 3000:3000 --env-file .env typelets-api

# Run with environment file
docker run -p 3000:3000 \
  -e NODE_ENV=development \
  --env-file .env \
  typelets-api
```
This Docker setup is for local development and testing only.
| Environment | API | Database | Configuration |
|---|---|---|---|
| Local Testing | Docker container or `pnpm run dev` | Docker PostgreSQL container | `.env` file |
| Production | ECS container | AWS RDS PostgreSQL | ECS task definition |
## Production Deployment

This application is designed for production deployment using AWS ECS (Elastic Container Service):
- API: ECS containers running in AWS
- Database: AWS RDS PostgreSQL (not Docker containers)
- Environment Variables: ECS task definitions (not `.env` files)
- Secrets: AWS Parameter Store or Secrets Manager
- Container Registry: Amazon ECR
- Local: Uses `.env` files and Docker containers for testing
- Production: Uses ECS task definitions and AWS RDS for real deployment
- Never use: Local testing setup in production

For production deployment, configure the same environment variables in your ECS task definition that you use locally in `.env`.
## Contributing

We welcome contributions from the community!
- Fork the repository on GitHub
- Clone your fork locally:

```bash
git clone https://github.com/your-username/typelets-api.git
cd typelets-api
```

- Install dependencies:

```bash
pnpm install
```

- Set up environment:

```bash
cp .env.example .env
```

- Start PostgreSQL:

```bash
docker run --name typelets-postgres \
  -e POSTGRES_PASSWORD=devpassword \
  -e POSTGRES_DB=typelets_local \
  -p 5432:5432 -d postgres:15
```

- Apply database schema:

```bash
pnpm run db:push
```

- Start development:

```bash
pnpm run dev
```
We use Conventional Commits for automatic versioning and changelog generation:

- `feat:` New feature (minor version bump)
- `fix:` Bug fix (patch version bump)
- `docs:` Documentation changes
- `style:` Code style changes (formatting, etc.)
- `refactor:` Code refactoring
- `perf:` Performance improvements
- `test:` Adding or updating tests
- `chore:` Maintenance tasks
- `ci:` CI/CD changes
Examples:

```
feat(auth): add refresh token rotation
fix(files): resolve file upload size validation
feat(api)!: change authentication header format
```
- Create a feature branch:

```bash
git checkout -b feature/your-feature-name
```

- Make your changes and commit using conventional commits
- Run linting and tests:

```bash
pnpm run lint && pnpm run build
```

- Push to your fork and create a Pull Request
- Ensure all CI checks pass
- Wait for review and address any feedback
- Wait for review and address any feedback
When reporting bugs, please include:
- Clear description of the issue
- Steps to reproduce
- Expected vs actual behavior
- Environment details (OS, Node version)
- Error messages or logs if applicable
DO NOT report security vulnerabilities through public GitHub issues. Please use GitHub's private vulnerability reporting feature or contact the maintainers directly.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Hono for the excellent web framework
- Drizzle ORM for type-safe database operations
- Clerk for authentication services