0007: Unified Embedding API #13
---
GEP: 0007
Title: Unified Embedding API
Discussion:
Implementation:
---

# Unified Embedding API
## Abstract

This GEP describes a unified embedding API and its interactions with embedding model providers.
## Motivation

Many LLM applications include a semantic search feature in their architecture. To support these applications on Glide, a unified embedding API is necessary. It will allow applications to embed chat requests as part of a retrieval-augmented generation (RAG) application flow.
### Requirements

- R1: Handle all provider-specific logic
- R2: Be easy to maintain
- R3: API schemas must unify common request params (e.g. dimensions)
- R4: API routes must unify common embedding endpoints
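To make R1 and R3 concrete, here is a hypothetical sketch of per-provider adapters that translate the unified request params into each provider's native embedding payload. The function names and default models are illustrative, not Glide's actual internals.

```python
# Hypothetical provider adapters (illustrative names, not Glide internals).
# Each adapter maps the unified params (message, dimensions) onto one
# provider's native embedding payload, keeping provider-specific logic (R1)
# behind a single unified schema (R3).

def to_openai_payload(message: str, dimensions: int,
                      model: str = "text-embedding-3-small") -> dict:
    # OpenAI's embeddings endpoint accepts a dimensions parameter directly.
    return {"model": model, "input": message, "dimensions": dimensions}

def to_cohere_payload(message: str, dimensions: int,
                      model: str = "embed-multilingual-v3.0") -> dict:
    # Cohere v3 embedding models have fixed output dimensions, so the
    # unified dimensions param is dropped rather than forwarded.
    return {"model": model, "texts": [message], "input_type": "search_query"}
```

Whether a provider honors or drops a unified param is exactly the kind of decision the adapter layer would centralize.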

## Design

The proposed route layout adds an embedding route alongside Glide's other route families:

```yaml
routes:
  chat: /v1/language/{pool-id}/chat/
  transcribers: /v1/speech/transcribers/{pool-id}/
  speech-synthesizer: /v1/speech/synthesizers/{pool-id}/
  multi-modal: /v1/multi/{pool-id}/multimodal/
  embedding: /v1/embeddings/{pool-id}/embed/
```
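Given the route template above, a client can derive the concrete embedding URL for a pool. A minimal sketch, where the base URL is a placeholder for an actual Glide deployment:

```python
# Minimal sketch: resolve the embedding route for a given pool.
# BASE_URL is a placeholder; point it at your Glide deployment.
BASE_URL = "http://localhost:9099"

def embedding_url(pool_id: str) -> str:
    # Mirrors the `embedding` route template: /v1/embeddings/{pool-id}/embed/
    return f"{BASE_URL}/v1/embeddings/{pool_id}/embed/"
```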

#### User Request Schema for Embedding Route

```json
{
  "message": "Where was it played?",
  "dimensions": 1536
}
```
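A small request-side helper can enforce this schema before a call is made. This sketch assumes the two fields shown above are the only ones, and the default of 1536 simply echoes the example; both are assumptions, not requirements stated by this GEP.

```python
# Hypothetical helper that builds and sanity-checks the unified request body.
# Field names follow the schema above; the default dimensions value echoes
# the example and is an assumption.

def build_embed_request(message: str, dimensions: int = 1536) -> dict:
    if not message:
        raise ValueError("message must be non-empty")
    if dimensions <= 0:
        raise ValueError("dimensions must be a positive integer")
    return {"message": message, "dimensions": dimensions}
```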

#### Response Schema for Embedding Route

```json
{
  "provider": "cohere",
  "model": "embed-multilingual-v3.0",
  "provider_response": {
    "embedding": [
      0.0023064255,
      -0.009327292,
      ....
      -0.0028842222
    ],
    "token_count": {
      "prompt_tokens": 9,
      "total_tokens": 9
    }
  }
}
```
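A sketch of unpacking the unified response on the client side. Field names follow the example above; the sample vector is truncated for brevity.

```python
# Illustrative parsing of the unified embedding response. Field names follow
# the example schema; real responses carry full-length vectors.

def parse_embed_response(resp: dict) -> tuple:
    body = resp["provider_response"]
    return body["embedding"], body["token_count"]["total_tokens"]

sample = {
    "provider": "cohere",
    "model": "embed-multilingual-v3.0",
    "provider_response": {
        "embedding": [0.0023064255, -0.009327292, -0.0028842222],
        "token_count": {"prompt_tokens": 9, "total_tokens": 9},
    },
}
```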

## Alternatives Considered

[TBU, what other solutions were considered and why they were rejected]

## Future Work

- Could we abstract away the entire RAG architecture? A single endpoint that takes a chat message -> embeds -> text semantic search -> LLM -> response
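The flow floated in that bullet could be composed like the following sketch, with each stage stubbed out. All function names are hypothetical; the Glide routes are only referenced in comments, and the vector store is external to Glide.

```python
# Illustrative composition of the proposed single-endpoint RAG flow:
# chat message -> embed -> semantic search -> LLM -> response.
# The callables are stand-ins for the Glide embedding/chat routes and an
# external vector store; none of these names exist in Glide today.
from typing import Callable, List

def rag_answer(
    message: str,
    embed: Callable[[str], List[float]],         # e.g. the embedding route
    search: Callable[[List[float]], List[str]],  # vector-store lookup
    chat: Callable[[str], str],                  # e.g. the chat route
) -> str:
    vector = embed(message)
    context = search(vector)
    prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + message
    return chat(prompt)
```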
|

Review comment: This actually sounds like an idea for another service that would use Glide to talk to LLMs while providing RAG-related additions to Glide's workflows. @mkrueger12 what do you think? What would be an MVP for such a service?
Review comment: I have recently changed `token_count` to `token_usage`. I feel like this is a bit more explicit.