<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
{{BloomIntro}}
Embeddings and vector databases are the foundational infrastructure of modern semantic AI applications. An embedding is a dense numerical vector that represents the meaning of text, images, audio, or other data in a high-dimensional space, where semantically similar items are geometrically close together. Vector databases store these embeddings and enable lightning-fast similarity search at scale. Together, they power semantic search, RAG systems, recommendation engines, duplicate detection, and anomaly detection.
</div>

__TOC__

<div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Remembering</span> ==
* '''Embedding''' – A dense vector of real numbers representing the meaning of a piece of data. Similar items have vectors that are close together in the embedding space.
* '''Embedding model''' – A neural network trained to produce embeddings. Examples: text-embedding-3-small (OpenAI), BGE-M3 (BAAI), all-MiniLM-L6-v2 (Sentence Transformers).
* '''Dimensionality''' – The number of values in an embedding vector. Common sizes: 384, 768, 1536, 3072. Higher dimensions can capture more nuance but require more storage and compute.
* '''Semantic similarity''' – The degree to which two items mean the same thing, encoded as the geometric distance between their embeddings.
* '''Cosine similarity''' – The most common similarity metric for embeddings; measures the angle between two vectors. Values range from -1 (opposite) to 1 (identical).
* '''Dot product''' – An alternative similarity metric; equivalent to cosine similarity when vectors are normalized.
* '''L2 distance (Euclidean)''' – The straight-line distance between two vectors; used in some retrieval scenarios.
* '''Vector database''' – A database optimized for storing embedding vectors and performing fast approximate nearest neighbor (ANN) search. Examples: Pinecone, Weaviate, Chroma, Qdrant, Milvus, pgvector.
* '''ANN (Approximate Nearest Neighbor)''' – An algorithm that finds vectors approximately closest to a query vector very quickly, sacrificing exact precision for speed.
* '''HNSW (Hierarchical Navigable Small World)''' – The most widely used ANN index structure, offering excellent speed–recall trade-offs.
* '''Metadata filtering''' – Restricting vector search results to items matching certain criteria (e.g., only articles from 2024, only products in the "electronics" category).
* '''Bi-encoder''' – A model that encodes queries and documents independently into embedding space, enabling fast retrieval (e.g., Sentence-BERT).
* '''Cross-encoder''' – A model that takes a query–document pair as input and outputs a relevance score; more accurate than a bi-encoder but much slower (used for reranking).
* '''Chunking''' – Splitting large documents into smaller pieces before embedding, since embedding models have token limits.
</div>

<div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Understanding</span> ==
The magic of embeddings is that they transform the hard problem of semantic similarity into simple geometric distance. After training on massive amounts of text data (or image–text pairs), embedding models learn to place words, sentences, and documents that mean similar things close together in a high-dimensional space.

The classic demonstration: in a good word embedding space:
* "king" - "man" + "woman" ≈ "queen"
* "Paris" - "France" + "Italy" ≈ "Rome"
This isn't hardcoded – it emerges from the statistical patterns of how words co-occur in language.

'''Why not use keyword search?''' Keywords match exact strings; semantic search understands meaning.
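The similarity metrics defined under Remembering reduce to a few lines of code. A toy sketch in plain Python (the vectors here are made-up three-dimensional examples, not real embeddings):

```python
import math

# Two toy, already unit-length "embeddings" (real ones have hundreds of dimensions)
a = [0.6, 0.8, 0.0]
b = [0.8, 0.6, 0.0]

def dot(u, v):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    """Euclidean (L2) length of a vector."""
    return math.sqrt(dot(u, u))

def cosine(u, v):
    """Cosine similarity: the angle between u and v, in [-1, 1]."""
    return dot(u, v) / (norm(u) * norm(v))

cos = cosine(a, b)    # similar but not identical direction
dp = dot(a, b)        # equals cosine here because a and b are unit length
l2 = math.dist(a, b)  # straight-line distance

print(cos, dp, l2)
```

Because the two vectors are normalized, the dot product and cosine similarity coincide, which is exactly why many embedding pipelines normalize vectors at encoding time.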
A query for "cardiac event" will find documents about "heart attack" via embeddings; keyword search would miss this unless the exact phrase appears.

'''How vector databases work''': Storing millions of embedding vectors and doing exact search (computing cosine similarity against every stored vector) would be too slow. ANN algorithms solve this by building smart index structures. HNSW (Hierarchical Navigable Small World) builds a layered graph where each upper layer is a sparser approximation of the denser layer below – like a highway system where you first navigate between cities (coarse layers) and then between neighborhoods (fine layers). This achieves sub-millisecond query times on millions of vectors.

'''Hybrid search''' combines vector (semantic) search with BM25 keyword search, using Reciprocal Rank Fusion (RRF) to merge results. This consistently outperforms either approach alone, because different query types benefit from different retrieval mechanisms.
</div>

<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''Generating and storing embeddings with Sentence Transformers + Chroma:'''
<syntaxhighlight lang="python">
from sentence_transformers import SentenceTransformer
import chromadb

# Load embedding model
model = SentenceTransformer("BAAI/bge-m3")

# Sample documents
docs = [
    "Neural networks are the foundation of deep learning.",
    "The heart pumps blood through the circulatory system.",
    "Python is a popular programming language for data science.",
    "Transformers use self-attention mechanisms for NLP tasks.",
    "The mitochondria are the powerhouse of the cell.",
]

# Generate embeddings
embeddings = model.encode(docs, normalize_embeddings=True)
print(f"Embedding shape: {embeddings.shape}")  # (5, 1024)

# Store in vector database
client = chromadb.Client()
collection = client.create_collection("knowledge_base")
collection.add(
    documents=docs,
    embeddings=embeddings.tolist(),
    ids=[f"doc_{i}" for i in range(len(docs))],
)

# Semantic search
query = "How do attention mechanisms work?"
query_embedding = model.encode([query], normalize_embeddings=True).tolist()
results = collection.query(
    query_embeddings=query_embedding,
    n_results=2,
)
print(results["documents"])
# [["Transformers use self-attention mechanisms for NLP tasks.",
#   "Neural networks are the foundation of deep learning."]]
</syntaxhighlight>

; Vector database selection guide
: '''Local/development''' – Chroma (in-memory, file-backed), FAISS (library)
: '''Self-hosted production''' – Qdrant (Rust, great performance), Weaviate (rich features), Milvus (scale)
: '''Managed cloud''' – Pinecone (simplest API), Weaviate Cloud, Zilliz Cloud
: '''Existing PostgreSQL stack''' – pgvector extension (good for <10M vectors)
: '''Multimodal (text + image)''' – Weaviate, Qdrant (both support multiple vector types)
</div>

<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Embedding Model Comparison
! Model !! Dimensions !! Speed !! Quality !! Cost
|-
| text-embedding-3-small (OpenAI) || 1536 || Fast (API) || Very high || Paid per token
|-
| text-embedding-3-large (OpenAI) || 3072 || Fast (API) || Highest || More expensive
|-
| BAAI/BGE-M3 || 1024 || Moderate (local) || Very high || Free (self-hosted)
|-
| all-MiniLM-L6-v2 || 384 || Very fast (local) || Good || Free (self-hosted)
|-
| E5-mistral-7b || 4096 || Slow (large model) || Excellent || Free (GPU needed)
|}

'''Failure modes and pitfalls:'''
* '''Embedding model–retrieval mismatch''' – The embedding model used to index documents must be identical to the one used to embed queries. Using different models produces nonsensical results.
* '''Chunking artifacts''' – Important context split across chunks leads to poor retrievals. If an answer spans two chunks, neither may score high enough to be retrieved.
* '''Embedding stale data''' – If documents are updated but re-embedding is not triggered, the index serves outdated information. Implement change detection and incremental re-indexing.
* '''Dimensionality curse''' – In very high dimensions, all vectors tend to become equidistant from each other, degrading nearest-neighbor search quality. Use models with well-calibrated dimensionalities.
* '''Semantic gap''' – Embeddings capture distributional semantics but may miss precise numerical facts, dates, or codes. Combine with structured filters or keyword search.
</div>

<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Expert practitioners evaluate embedding systems at multiple levels:

'''Embedding quality metrics''':
* '''MTEB (Massive Text Embedding Benchmark)''': The standard benchmark suite for text embeddings, covering retrieval, classification, clustering, semantic similarity, and more.
* '''BEIR benchmark''': Zero-shot retrieval across diverse domains – the true test of embedding generalization.

'''System-level retrieval metrics''':
* Recall@k and Precision@k on held-out query–document pairs
* Mean Reciprocal Rank (MRR) for ranking quality
* Query latency at p50/p95/p99 percentiles

'''Operational metrics''':
* Index build time and storage size (cost)
* Query throughput (QPS) at target latency SLAs
* Index freshness lag (time between document update and searchability)

Finally, evaluate embeddings on your specific domain – a general embedding model trained on web text may underperform a fine-tuned domain-specific model on medical, legal, or code retrieval tasks.
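The system-level metrics above are straightforward to compute once you have gold relevance labels. A minimal sketch (the document ids and relevance labels here are hypothetical):

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the relevant documents that appear in the top-k results."""
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

def mrr(all_ranked, all_relevant):
    """Mean Reciprocal Rank: average of 1 / (rank of the first relevant hit)."""
    total = 0.0
    for ranked_ids, relevant_ids in zip(all_ranked, all_relevant):
        for rank, doc_id in enumerate(ranked_ids, start=1):
            if doc_id in relevant_ids:
                total += 1.0 / rank
                break
    return total / len(all_ranked)

# Two hypothetical queries: ranked retrieval results plus gold relevance labels
ranked = [["d3", "d1", "d7"], ["d2", "d9", "d4"]]
relevant = [{"d1", "d5"}, {"d9"}]

print(recall_at_k(ranked[0], relevant[0], 3))  # 0.5: only d1 of {d1, d5} retrieved
print(mrr(ranked, relevant))                   # 0.5: first hit at rank 2 in both queries
```

In practice these are computed over hundreds of held-out queries; the hypothetical two-query set here just illustrates the mechanics.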
</div>

<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Designing a scalable embedding and vector search infrastructure:

'''1. Embedding pipeline'''
<syntaxhighlight lang="text">
Data source (docs, products, articles)
  → [Change detection: hash-based or timestamp comparison]
  → [Chunking: semantic or recursive, 512-1024 tokens]
  → [Embedding generation: batched, GPU-accelerated]
  → [Vector store upsert: id, vector, metadata, document text]
  → [BM25 index update for hybrid search]
</syntaxhighlight>

'''2. Query pipeline'''
<syntaxhighlight lang="text">
User query
  → [Query preprocessing: lowercase, strip special chars]
  → [Parallel retrieval:
       ├── Dense (ANN): top-50 by cosine similarity
       └── Sparse (BM25): top-50 by keyword relevance]
  → [Reciprocal Rank Fusion: merge and deduplicate]
  → [Cross-encoder reranking: top-10 → top-5]
  → Top-k results with metadata and scores
</syntaxhighlight>

'''3. Production considerations'''
* Pre-filter by metadata before ANN search to reduce the search space
* Cache frequent query embeddings (TTL-based)
* Use asynchronous indexing to avoid blocking on document ingestion
* Set up monitoring: index size growth, query latency, empty result rates

[[Category:Artificial Intelligence]]
[[Category:Machine Learning]]
[[Category:Vector Databases]]
</div>
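The Reciprocal Rank Fusion step in the query pipeline above scores each document as the sum of 1 / (k + rank) over every result list it appears in, with k = 60 by convention. A minimal sketch (the document ids are hypothetical):

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Merge ranked id lists: each list contributes 1 / (k + rank) per document."""
    scores = {}
    for ranked_ids in result_lists:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort by fused score, highest first; this also deduplicates across lists
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d1", "d2", "d3"]   # top ids from ANN (semantic) search
sparse = ["d2", "d4", "d1"]  # top ids from BM25 keyword search
print(reciprocal_rank_fusion([dense, sparse]))  # ['d2', 'd1', 'd4', 'd3']
```

Note that d2 wins despite never ranking first in the dense list: appearing near the top of both lists beats topping only one, which is what makes RRF a robust merge strategy without any score normalization.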