# AI
VentureKit provides AI capabilities through `@venturekit-pro/ai`: embeddings, vector stores, RAG pipelines, and agents with tool use.
## Installation

```sh
npm install @venturekit-pro/ai@dev

# Install provider(s) you need
npm install openai                            # OpenAI embeddings + agents
npm install @aws-sdk/client-bedrock-runtime   # AWS Bedrock embeddings
npm install @pinecone-database/pinecone       # Pinecone vector store
```
## Embeddings

Generate vector embeddings from text:
```typescript
import { createEmbedder } from '@venturekit-pro/ai';

const embedder = createEmbedder({
  provider: 'openai',
  model: 'text-embedding-3-small',
  apiKey: process.env.OPENAI_API_KEY,
});

const vector = await embedder.embed('What is VentureKit?');
const vectors = await embedder.embedBatch(['Question 1', 'Question 2']);
```
### Providers

| Provider | Models | Setup |
|---|---|---|
| OpenAI | text-embedding-3-small, text-embedding-3-large | API key |
| AWS Bedrock | Titan, Cohere | AWS credentials |
## Vector Stores

Store and query vector embeddings:
```typescript
import { createVectorStore } from '@venturekit-pro/ai';

const store = createVectorStore({
  provider: 'pinecone',
  indexName: 'my-index',
  apiKey: process.env.PINECONE_API_KEY,
});

// Upsert vectors
await store.upsert([
  { id: 'doc-1', vector, metadata: { title: 'Getting Started' } },
]);

// Query similar vectors
const results = await store.query(queryVector, { topK: 5 });
```
### Providers

| Provider | Description |
|---|---|
| Pinecone | Managed vector database |
| pgvector | PostgreSQL extension (use with @venturekit/data) |
| In-memory | Development and testing |
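Conceptually, the in-memory provider can be thought of as a brute-force similarity scan over everything upserted so far. Below is a minimal sketch of that idea in plain TypeScript; the `MemoryStore` class and cosine scoring here are illustrative, not the library's actual internals:

```typescript
// A stored record: id, embedding vector, optional metadata.
type VectorRecord = { id: string; vector: number[]; metadata?: { [key: string]: string } };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class MemoryStore {
  private records: VectorRecord[] = [];

  upsert(records: VectorRecord[]): void {
    for (const r of records) {
      // Replace an existing record with the same id, otherwise append.
      const i = this.records.findIndex(x => x.id === r.id);
      if (i >= 0) this.records[i] = r;
      else this.records.push(r);
    }
  }

  // Score every record against the query vector, return the topK best.
  query(vector: number[], topK: number): { id: string; score: number }[] {
    return this.records
      .map(r => ({ id: r.id, score: cosine(vector, r.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK);
  }
}
```

A linear scan is fine for tests and small corpora; managed stores like Pinecone exist precisely because this approach stops scaling once the corpus grows.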
## RAG Pipeline

Build retrieval-augmented generation pipelines:
```typescript
import { createRagPipeline, chunkText } from '@venturekit-pro/ai';

const rag = createRagPipeline({
  embedder,
  vectorStore: store,
  chunkSize: 500,
  chunkOverlap: 50,
});
```
### Ingesting Documents

```typescript
// Chunk a document
const chunks = chunkText(documentText, { size: 500, overlap: 50 });

// Ingest into the pipeline
await rag.ingest(chunks);
```
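The `size` and `overlap` options describe a sliding window: each chunk is up to `size` units long, and consecutive chunks share `overlap` units so context isn't cut mid-thought at a boundary. A rough character-based sketch of that behavior (the library's actual `chunkText` may split on tokens or sentence boundaries instead):

```typescript
// Character-window chunking: each chunk is up to `size` chars,
// and the window advances by (size - overlap) each step.
function chunk(text: string, size: number, overlap: number): string[] {
  const step = size - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

For example, `chunk('abcdefghij', 4, 2)` yields `['abcd', 'cdef', 'efgh', 'ghij']`: every adjacent pair shares two characters.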
### Retrieving Context

```typescript
const context = await rag.retrieve('How do I deploy?', { topK: 3 });
// Returns the most relevant chunks for your query
```
## Agents

Create AI agents with tool use:
```typescript
import { createAgent, defineTool } from '@venturekit-pro/ai';

const searchTool = defineTool({
  name: 'search_docs',
  description: 'Search the documentation',
  parameters: {
    query: { type: 'string', description: 'Search query' },
  },
  handler: async ({ query }) => {
    const results = await rag.retrieve(query, { topK: 3 });
    return results.map(r => r.text).join('\n\n');
  },
});

const agent = createAgent({
  model: 'gpt-4',
  apiKey: process.env.OPENAI_API_KEY,
  tools: [searchTool],
  systemPrompt: 'You are a helpful assistant that answers questions about VentureKit.',
});

const response = await agent.run('How do I set up authentication?');
```
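Conceptually, tool use is a loop: on each turn the model either produces a final answer or names a tool to call; the runtime executes the matching handler and feeds the result back into the conversation. A stripped-down sketch of that dispatch loop (the `ModelTurn` type, `stubModel` idea, and `runAgent` function here are illustrative, not the internals of `createAgent`):

```typescript
type Tool = {
  name: string;
  handler: (args: { [key: string]: string }) => Promise<string>;
};

// One model turn: either a final answer or a request to call a named tool.
type ModelTurn =
  | { type: 'answer'; text: string }
  | { type: 'tool_call'; name: string; args: { [key: string]: string } };

async function runAgent(
  model: (transcript: string[]) => Promise<ModelTurn>,
  tools: Tool[],
  prompt: string,
  maxSteps = 5,
): Promise<string> {
  const transcript = [prompt];
  for (let step = 0; step < maxSteps; step++) {
    const turn = await model(transcript);
    if (turn.type === 'answer') return turn.text;
    // Dispatch to the requested tool and append its result to the transcript.
    const tool = tools.find(t => t.name === turn.name);
    if (!tool) throw new Error(`unknown tool: ${turn.name}`);
    transcript.push(await tool.handler(turn.args));
  }
  throw new Error('max steps exceeded');
}
```

The `maxSteps` cap matters in practice: a model that keeps requesting tools would otherwise loop forever, so agent runtimes bound the number of tool-call rounds.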
## Using AI in Handlers

```typescript
import { handler } from '@venturekit/runtime';
import { createEmbedder, createVectorStore, createRagPipeline } from '@venturekit-pro/ai';

// Create clients once at module scope rather than on every request
const embedder = createEmbedder({
  provider: 'openai',
  model: 'text-embedding-3-small',
  apiKey: process.env.OPENAI_API_KEY,
});
const store = createVectorStore({
  provider: 'pinecone',
  indexName: 'my-index',
  apiKey: process.env.PINECONE_API_KEY,
});
const rag = createRagPipeline({ embedder, vectorStore: store });

export const main = handler(async (body, ctx, logger) => {
  const context = await rag.retrieve(body.question, { topK: 3 });

  logger.info('RAG retrieval', { question: body.question, resultCount: context.length });
  return { answer: context };
}, { scopes: ['api.read'] });
```