# @venturekit-pro/ai
## Installation

```sh
npm install @venturekit-pro/ai@dev

# Install providers you need
npm install openai                            # OpenAI
npm install @aws-sdk/client-bedrock-runtime   # AWS Bedrock
npm install @pinecone-database/pinecone       # Pinecone
```

## What It Provides

### Embeddings
Section titled “Embeddings”import { createEmbedder, createEmbeddingConfig, DEFAULT_EMBEDDING_CONFIG } from '@venturekit-pro/ai';
const embedder = createEmbedder({ provider: 'openai', model: 'text-embedding-3-small', apiKey });const vector = await embedder.embed('Hello world');Vector Stores
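`embed` returns a dense numeric vector, and downstream ranking is conventionally done with cosine similarity over such vectors. A minimal standalone sketch in plain TypeScript (not part of this package's API):

```ts
// Cosine similarity between two equal-length dense vectors.
// Returns a value in [-1, 1]; 1 means the vectors point the same way.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```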
### Vector Stores

```ts
import { createVectorStore, createVectorStoreConfig, DEFAULT_VECTOR_STORE_CONFIG } from '@venturekit-pro/ai';

const store = createVectorStore({ provider: 'pinecone', indexName: 'my-index', apiKey });
await store.upsert([{ id: 'doc-1', vector, metadata: {} }]);
const results = await store.query(queryVector, { topK: 5 });
```
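Conceptually, a `topK` query scores every stored record against the query vector and keeps the highest-scoring ids. A brute-force sketch of that idea (plain TypeScript; the record shape mirrors the `upsert` example above, but this is not the store's actual implementation):

```ts
// Record shape assumed from the upsert example: id, vector, metadata.
interface StoredVector {
  id: string;
  vector: number[];
  metadata: Record<string, unknown>;
}

// Brute-force nearest-neighbour search: score every stored vector against
// the query with cosine similarity and return the topK best matches.
function bruteForceQuery(records: StoredVector[], queryVector: number[], topK: number) {
  const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (a: number[]) => Math.sqrt(dot(a, a));
  return records
    .map(r => ({ id: r.id, score: dot(r.vector, queryVector) / (norm(r.vector) * norm(queryVector)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

Real vector stores replace the linear scan with an approximate index, but the input/output contract is the same.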
### RAG Pipeline

```ts
import { createRagPipeline, createRagConfig, DEFAULT_RAG_CONFIG, chunkText } from '@venturekit-pro/ai';

const rag = createRagPipeline({ embedder, vectorStore: store, chunkSize: 500 });
const chunks = chunkText(text, { size: 500, overlap: 50 });
await rag.ingest(chunks);
const context = await rag.retrieve('query', { topK: 3 });
```
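One plausible reading of the `size`/`overlap` options is sliding-window chunking: a window of `size` characters steps forward by `size - overlap` each time, so consecutive chunks share `overlap` characters of context. A standalone sketch of that behaviour (an assumption about `chunkText`, not its actual source):

```ts
// Sliding-window chunking: each chunk is `size` characters long and
// overlaps the previous chunk by `overlap` characters.
function slidingWindowChunks(text: string, opts: { size: number; overlap: number }): string[] {
  const step = opts.size - opts.overlap;
  if (step <= 0) throw new Error('overlap must be smaller than size');
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + opts.size));
    if (start + opts.size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Overlap trades index size for recall: a sentence split across a chunk boundary still appears whole in one of the two overlapping chunks.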
### Agents

```ts
import { createAgent, createAgentConfig, DEFAULT_AGENT_CONFIG, defineTool } from '@venturekit-pro/ai';

const tool = defineTool({ name: 'search', description: '...', parameters: {}, handler: async () => {} });
const agent = createAgent({ model: 'gpt-4', apiKey, tools: [tool], systemPrompt: '...' });
const response = await agent.run('question');
```
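Inside an agent loop, when the model requests a tool call by name, the runtime looks up the matching tool and invokes its handler. A minimal standalone sketch of that dispatch step, with a `Tool` interface inferred from the `defineTool` example above (hypothetical names, not the library's internals):

```ts
// Tool shape inferred from the defineTool example: name, description,
// parameters, and an async handler.
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

// Dispatch one model-requested tool call to the matching handler.
async function dispatchToolCall(tools: Tool[], name: string, args: Record<string, unknown>): Promise<unknown> {
  const tool = tools.find(t => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}
```

The handler's return value is what gets fed back to the model as the tool result on the next loop iteration.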
## Dependencies

- `@venturekit/core` — required
- `openai` — optional peer (embeddings, agents)
- `@aws-sdk/client-bedrock-runtime` — optional peer (Bedrock embeddings)
- `@pinecone-database/pinecone` — optional peer (Pinecone vector store)
## Related
- AI Guide — setup walkthrough
- API Reference — full type documentation