
AI

VentureKit provides AI capabilities through the @venturekit-pro/ai package: embeddings, vector stores, retrieval-augmented generation (RAG) pipelines, and agents with tool use.

npm install @venturekit-pro/ai@dev
# Install provider(s) you need
npm install openai # OpenAI embeddings + agents
npm install @aws-sdk/client-bedrock-runtime # AWS Bedrock embeddings
npm install @pinecone-database/pinecone # Pinecone vector store

Generate vector embeddings from text:

import { createEmbedder } from '@venturekit-pro/ai';

const embedder = createEmbedder({
  provider: 'openai',
  model: 'text-embedding-3-small',
  apiKey: process.env.OPENAI_API_KEY,
});

const vector = await embedder.embed('What is VentureKit?');
const vectors = await embedder.embedBatch(['Question 1', 'Question 2']);
| Provider    | Models                                         | Setup           |
| ----------- | ---------------------------------------------- | --------------- |
| OpenAI      | text-embedding-3-small, text-embedding-3-large | API key         |
| AWS Bedrock | Titan, Cohere                                  | AWS credentials |
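Assuming the Bedrock provider accepts the same `createEmbedder` options as OpenAI does above, setup might look like the sketch below. The `'bedrock'` provider string and the Titan model ID are illustrative assumptions, not confirmed by the package:

```typescript
import { createEmbedder } from '@venturekit-pro/ai';

// Hypothetical: the provider name and model ID are assumptions.
// AWS credentials are resolved from the standard environment
// variables / credential chain rather than an apiKey option.
const bedrockEmbedder = createEmbedder({
  provider: 'bedrock',
  model: 'amazon.titan-embed-text-v2:0',
});
```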

Store and query vector embeddings:

import { createVectorStore } from '@venturekit-pro/ai';

const store = createVectorStore({
  provider: 'pinecone',
  indexName: 'my-index',
  apiKey: process.env.PINECONE_API_KEY,
});

// Upsert vectors
await store.upsert([
  { id: 'doc-1', vector, metadata: { title: 'Getting Started' } },
]);

// Query similar vectors
const results = await store.query(queryVector, { topK: 5 });
| Provider  | Description                                      |
| --------- | ------------------------------------------------ |
| Pinecone  | Managed vector database                          |
| pgvector  | PostgreSQL extension (use with @venturekit/data) |
| In-memory | Development and testing                          |
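Conceptually, a vector store ranks entries by similarity to the query vector, and the in-memory provider can do this with a linear scan. A minimal, hypothetical sketch of that idea (not the library's implementation):

```typescript
type Entry = { id: string; vector: number[]; metadata?: Record<string, unknown> };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Naive top-K query: score every entry, sort descending, take K.
function query(entries: Entry[], queryVector: number[], topK: number) {
  return entries
    .map(e => ({ ...e, score: cosine(e.vector, queryVector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```

A linear scan is fine for development-sized data; managed stores like Pinecone exist precisely because this does not scale to millions of vectors.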

Build retrieval-augmented generation pipelines:

import { createRagPipeline, chunkText } from '@venturekit-pro/ai';

const rag = createRagPipeline({
  embedder,
  vectorStore: store,
  chunkSize: 500,
  chunkOverlap: 50,
});

// Chunk a document
const chunks = chunkText(documentText, { size: 500, overlap: 50 });

// Ingest into the pipeline
await rag.ingest(chunks);

// Retrieve the most relevant chunks for a query
const context = await rag.retrieve('How do I deploy?', { topK: 3 });
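The size/overlap chunking that `chunkText` performs — fixed-size windows that each share some characters with the previous window, so sentences straddling a boundary still appear whole in at least one chunk — can be pictured with this simplified sketch (hypothetical, not the library's code):

```typescript
// Split text into windows of `size` characters, where each window
// overlaps the previous one by `overlap` characters.
function chunk(text: string, size: number, overlap: number): string[] {
  const step = size - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

With `size: 500, overlap: 50` as in the pipeline config above, each chunk repeats the last 50 characters of its predecessor.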

Create AI agents with tool use:

import { createAgent, defineTool } from '@venturekit-pro/ai';

const searchTool = defineTool({
  name: 'search_docs',
  description: 'Search the documentation',
  parameters: {
    query: { type: 'string', description: 'Search query' },
  },
  handler: async ({ query }) => {
    const results = await rag.retrieve(query, { topK: 3 });
    return results.map(r => r.text).join('\n\n');
  },
});

const agent = createAgent({
  model: 'gpt-4',
  apiKey: process.env.OPENAI_API_KEY,
  tools: [searchTool],
  systemPrompt: 'You are a helpful assistant that answers questions about VentureKit.',
});

const response = await agent.run('How do I set up authentication?');
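Under the hood, an agent loop has to resolve each tool call the model emits to the matching handler by name. A simplified, hypothetical dispatch sketch (the tool shape is inferred from the `defineTool` example above; this is not the library's internals):

```typescript
type Tool = {
  name: string;
  handler: (args: Record<string, unknown>) => Promise<string>;
};

// Find the tool the model asked for and invoke its handler with the
// model-supplied arguments; the returned string is fed back to the model.
async function dispatchToolCall(
  tools: Tool[],
  call: { name: string; args: Record<string, unknown> },
): Promise<string> {
  const tool = tools.find(t => t.name === call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool.handler(call.args);
}
```

This is why tool `name` and `description` matter: the model only ever sees that metadata, and dispatch is a plain name lookup.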
Use the AI primitives from inside a VentureKit handler:

import { handler } from '@venturekit/runtime';
import { createEmbedder, createVectorStore, createRagPipeline } from '@venturekit-pro/ai';

const embedder = createEmbedder({ provider: 'openai', model: 'text-embedding-3-small', apiKey: process.env.OPENAI_API_KEY });
const store = createVectorStore({ provider: 'pinecone', indexName: 'my-index', apiKey: process.env.PINECONE_API_KEY });

export const main = handler(async (body, ctx, logger) => {
  const rag = createRagPipeline({ embedder, vectorStore: store });
  const context = await rag.retrieve(body.question, { topK: 3 });
  logger.info('RAG retrieval', { question: body.question, resultCount: context.length });
  return { answer: context };
}, { scopes: ['api.read'] });