CrewAI Integration
Use Metaphori as a custom LLM provider in your CrewAI agents for compressed context and reduced costs.
Installation
```shell
pip install crewai langchain-openai
```
Configuration
```python
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

# Configure Metaphori as your LLM
metaphori_llm = ChatOpenAI(
    model="gpt-4o",  # Use any OpenAI model
    openai_api_key="your-metaphori-api-key",
    openai_api_base="https://ai.metaphori.dev/v1",
    model_kwargs={
        "extra_params": {
            "mid": "your-compression-id"  # Optional: use a specific compression
        }
    },
)
```
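If you prefer to keep credentials out of source code, `ChatOpenAI` can also pick up the key and base URL from environment variables. A minimal sketch, assuming langchain-openai's default variable names (`OPENAI_API_KEY` and `OPENAI_API_BASE`; verify against the version you have installed):

```python
import os

# Point the OpenAI-compatible client at Metaphori via the environment
# instead of constructor arguments.
os.environ["OPENAI_API_KEY"] = "your-metaphori-api-key"
os.environ["OPENAI_API_BASE"] = "https://ai.metaphori.dev/v1"

# With these set, ChatOpenAI(model="gpt-4o") would pick them up
# automatically; the mid parameter still goes through model_kwargs.
```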
```python
# Create agents with Metaphori
researcher = Agent(
    role='Senior Researcher',
    goal='Conduct thorough research on {topic}',
    backstory='You are an expert researcher with compressed context access',
    llm=metaphori_llm,
    verbose=True
)

# Create tasks and crew as usual
research_task = Task(
    description='Research the latest developments in {topic}',
    expected_output='A summary of the latest developments in {topic}',
    agent=researcher
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    verbose=True
)

# Execute with compressed context
result = crew.kickoff(inputs={'topic': 'quantum computing'})
```

Using Compressions
To use a specific compression with your CrewAI agents:
- Create a compression using the CLI: `metaphori create context.md`
- Get the compression ID from the response
- Pass the ID in `model_kwargs` as shown above
API Endpoint Details
- Base URL: `https://ai.metaphori.dev/v1`
- Authentication: Bearer token (your Metaphori API key)
- Compression ID: pass as the `mid` parameter or in the model name
- Compatible models: all OpenAI models (`gpt-4o`, `gpt-4`, `gpt-3.5-turbo`, etc.)
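When debugging outside CrewAI, the details above can be exercised directly. A sketch of the request the endpoint expects, assuming the standard OpenAI chat-completions body with Metaphori's `mid` added as an extra field (the `/chat/completions` path and the field's placement are assumptions based on OpenAI compatibility, not confirmed by these docs):

```python
import json
import urllib.request


def build_chat_request(api_key, model, messages, mid=None):
    """Assemble (but do not send) the HTTP request for Metaphori's
    OpenAI-compatible endpoint: Bearer auth header plus a JSON body
    with the optional compression ID merged in."""
    body = {"model": model, "messages": messages}
    if mid is not None:
        body["mid"] = mid  # Metaphori's compression parameter
    return urllib.request.Request(
        "https://ai.metaphori.dev/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request(
    "your-metaphori-api-key",
    "gpt-4o",
    [{"role": "user", "content": "Summarize the compressed context."}],
    mid="your-compression-id",
)
```

Sending `req` with `urllib.request.urlopen` (or the equivalent `curl` call) should return a standard chat-completion response.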