Building Smarter APIs: A Guide to Integrating CrewAI with FastAPI
- Rifx.Online
- Programming, Technology, Machine Learning
- 19 Jan, 2025
Table of Contents
- 1. Introduction
- 2. Setting the Stage
- 3. Understanding CrewAI
- 4. Set Up FastAPI with CrewAI
- Conclusion
- Read More
1. Introduction
In 2025, we have many different ways to develop software. With the latest AI improvements, AI has entered our everyday workflow through a variety of tools and applications, even inside our favorite IDEs such as IntelliJ IDEA and Visual Studio Code.
Before jumping into CrewAI, I would like to explain at a glance what an LLM is. Large Language Models (LLMs) are machine learning models that can comprehend and generate human language text. An LLM is a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks.
Put simply, an LLM has been trained on enough data to recognize and interpret human language or other types of data. LLMs can automate complex, sequential workflows and tasks. For example, you can use LLMs to build assistants that autonomously order products online on your behalf and arrange their delivery in an app. These LLM-based assistants are called agents.
An agent is an LLM-powered assistant assigned specific tasks and tools to accomplish those tasks. In its basic form, a typical AI agent may be equipped with memory to store and manage user interactions, communicate with external data sources, and use functions to execute its tasks. Common examples of what an agent can do include the following.
Customer Support Agent
An AI agent can serve as a 24/7 customer service representative, handling FAQs, resolving customer issues, and escalating complex queries to human agents. For example, an AI agent in an e-commerce app could assist users in tracking orders, processing returns, or providing product recommendations in real time.
Personal Finance Advisor
An AI agent in a finance app can act as a virtual advisor, helping users manage budgets, analyze spending patterns, and suggest investment opportunities based on their financial goals. For instance, it might recommend specific mutual funds or ETFs after analyzing a user’s risk profile.
Healthcare Assistant
A healthcare-focused agent can assist patients in booking doctor appointments, reminding them of medication schedules, or answering basic health queries. For example, an agent could help users monitor chronic conditions by analyzing data from wearable devices and providing health insights.
Learning Companion
In education, an AI agent can act as a tutor by guiding learners through personalized study plans. It might assist users by explaining difficult concepts, suggesting additional resources, or even creating practice quizzes to reinforce learning.
Project Management Assistant
An agent integrated into a project management tool can help organize tasks, set deadlines, and automate meeting scheduling. For example, it could analyze team progress, identify bottlenecks, and suggest solutions to improve productivity.
Creative Partner
AI agents can act as a co-creator in artistic domains. For example, in content creation, an agent could assist with generating ideas, writing scripts, or creating graphic designs. It might analyze trends to suggest creative formats that resonate with specific audiences.
Smart Home Manager
In smart home ecosystems, an AI agent can automate and optimize home operations. For example, it could adjust lighting, control thermostats, and even recommend energy-saving tips by learning the homeowner’s preferences and behavior.
Cybersecurity Assistant
An AI agent can monitor network activity in real time, detect anomalies, and respond to potential threats. It might act proactively by blocking suspicious IPs or notifying administrators of critical vulnerabilities.
These examples illustrate how AI agents can adapt to various scenarios, enhancing efficiency, convenience, and user experiences across industries.
Multi-agent platforms have been developed to manage such complex AI workflows, and CrewAI is one of them. In this article, I will build a workflow with CrewAI, expose it through FastAPI, and, as a bonus, add a background-task mechanism to support concurrent requests.
2. Setting the Stage
Let’s create a project folder (you can name it whatever you like); I’ll call it app.
mkdir -p app
Create the virtual environment. I assume you already have Python installed; for reference, I used Python 3.12, because I ran into problems with the CrewAI dependencies on Python 3.13.
python -m venv .venv
Activate the virtual environment and then install the dependencies. Before that, create a requirements.txt file with the following contents:
crewai
fastapi
uvicorn
python-dotenv
pydantic
celery
requests
Then run the commands below. They activate the virtual environment and install the necessary dependencies for the project.
source .venv/bin/activate
pip install -r requirements.txt
3. Understanding CrewAI
crewAI is an open source multiagent orchestration framework created by João Moura. This Python-based framework leverages artificial intelligence (AI) collaboration by orchestrating role-playing autonomous AI agents that work together as a cohesive assembly or “crew” to complete tasks. The goal of crewAI is to provide a strong framework to automate multiagent workflows.
I won’t dive deep into how CrewAI works, because this article is focused on building a FastAPI service backed by CrewAI. For more information about how CrewAI works, you can check the documentation.
CrewAI’s conceptual framework centers on the role and collaboration of AI agents in accomplishing tasks to achieve specific outcomes. Here’s a detailed breakdown of the key components, followed by a minimal code sketch that ties them together:
1. AI Agents
- A crew is composed of multiple AI agents.
- These agents work collaboratively and can delegate tasks or ask questions to one another.
2. Tools
- A specific part highlights tools, which agents can use to perform their tasks. Tools are external utilities or functions that augment the agents’ abilities.
3. Process
- Processes define how the AI agents will collaborate.
- This includes:
- How tasks are assigned.
- How agents interact with one another.
- How agents execute their work.
4. Tasks
- Tasks represent individual actions or responsibilities that agents need to handle.
- Tasks can:
- Override an agent’s default tools by specifying which tools to use.
- Be assigned to specific agents.
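To make these concepts concrete, here is a minimal sketch of how an agent, a tool, a task, and a process come together in a crew. The role, goal, backstory, and topic values are placeholders of my own, and the SerperDevTool example assumes the optional crewai-tools package is installed; the real configuration used in this article comes later, in the repository’s UrlInsightBot class.
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool  # optional crewai-tools package, assumed installed

# Agent: an LLM-powered worker with a role, a goal, and (optionally) tools
researcher = Agent(
    role="Senior Researcher",
    goal="Collect up-to-date information about {topic}",
    backstory="A diligent analyst who double-checks sources.",
    tools=[SerperDevTool()],  # Tools: external utilities that augment the agent
)

# Task: a unit of work with an expected output, assigned to a specific agent
research_task = Task(
    description="Research the latest developments about {topic}.",
    expected_output="A short bullet-point summary.",
    agent=researcher,
)

# Process: how the crew executes its tasks (sequentially here)
crew = Crew(agents=[researcher], tasks=[research_task], process=Process.sequential)

result = crew.kickoff(inputs={"topic": "AI LLMs"})
print(result)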
4. Set Up FastAPI with CrewAI
Before jumping into the demo, I’d like to mention that I have prepared a repository for you; feel free to use it. You can also open an issue for anything you’d like me to fix.
- In this repository, I created an analyzer.py script and added my endpoints there; from these endpoints you can kick off CrewAI as a background task.
from fastapi import APIRouter, HTTPException, BackgroundTasks
from app.models.models import TopicRequest, TaskResponse
from app.services.services import BotService

router = APIRouter()

@router.post("/analyze", response_model=TaskResponse)
async def analyze_topic(request: TopicRequest, background_tasks: BackgroundTasks):
    task_id = BotService.create_task(request.topic)
    background_tasks.add_task(BotService.process_task, task_id, request.topic)
    return BotService.get_task_status(task_id)

@router.get("/task/{task_id}", response_model=TaskResponse)
async def get_task_status(task_id: str):
    task = BotService.get_task_status(task_id)
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")
    return task
This code snippet defines a FastAPI router with endpoints for creating and managing background tasks related to topic analysis.
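The TopicRequest, TaskResponse, and TaskStatus models imported above live in app/models/models.py. Their exact definitions are not shown in this article, but based on how they are used, a minimal sketch could look like the following (the enum string values and the optional fields are my assumptions):
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel

class TaskStatus(str, Enum):
    # Status values are assumed; the repository may use different strings
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"

class TopicRequest(BaseModel):
    topic: str

class TaskResponse(BaseModel):
    task_id: str
    status: TaskStatus
    result: Optional[Any] = None  # filled in once the crew finishes
    error: Optional[str] = None   # filled in if the task fails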
- The BotService class in services.py manages the lifecycle of tasks for analyzing topics using UrlInsightBot, including task creation, asynchronous processing, status updates, and logging.
import asyncio
import uuid
import logging
from typing import Dict

from app.models.models import TaskStatus, TaskResponse
from app.crew.crew import UrlInsightBot

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class BotService:
    _tasks: Dict[str, TaskResponse] = {}

    @classmethod
    def create_task(cls, topic: str) -> str:
        task_id = str(uuid.uuid4())
        cls._tasks[task_id] = TaskResponse(
            task_id=task_id,
            status=TaskStatus.PENDING
        )
        logger.info(f"Task {task_id} created with status PENDING for topic: {topic}")
        return task_id

    @classmethod
    async def process_task(cls, task_id: str, topic: str):
        try:
            cls._tasks[task_id].status = TaskStatus.PROCESSING
            logger.info(f"Task {task_id} status changed to PROCESSING")
            bot = UrlInsightBot()
            result = await bot.crew().kickoff_async(inputs={'topic': topic})
            cls._tasks[task_id].status = TaskStatus.COMPLETED
            cls._tasks[task_id].result = result
            logger.info(f"Task {task_id} completed successfully with status {cls._tasks[task_id].status}")
        except Exception as e:
            cls._tasks[task_id].status = TaskStatus.FAILED
            cls._tasks[task_id].error = str(e)
            logger.error(f"Task {task_id} failed with error: {e}")

    @classmethod
    async def process_task_sleep(cls, task_id: str, topic: str):
        try:
            cls._tasks[task_id].status = TaskStatus.PROCESSING
            logger.info(f"Task {task_id} status changed to PROCESSING")
            # Simulate processing
            await asyncio.sleep(5)  # Simulate a long-running task
            cls._tasks[task_id].status = TaskStatus.COMPLETED
            cls._tasks[task_id].result = f"Processed topic: {topic}"
            logger.info(f"Task {task_id} completed successfully with status {cls._tasks[task_id].status}")
        except Exception as e:
            cls._tasks[task_id].status = TaskStatus.FAILED
            cls._tasks[task_id].error = str(e)

    @classmethod
    def get_task_status(cls, task_id: str) -> TaskResponse:
        return cls._tasks.get(task_id)
- This code defines a UrlInsightBot crew class using the CrewAI framework, configuring it with YAML files for agents and tasks, and setting up two agents, researcher and reporting_analyst, with their respective configurations.
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
import logging

# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

# Suppress logs from LiteLLM and httpx
logging.getLogger("LiteLLM").setLevel(logging.WARNING)

@CrewBase
class UrlInsightBot():
    """UrlInsightBot crew"""

    # Learn more about YAML configuration files here:
    # Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
    # Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    # If you would like to add tools to your agents, you can learn more about it here:
    # https://docs.crewai.com/concepts/agents#agent-tools
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
        )

    # To learn more about structured task outputs,
    # task dependencies, and task callbacks, check out the documentation:
    # https://docs.crewai.com/concepts/tasks#overview-of-a-task
    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the UrlInsightBot crew"""
        # To learn how to add knowledge sources to your crew, check out the documentation:
        # https://docs.crewai.com/concepts/knowledge#what-is-knowledge
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,    # Automatically created by the @task decorator
            process=Process.sequential,
        )
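The class above loads its agent and task definitions from config/agents.yaml and config/tasks.yaml. Those files are not reproduced in this article, but following CrewAI’s standard YAML layout they would look roughly like this (every role, goal, backstory, description, and expected_output text below is a placeholder of my own, not the repository’s content):
# config/agents.yaml (illustrative placeholders)
researcher:
  role: >
    {topic} Senior Researcher
  goal: >
    Uncover the latest developments in {topic}
  backstory: >
    A seasoned researcher known for finding reliable, up-to-date information.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Turn research findings about {topic} into a clear report
  backstory: >
    A meticulous analyst who writes well-structured reports.

# config/tasks.yaml (illustrative placeholders)
research_task:
  description: >
    Research the most relevant and recent information about {topic}.
  expected_output: >
    A bullet-point list of the most important findings about {topic}.
  agent: researcher

reporting_task:
  description: >
    Expand the research findings about {topic} into a detailed report.
  expected_output: >
    A full markdown report covering each finding.
  agent: reporting_analyst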
After you have cloned the repository and set up the project, you can run it with the commands below and make a call with curl.
cd ./crewai/url_insight_api
uvicorn app.main:app --reload --port 8000 --log-config config/log_config.yaml
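The uvicorn target app.main:app implies a small entry module that creates the FastAPI application and mounts the router under the /api/v1 prefix used in the curl calls below. That file is not shown in this article; here is a minimal sketch under that assumption (the import path for the router is a guess, adjust it to wherever analyzer.py lives in the repository):
from fastapi import FastAPI
from app.api.analyzer import router  # import path assumed, not taken from the repository

app = FastAPI(title="URL Insight API")

# Mount the analysis endpoints under /api/v1 to match the curl examples
app.include_router(router, prefix="/api/v1")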
Test the server
# Start analysis
curl -X POST http://localhost:8000/api/v1/analyze \
  -H "Content-Type: application/json" \
  -d '{"topic": "AI LLMs"}'

# Check task status (replace <task_id> with actual ID from previous response)
curl http://localhost:8000/api/v1/task/<task_id>
You can send several requests at the same time and you’ll see that they are processed concurrently, as in the sketch below.
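For example, here is one way to fire several analysis requests at once using the requests library that is already in requirements.txt. This is a hypothetical test script, not part of the repository; only the endpoint URL is taken from the curl examples above.
from concurrent.futures import ThreadPoolExecutor
import requests

API_URL = "http://localhost:8000/api/v1/analyze"

def start_analysis(topic: str) -> dict:
    # Each call returns immediately with a task_id while the crew runs in the background
    response = requests.post(API_URL, json={"topic": topic})
    response.raise_for_status()
    return response.json()

topics = ["AI LLMs", "Vector databases", "Multi-agent systems"]

# Fire the requests concurrently; each one gets its own background task on the server
with ThreadPoolExecutor(max_workers=len(topics)) as pool:
    for task in pool.map(start_analysis, topics):
        print(task["task_id"], task["status"])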
Conclusion
Integrating CrewAI with FastAPI showcases the power of combining AI-driven agent collaboration with a robust Python web framework to create efficient, scalable, and intelligent applications. By leveraging CrewAI’s capabilities, developers can seamlessly manage processes, delegate tasks to AI agents, and achieve optimized outcomes. The use of FastAPI ensures that the system is not only fast and reliable but also highly extensible, making it suitable for real-world applications like automating operations, providing co-pilot assistance, or streamlining complex workflows.
As AI agents continue to evolve, frameworks like CrewAI will become essential tools in building innovative systems that mimic human collaboration and decision-making. With the demonstrated approach, developers now have a blueprint to harness this potential, delivering smarter applications that drive productivity and efficiency to new heights.
🔥 Liked this article? Don’t forget to clap, follow, and share it with your friends! Your support helps us create more content like this. If you want to read more articles like this, consider subscribing here.
🌟 Support us on Ko-Fi: If you found this article helpful, consider buying us a coffee on Ko-Fi. Your support means the world to us and helps keep the content coming!
Read More
If you enjoyed this article, you might find these equally insightful:
You’ll be a hero of database webhooks when you read this.
2. Unveiling Spring Security’s One-Time Token: The Ultimate Solution for Stateless Authentication
Explore the Spring Security One Time Tokens and its power and difference with traditional token approach.
3. Monorepo Setup Hacks That Will Change the Way You Develop Forever! 💥
Transform Your Development Workflow with These Easy Monorepo Hacks — You Won’t Believe How Much Time You’ll Save!