How to build AI agents using LangChain Framework
LangChain is an open-source framework for building reliable agents and AI applications powered by large language models (LLMs). It is used for agent engineering and integrates with model providers such as OpenAI, Anthropic, Google, and more.
LangChain provides a pre-built agent architecture and model integrations that help you get started quickly with agentic AI applications and incorporate LLMs seamlessly.
Features
- Visibility into each step the agent performs, with the ability to customize agents to your requirements
- An iterative lifecycle: build, test, deploy, learn, and repeat with workflows
- Deployment on scalable infrastructure designed for long-running agent workloads
- No vendor lock-in: integrate with different models, tools, and databases without rewriting code
Agent Engineering Stack
- Open-source frameworks: LangChain and LangGraph
- Agent engineering platform: LangSmith
- LangChain: an open-source framework that helps you ship agentic AI applications quickly with less code, using a pre-built agent architecture and model integrations.
- LangGraph: a low-level agent orchestration framework and runtime for building advanced, heavily customized agent workflows, giving you full control over low-level primitives.
Agentic AI Use Cases
Some common agentic AI use cases are listed below.
- Copilots: natively integrated into your application, they assist with tasks such as code generation or code review, enhancing your productivity.
- Enterprise GPT: provides employees with access to information, tools, and the organization's internal data sources to improve productivity.
- Customer support: these applications assist end users with problems or operational questions and guide them appropriately, improving the speed and efficiency of the support team.
- Research: helps analyze and synthesize datasets and research documents, providing useful insights and summaries for end users.
Now that we have covered the LangChain framework and the rest of the agent engineering stack, let's build our first agent using Python and LangChain.
Procedure
Step 1: Install LangChain and the OpenAI integration package
admin@fedser:langchain$ pip install -U langchain
admin@fedser:langchain$ pip install -U langchain-openai
Successfully installed annotated-types-0.7.0 anyio-4.12.0 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 jiter-0.12.0 langchain-core-1.1.0 langchain-openai-1.1.0 langsmith-0.4.53 openai-2.8.1 orjson-3.11.4 packaging-25.0 pydantic-2.12.5 pydantic-core-2.41.5 requests-toolbelt-1.0.0 sniffio-1.3.1 tiktoken-0.12.0 tqdm-4.67.1 typing-extensions-4.15.0 typing-inspection-0.4.2 uuid-utils-0.12.0 zstandard-0.25.0 langchain-1.1.0 langgraph-1.0.4 langgraph-checkpoint-3.0.1 langgraph-prebuilt-1.0.5 langgraph-sdk-0.2.12 ormsgpack-1.12.0 xxhash-3.6.0
Step 2: Create an OpenAI account
For this demo we will use OpenAI as the LLM provider. You need to register and generate an API key by following OpenAI's "Developer quickstart" guide.
Step 3: Set the OpenAI API key as an environment variable
Once you have your OpenAI API key, set it as an environment variable for later use. Here I am exporting it in ~/.bashrc so that it is available throughout my login session.
admin@fedser:~$ cat ~/.bashrc | grep -i openai
## OpenAI key
export OPENAI_API_KEY="your_openai_api_key"
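LangChain's OpenAI integration reads OPENAI_API_KEY from the environment automatically, but it is useful to fail fast with a clear message if the variable is missing. A small helper like the following can do that (the helper name is my own, not part of LangChain):

```python
import os

def require_api_key() -> str:
    """Return the OpenAI API key from the environment, failing fast if it is missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it in ~/.bashrc and re-source the file."
        )
    return key
```

Calling require_api_key() at the top of your script turns a confusing authentication error deep inside the agent into an immediate, actionable message.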
Step 4: Create the assistant agent
Here we will build a very basic agent that takes a question as input and provides an answer by leveraging an OpenAI LLM.
The model used in this demo is "gpt-3.5-turbo", as it is cost-effective for learning purposes.
- GPT-3.5-Turbo LLM Model: GPT-3.5-Turbo is a faster, cheaper, and more conversational version of OpenAI’s GPT-3.5 large language model (LLM), optimized for chat interfaces using the Chat Completions API. It powers the free version of ChatGPT and is widely used for applications, content generation, and customer support.
- System Prompt: A system prompt is a set of instructions that guides an LLM’s behavior, defining its role, persona, and the rules it should follow.
- LLM Tools: An LLM tool enables a large language model to interact with external functions, APIs, or systems, extending its capabilities beyond text generation to perform real-world actions such as fetching live data, booking events, or sending messages. Instead of just answering questions, LLMs can use developer-defined "tools" to execute specific tasks.
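Conceptually, tool calling boils down to the runtime looking up a developer-defined function by the name the model requested and invoking it with the arguments the model produced. The following is a plain-Python sketch of that dispatch step (not LangChain's actual internals), using the same get_weather tool and the same tool-call shape you will see in the agent's output later:

```python
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# Registry mapping tool names to functions, as an agent runtime would hold.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Invoke the tool the model asked for, with the arguments it supplied."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

print(dispatch({"name": "get_weather", "args": {"city": "San Francisco"}}))
# -> It's always sunny in San Francisco!
```

The agent then feeds the returned string back to the model as a tool message, which is how the model turns the tool result into its final answer.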
admin@fedser:langchain$ cat weather_agent.py
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="gpt-3.5-turbo",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)
print(agent)

# Run the agent
response = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
print(response)
Step 5: Run the assistant agent
Now let's run the assistant agent as shown below and examine the response it generates for the user's query.
admin@fedser:langchain$ python weather_agent.py
<langgraph.graph.state.CompiledStateGraph object at 0x7fb3da2e4c50>
{'messages': [HumanMessage(content='what is the weather in sf', additional_kwargs={}, response_metadata={}, id='2fcd6b3d-2d89-413b-bdf3-3dfdf2b0e969'), AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 57, 'total_tokens': 72, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-CiZojTohFAlV4fqJ8W7tQ3kIZtJTs', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='lc_run--cd0739d9-97f5-43bd-a0ae-0e9464faaf12-0', tool_calls=[{'name': 'get_weather', 'args': {'city': 'San Francisco'}, 'id': 'call_4NUddeYiFqCySWajfd3SQD6f', 'type': 'tool_call'}], usage_metadata={'input_tokens': 57, 'output_tokens': 15, 'total_tokens': 72, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}), ToolMessage(content="It's always sunny in San Francisco!", name='get_weather', id='758fd26c-adf5-4148-bcd6-93f01f1cf089', tool_call_id='call_4NUddeYiFqCySWajfd3SQD6f'), AIMessage(content='The weather in San Francisco is always sunny!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 88, 'total_tokens': 98, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-CiZokbLKx4Ey3xkcavTHmIssISP5V', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='lc_run--e12eff39-a5ca-492b-ad61-69ddfb4023db-0', usage_metadata={'input_tokens': 88, 
'output_tokens': 10, 'total_tokens': 98, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
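The raw state printed above is verbose; in practice you usually want only the final AI message. Since `response` is a dict whose "messages" list ends with the final AI message, the answer is the .content of the last entry. Here is a self-contained sketch of that extraction, using a stand-in class in place of LangChain's message objects so it runs without an API call:

```python
class Message:
    """Stand-in for a LangChain message object, which exposes a .content attribute."""
    def __init__(self, content: str):
        self.content = content

# Shape of the value returned by agent.invoke(...), reduced to what matters here.
response = {
    "messages": [
        Message("what is the weather in sf"),
        Message("The weather in San Francisco is always sunny!"),
    ]
}

final_answer = response["messages"][-1].content
print(final_answer)
# -> The weather in San Francisco is always sunny!
```

In the real script, replacing print(response) with print(response["messages"][-1].content) prints just the answer instead of the full message history.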
Hope you enjoyed reading this article. Thank you.