In this tutorial, we learn how to harness the power of Dappier AI, a suite of real-time search and recommendation tools, to enhance our conversational applications. By combining Dappier's cutting-edge DappierRealTimeSearchTool with its DappierAIRecommendationTool, we can query the latest information from across the web and surface personalized article suggestions from custom data models. We guide you step by step through setting up our Google Colab environment, installing dependencies, securely loading API keys, and initializing each Dappier module. We then integrate these tools with an OpenAI chat model (e.g., GPT-3.5-turbo), construct a composable prompt chain, and execute end-to-end queries, all within nine concise notebook cells. Whether we need up-to-the-minute news or AI-driven content recommendations, this tutorial provides a flexible framework for building intelligent, data-driven chat experiences.
!pip install -qU langchain-dappier langchain langchain-openai langchain-community langchain-core openai
We bootstrap our Colab environment by installing the core LangChain libraries, including the Dappier extensions and community integrations, along with the official OpenAI client. With these packages in place, we have seamless access to Dappier's real-time search and recommendation tools, the latest LangChain runtimes, and the OpenAI API, all in a single environment.
import os
from getpass import getpass
os.environ["DAPPIER_API_KEY"] = getpass("Enter your Dappier API key: ")
os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
We securely capture our Dappier and OpenAI API credentials at runtime, avoiding hard-coding sensitive keys in our notebook. Using getpass ensures our input remains hidden, and setting the keys as environment variables makes them available to all subsequent cells without exposing them in logs.
from langchain_dappier import DappierRealTimeSearchTool
search_tool = DappierRealTimeSearchTool()
print("Real-time search tool ready:", search_tool)
We import Dappier's real-time search module and create an instance of DappierRealTimeSearchTool, enabling our notebook to execute live web queries. The print statement confirms that the tool initialized successfully and is ready to handle search requests.
from langchain_dappier import DappierAIRecommendationTool
recommendation_tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
print("Recommendation tool ready:", recommendation_tool)
We set up Dappier's AI-powered recommendation engine by specifying our custom data model, the number of similar articles to retrieve, and the reference domain for context. The DappierAIRecommendationTool instance will use the "most_recent" algorithm to pull the top-k similar articles (here, three), with up to two drawn from the specified reference domain, ready for query-driven content suggestions.
from langchain.chat_models import init_chat_model
llm = init_chat_model(
    model="gpt-3.5-turbo",
    model_provider="openai",
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_tool])
print("✅ llm_with_tools ready")
We create an OpenAI chat model instance using GPT-3.5-turbo with a temperature of 0 to ensure consistent responses, then bind the previously initialized search tool so the LLM can invoke real-time searches. The final print statement confirms that our LLM is ready to call Dappier's tools within our conversational flow.
import datetime
from langchain_core.prompts import ChatPromptTemplate
today = datetime.datetime.today().strftime("%Y-%m-%d")
prompt = ChatPromptTemplate([
    ("system", f"You are a helpful assistant. Today is {today}."),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])
llm_chain = prompt | llm_with_tools
print("✅ llm_chain built")
We construct the conversational "chain" by first building a ChatPromptTemplate that injects the current date into a system prompt and defines slots for user input and prior messages. By piping the template (|) into our llm_with_tools, we create an llm_chain that automatically formats the prompt, calls the LLM (with real-time search capability), and handles responses in a seamless workflow. The final print confirms that the chain is ready to drive end-to-end interactions.
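To see what the | operator is doing conceptually, here is a minimal, self-contained sketch of pipe composition. This is an illustrative stand-in, not LangChain's actual Runnable implementation; the Stage class and the two toy stages are hypothetical.

```python
# Toy illustration of "prompt | llm" composition: each stage exposes
# .invoke(), and "|" wires one stage's output into the next stage's input.
class Stage:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: this stage's output becomes the next stage's input.
        return Stage(lambda value: other.invoke(self.invoke(value)))

# Hypothetical stand-ins for the prompt template and the tool-bound LLM.
format_prompt = Stage(lambda d: f"System: today is {d['today']}. Human: {d['user_input']}")
fake_llm = Stage(lambda text: f"LLM saw -> {text}")

chain = format_prompt | fake_llm
print(chain.invoke({"today": "2024-01-01", "user_input": "hello"}))
```

LangChain's real Runnable objects add batching, streaming, and config propagation on top of this basic idea, but the data flow of `prompt | llm_with_tools` is the same.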
from langchain_core.runnables import RunnableConfig, chain
@chain
def tool_chain(user_input: str, config: RunnableConfig):
    # First pass: let the LLM decide whether to request tool calls.
    ai_msg = llm_chain.invoke({"user_input": user_input}, config=config)
    # Execute any requested searches.
    tool_msgs = search_tool.batch(ai_msg.tool_calls, config=config)
    # Second pass: feed the tool results back for a final answer.
    return llm_chain.invoke(
        {"user_input": user_input, "messages": [ai_msg, *tool_msgs]},
        config=config,
    )
print("✅ tool_chain defined")
We define an end-to-end tool_chain that first sends our prompt to the LLM (collecting any requested tool calls), then executes those calls via search_tool.batch, and finally feeds both the AI's original message and the tool results back to the LLM for a coherent response. The @chain decorator turns this into a single runnable pipeline, so we can simply call tool_chain.invoke(...) to handle both reasoning and searching in one step.
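The two-pass pattern above can be illustrated with plain Python, independent of any API. The fake_llm and fake_search functions below are hypothetical stand-ins (not Dappier's or OpenAI's real interfaces); they only mimic the shape of the exchange.

```python
# Sketch of the two-pass tool-calling loop: pass 1 collects tool calls,
# the tools are executed, pass 2 composes the final answer from results.
def fake_llm(user_input, messages=()):
    # Hypothetical model: with no tool results yet, it requests a search;
    # once results are present, it answers using them.
    if not messages:
        return {"tool_calls": [{"name": "search", "args": {"query": user_input}}]}
    return {"content": f"Answer based on: {messages[-1]}"}

def fake_search(args):
    # Hypothetical tool executor standing in for search_tool.batch.
    return f"results for '{args['query']}'"

def tool_chain_sketch(user_input):
    first = fake_llm(user_input)                      # pass 1: collect tool calls
    tool_msgs = [fake_search(c["args"]) for c in first["tool_calls"]]
    final = fake_llm(user_input, messages=tool_msgs)  # pass 2: final answer
    return final["content"]

print(tool_chain_sketch("Who won the last Nobel Prize?"))
```

The real tool_chain follows the same choreography, with llm_chain.invoke and search_tool.batch playing the two roles.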
res = search_tool.invoke({"query": "What happened at the last Wrestlemania"})
print("🔍 Search:", res)
We demonstrate a direct query to Dappier's real-time search engine, asking "What happened at the last Wrestlemania", and immediately print the structured result. It shows how easily we can use search_tool.invoke to fetch up-to-the-minute information and inspect the raw response in our notebook.
rec = recommendation_tool.invoke({"query": "latest sports news"})
print("📄 Recommendation:", rec)
out = tool_chain.invoke("Who won the last Nobel Prize?")
print("🤖 Chain output:", out)
Finally, we showcase both our recommendation and full-chain workflows in action. First, we call recommendation_tool.invoke with "latest sports news" to fetch relevant articles from our custom data model, then print the suggestions. Next, we run tool_chain.invoke("Who won the last Nobel Prize?") to perform an end-to-end LLM query combined with real-time search, printing the AI's synthesized answer that integrates live data.
In conclusion, we now have a robust baseline for embedding Dappier AI capabilities into any conversational workflow. We have seen how effortlessly Dappier's real-time search lets our LLM access fresh facts, while the recommendation tool enables us to deliver contextually relevant insights from proprietary data sources. From here, we can customize search parameters (e.g., refining query filters) or fine-tune recommendation settings (e.g., adjusting similarity thresholds and reference domains) to suit our domain.
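As a hedged sketch of such customization, we might construct a second recommender with different settings. The parameter names mirror the constructor used earlier in this tutorial; "semantic" as a search_algorithm value is an assumption and should be verified against Dappier's documentation before use.

```python
# Hypothetical variation on the recommendation tool configured earlier.
# NOTE: the "semantic" algorithm value is an assumption -- check the
# accepted search_algorithm values in Dappier's docs before relying on it.
from langchain_dappier import DappierAIRecommendationTool

semantic_recommender = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",  # same custom data model as above
    similarity_top_k=5,            # widen the candidate pool
    ref="sportsnaut.com",          # keep the same reference domain
    num_articles_ref=3,            # allow more articles from that domain
    search_algorithm="semantic",   # rank by similarity instead of recency
)
```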
Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.
