Building a Chatbot With Free Gemini Pro API

Aman Kumar
6 min read · Apr 23, 2024


Introduction

Chatbots have become increasingly popular for customer service, answering queries, and engaging users across platforms like websites, messaging apps, and social media. If you want to offer a chatbot service, you typically collect user question data and train a model from scratch. However, powerful large language models (LLMs) like Google’s Gemini Pro can simplify this process.

In this tutorial, we’ll learn how to build a question-and-answer (Q&A) chatbot using the Gemini Pro API — Google’s cloud-based natural language processing platform for building conversational AI applications.

You can find the complete code here.

What is Gemini?

Gemini is an LLM developed by Google focused on advancing language understanding and processing. It is part of Google’s machine learning tools, designed for complex natural language tasks like understanding, translation, generation, and more. Gemini technology enhances user interactions across various Google products and services.

Introduced in December 2023, Gemini is a family of natively multimodal models built on the transformer architecture. Key points about Gemini:
- Highly flexible, running efficiently on everything from data centres to mobile devices
- State-of-the-art capabilities for scaling AI development and deployment
- Integrated into Google products such as Search and Gmail
- Available in three variants: Gemini Ultra, Gemini Pro, and Gemini Nano

What is Gemini Pro?

Gemini Pro is the mid-sized variant of the Gemini family, balancing capability and efficiency. If Gemini is the clever language wizard, Gemini Pro is the wizard tuned for a broad range of everyday tasks, offering stronger capabilities than Gemini Nano while remaining lighter than Gemini Ultra.

Advantages of Using Gemini Pro API

- Integrate advanced language models into applications
- Create dynamic, context-aware chatbots with intelligent responses
- Leverage state-of-the-art natural language processing capabilities

Steps to Build a Q&A Chatbot with Gemini Pro API

Step 1: Set Up the Gemini Pro API Key

  • Go to Google AI Studio and sign in with your Google account
  • Click Get API Key
  • Create the API key
  • Copy the key and keep it somewhere safe

Step 2: Create the Project Directory and Virtual Environment

  • Create the project directory: open a terminal and run
mkdir chatbot
cd chatbot
  • Create a virtual environment (with conda)
conda create -p ./venv python=3.11 -y

## If using python venv
python -m venv .venv
  • Activate the environment (conda)
conda activate ./venv

## Else
source .venv/bin/activate

Step 3: Create a .env File to Store the API Key

  • Create the .env file: open a text editor, create a new file, and save it as .env. Make sure it has no extension (like .txt) and that the name starts with a dot.
  • Add environment variables: define your variables in the file. For example:
GOOGLE_API_KEY=your_google_api_key
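Under the hood, python-dotenv essentially reads these KEY=VALUE lines into environment variables. A minimal stdlib-only sketch of the idea (a toy illustration, not the real library):

```python
import os

def load_env_file(path=".env"):
    """Toy version of python-dotenv: read KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Like load_dotenv's default, don't overwrite existing variables
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env_file()  # afterwards, os.getenv("GOOGLE_API_KEY") returns your key
```

The real library handles more edge cases (quoting, export prefixes, variable expansion), which is why the tutorial uses python-dotenv rather than this sketch.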

Step 4: Install the Required Python Packages

  • Create a requirements.txt file
touch requirements.txt

Add the packages below and save the file:

streamlit
google-generativeai
python-dotenv
  • Install the packages
pip install -r requirements.txt

Step 5: Write the Chatbot Code Using Streamlit and google-generativeai

Create an app.py file:

touch app.py

Paste the code below (we will walk through what it does next):

## loading all the environment variables
from dotenv import load_dotenv
load_dotenv()

import streamlit as st
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

## function to load the Gemini Pro model and get responses
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

def get_gemini_response(question):
    response = chat.send_message(question, stream=True)
    return response

## initialize our Streamlit app
st.set_page_config(page_title="Q&A Demo")
st.header("Gemini LLM Application")

# Initialize session state for chat history if it doesn't exist
if 'chat_history' not in st.session_state:
    st.session_state['chat_history'] = []

input = st.text_input("Input: ", key="input")
submit = st.button("Ask the question")

if submit and input:
    response = get_gemini_response(input)
    # Add the user query and response to the session state chat history
    st.session_state['chat_history'].append(("You", input))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state['chat_history'].append(("Bot", chunk.text))

st.subheader("The Chat History is")
for role, text in st.session_state['chat_history']:
    st.write(f"{role}: {text}")
  • Load Environment Variables: This part of the code uses the dotenv library to load environment variables from a file named .env in the current directory. This is a common practice to keep sensitive information, such as API keys, separate from the code.
from dotenv import load_dotenv
load_dotenv() ## loading all the environment variables
  • Import Libraries: Here, you import the necessary libraries. streamlit is used for creating interactive web applications, os is a standard library for interacting with the operating system, and google.generativeai is the module providing the Gemini AI functionality.
import streamlit as st
import os
import google.generativeai as genai
  • Configure Gemini AI: This line configures the Gemini AI by setting its API key. The API key is retrieved from the environment variables using os.getenv.
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
  • Initialize Gemini Pro Model: You initialize the Gemini Pro model and start a chat session. Passing history=[] begins the conversation with an empty history, and the chat object keeps track of the exchange from then on.
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])
  • Define Function for Getting Responses: This function sends a user’s question to the Gemini Pro model and returns the response. With stream=True, the response arrives as an iterable of chunks rather than one block of text.
def get_gemini_response(question):
    response = chat.send_message(question, stream=True)
    return response
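Because stream=True yields chunks, consumers iterate over the response and can stitch the pieces together as they arrive. A small stand-alone illustration of that pattern, using stand-in chunk objects in place of a live API response:

```python
from types import SimpleNamespace

def consume_stream(chunks):
    """Join the .text of each streamed chunk into one full reply."""
    return "".join(chunk.text for chunk in chunks)

# Stand-in objects mimicking the shape of Gemini's streamed chunks
fake_stream = [SimpleNamespace(text="Hello, "), SimpleNamespace(text="world!")]
print(consume_stream(fake_stream))  # Hello, world!
```

In the Streamlit app below, the same iteration happens with st.write, so the answer appears incrementally instead of after the whole generation finishes.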
  • Initialize Streamlit App: These lines initialize a Streamlit app, setting the page title and displaying a header.
st.set_page_config(page_title="Q&A Demo")
st.header("Gemini LLM Application")
  • Initialize Session State for Chat History: This checks if the chat history exists in the Streamlit session state. If not, it initializes an empty list to store the chat history.
if 'chat_history' not in st.session_state:
    st.session_state['chat_history'] = []
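st.session_state behaves like a dictionary that survives Streamlit’s script reruns, which is why the guard above is needed. The same initialize-once pattern, sketched with a plain dict standing in for the session state:

```python
# A plain dict standing in for st.session_state across two "reruns"
session_state = {}

def ensure_history(state):
    # Only create the list if it isn't there yet, so reruns keep old messages
    if 'chat_history' not in state:
        state['chat_history'] = []

ensure_history(session_state)           # first run: creates the empty list
session_state['chat_history'].append(("You", "Hi"))
ensure_history(session_state)           # "rerun": must not wipe the history
print(session_state['chat_history'])    # [('You', 'Hi')]
```

Without the membership check, every rerun would reset the list and the chat history would vanish after each question.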
  • Take User Input and Display Responses: Here, you create a text input field for the user to input a question. When the user clicks the “Ask the question” button, it triggers the get_gemini_response function, displaying the responses and updating the chat history.
input = st.text_input("Input: ", key="input")
submit = st.button("Ask the question")
if submit and input:
    response = get_gemini_response(input)
    st.session_state['chat_history'].append(("You", input))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state['chat_history'].append(("Bot", chunk.text))
  • Display Chat History: Finally, the code displays the chat history, showing the interactions between the user (“You”) and the chatbot (“Bot”).
st.subheader("The Chat History is")
for role, text in st.session_state['chat_history']:
    st.write(f"{role}: {text}")


Step 6: Run the Chatbot Application

  • Run the Streamlit application using the following command:
streamlit run app.py
(Screenshots: running the Streamlit app, the Gemini demo app’s initial state, and the app running with a response.)

You can find the completed code here.

Benefits of a Conversational Q&A Chatbot

- 24/7 availability for user queries
- Efficient problem resolution and improved user experience
- Cost savings by reducing human intervention
- Data collection and analysis for insights
- Adaptability and continuous improvement

Conclusion

This guide covered building an intelligent Q&A chatbot using Google’s powerful Gemini Pro API. You learned to set up a free API account, create a conversational chatbot, and organize your project with a virtual environment. Now, you can bring your chatbot ideas to life and provide engaging conversational experiences.

Next Steps

We have created a straightforward version of a chatbot; there are plenty of areas we can work on to improve this app:

  • Customize the chatbot’s persona
  • Connect to datastores for enhanced knowledge or domain-specific information retrieval for a specialized chatbot
  • Experiment with LLMs other than gemini-pro
  • Make the chatbot accept multimodal inputs like files and images
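For the first idea, one lightweight approach is to seed the chat history with a persona exchange before any user questions. A hedged sketch — the {"role": ..., "parts": [...]} entries follow google-generativeai’s chat history format, while the persona text and helper name are illustrative:

```python
def build_seed_history(persona_text):
    """Build an opening exchange that fixes the bot's persona."""
    return [
        {"role": "user", "parts": [persona_text]},
        {"role": "model", "parts": ["Understood. I'll answer in that persona."]},
    ]

persona = "You are a friendly support agent. Keep answers short and polite."
history = build_seed_history(persona)
# Pass it when starting the chat instead of an empty list, e.g.:
# chat = model.start_chat(history=build_seed_history(persona))
print([turn["role"] for turn in history])  # ['user', 'model']
```

Because the seeded turns live in the chat history, the model treats them as prior conversation and tends to stay in character for subsequent questions.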

Follow me on Medium for more. Let’s connect on Twitter. More About me here.
