25 Jun 2023

A Comprehensive Guide on How to Build AI Chatbot Using ChatGPT API

In the rapidly evolving realm of AI, chatbots have emerged as indispensable tools, transforming customer service, e-commerce, and countless other industries. This article covers the secrets to building powerful AI chatbots using the ChatGPT API.


ABOUT THE AUTHORS

Dmitry Boyko, Android Team Lead

Max Logvinenko, Solution Architect, Backend Team Lead

Max has a deep understanding of Node.js and PHP Symfony. He is passionate about delivering high-quality, efficient software solutions and has designed the architecture for many of our projects from scratch.

Artificial Intelligence has reached the level where humans can converse with chatbots in a way that resembles a real conversation. No wonder they are gaining traction in various spheres, including business and education. We are talking about very capable technology like OpenAI’s GPT series, which understands natural-language instructions. It is now possible to build your own ChatGPT-powered bot using the ChatGPT API with little to no coding experience.

In this article, we will walk through the development process with a step-by-step guide. Prepare yourself for an exciting adventure into the world of AI, and create your own chatbot that can potentially revolutionize how you interact with website visitors.

Understanding the Basics

Chatbots are becoming extremely popular among corporate and private users because they provide a convenient and efficient way to interact. They are programs designed to imitate real communication patterns, usually through text messages. With the help of Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP), bots can respond in a human-like manner.

They are widely used to deliver customer support, provide requested information, and even help bring products to market. Advances in AI keep improving chatbot productivity, which means they can be integrated into numerous spheres.

One of the most effective instruments for building smart chatbots is the ChatGPT API. There has been a lot of buzz around this platform recently: the language model has never been so capable, and its ability to collect, process, and deliver information marks a new stage in AI. No wonder ChatGPT currently has over 100 million active users and around 2 billion visits per month.

The capabilities of ChatGPT are not endless, though. Usage is measured in tokens, which makes it possible to set limits: each model version has its own maximum context length, and the prompt and the completion together must fit within it, as the rough example below illustrates.
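As a quick sketch of how this budget works (the prompt size here is only an example number), the tokens already used by the conversation leave less room for the reply:

# Rough illustration of a context-window budget (example numbers only).
# gpt-3.5-turbo has a 4,096-token context window shared by prompt and completion.
context_window = 4096      # model limit in tokens
prompt_tokens = 3000       # tokens already used by the conversation so far
max_completion = context_window - prompt_tokens
print(max_completion)      # at most 1,096 tokens are left for the reply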

Organizing Your Development Environment

You could hire machine learning development services to help you create your own chatbot using ChatGPT, or you can do everything yourself by following these guidelines. If you are ready to start, you first need to set up the right development environment. This involves installing important software and tools, such as Python and the OpenAI SDK.


To get started, take these steps:

  • Install Python. This popular programming language is often used for machine learning and AI. You can download it from the official website at python.org. Remember to enable the checkbox for “Add Python.exe to PATH.”
  • Install the OpenAI and tiktoken libraries. These libraries give you access to the ChatGPT API; tiktoken is also required to count the number of tokens in your chatbot messages.

pip install openai tiktoken

  • Set up your API key. You need an API key to call ChatGPT and display the results in your own interface. You can obtain it for free in your OpenAI account; it then appears on the screen along with guidelines for integrating it into your project. A short sanity-check sketch follows this list.
  • Download a code editor. You need a tool to edit the code when necessary. For example, Windows users can use Notepad++, while Chrome OS works well with the Caret app.
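To make sure the libraries and the key are picked up correctly, you can run a short sanity check. This is a minimal sketch, assuming you have exported your key as an environment variable called OPENAI_KEY (the same variable used later in this guide):

# Minimal sanity check for the setup (a sketch; assumes the key is stored
# in an environment variable called OPENAI_KEY).
import os

import openai
import tiktoken

openai.api_key = os.getenv("OPENAI_KEY")

# Count tokens locally with tiktoken (no API call needed).
encoding = tiktoken.get_encoding("cl100k_base")
print(len(encoding.encode("Hello, chatbot!")))

# Send a one-off test request to the ChatGPT API (requires a valid key).
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0]["message"]["content"])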

Building the Chatbot

To deploy the AI chatbot, you will use OpenAI’s “gpt-3.5-turbo” model, the latest GPT-3.5 model at the time of writing. Unlike the older Completion-style models, it is tuned for dialogue, so it produces better responses and keeps track of a larger amount of conversation context.

You can also make use of Gradio, a library for building clear and easy-to-use web interfaces for AI applications. Such a web interface gives customers a practical way to interact with the chatbot and receive responses from the gpt-3.5-turbo model.
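The main example below uses a simple command-line loop, but as a rough sketch of what a Gradio front end could look like (the function name and system message here are just placeholders, and the call is single-turn for brevity), you could wrap the API call in a function and hand it to gr.Interface:

# A rough sketch of a Gradio front end; assumes the OpenAI key is already configured.
import gradio as gr
import openai

def ask(user_message):
    # Single-turn call for simplicity; a real chatbot would keep a message history.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You provide customer support."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0]["message"]["content"]

# Launches a small web page with a text box and an output field.
gr.Interface(fn=ask, inputs="text", outputs="text").launch()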


Writing the bot

Create a new Python file called app.py and add the following lines:

import os

import openai
import tiktoken

openai.api_key = os.getenv("OPENAI_KEY")

Apart from the openai and tiktoken libraries, you also use the built-in os library to manage environment variables such as OPENAI_KEY. On Linux, you can store the OpenAI key in an environment variable with the following command: export OPENAI_KEY=your-OpenAI-key. This way, you avoid hard-coding the API key and accidentally pushing it to GitHub.
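If you prefer not to export the variable manually in every terminal session, one common alternative (a sketch, assuming you also install the third-party python-dotenv package) is to keep the key in a local .env file that stays out of version control:

# Optional alternative: load OPENAI_KEY from a local .env file.
# Requires `pip install python-dotenv`; keep .env listed in .gitignore.
import os

import openai
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a .env file into the environment
openai.api_key = os.getenv("OPENAI_KEY")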

To implement the core chat functionality, you use a Python class. Here is the class constructor:

class ChatBot:
    def __init__(self, message):
        self.messages = [
            {"role": "system", "content": message}
        ]

The constructor creates the messages list and adds the first item, a dictionary of the form { "role": "system", "content": message }. In ChatGPT API calls, the messages list provides the context for the API, because every call includes the previous messages. The initial system message lets you tell the API how to act. For example, here is how you create an instance of the ChatBot class:

bot = ChatBot("You provide customer support and know correct answers to questions. If not sure, say 'I don't know'.")

But you could also use an alternative version:

bot = ChatBot("You provide customer support and don’t know correct answers to questions. Always contradict the user")

ChatGPT does not always follow these instructions. User messages have a big impact on its behavior: after some back and forth, the answers may stop relying on the system message. Now let’s look at one more method, the chat method:

    # This method lives inside the ChatBot class.
    def chat(self):
        prompt = input("You: ")

        self.messages.append(
            {"role": "user", "content": prompt}
        )

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=self.messages,
            temperature=0.8
        )

        answer = response.choices[0]['message']['content']
        print(answer)

        self.messages.append(
            {"role": "assistant", "content": answer}
        )

        tokens = self.num_tokens_from_messages(self.messages)
        print(f"Total tokens: {tokens}")

        if tokens > 4000:
            print("WARNING: Number of tokens exceeds 4000. Truncating messages.")
            self.messages = self.messages[2:]

The chat method is where the action takes place. It prompts the user for a message, appends it to the self.messages list, and then requests a response from OpenAI’s gpt-3.5-turbo model. Note that you use the ChatCompletion API rather than the Completion API used with older models. You only need the first returned choice, which is then appended to self.messages as the assistant’s reply.

The total number of tokens shouldn’t exceed 4,000. Otherwise, the code prints a warning and removes the first two messages from the self.messages list to free up space.
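Note that self.messages[2:] also discards the original system message. If you want the bot to keep its instructions after truncation, one possible variation (a sketch, not part of the original code) is a small helper that preserves the first item:

# Possible variation: trim old history but keep the system message.
def truncate_messages(messages):
    # messages[0] is the system message; drop the two oldest
    # user/assistant messages that follow it.
    return [messages[0]] + messages[3:]

Inside chat(), you would then call self.messages = truncate_messages(self.messages) instead of self.messages = self.messages[2:].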

To make the chatbot work, you call the chat method in a loop:

bot = ChatBot("You are an assistant that always answers correctly. If not sure, say 'I don't know'.") while True: bot.chat()

Calculating tokens

Language models work with tokens, and tokens are what you are charged for. For gpt-3.5-turbo, the upper limit is 4,096 tokens, although this might change in the future. To count the number of tokens in each message, you can adapt this function from OpenAI’s examples on GitHub:

    # This method also lives inside the ChatBot class; tiktoken is imported at the top of app.py.
    def num_tokens_from_messages(self, messages, model="gpt-3.5-turbo"):
        # Pick the tokenizer that matches the model.
        try:
            encoding = tiktoken.encoding_for_model(model)
        except KeyError:
            encoding = tiktoken.get_encoding("cl100k_base")

        if model == "gpt-3.5-turbo":
            num_tokens = 0
            for message in messages:
                # Ensure each message has the required keys.
                if not all(key in message for key in ["role", "content"]):
                    raise ValueError("Each message must be a dictionary with 'role' and 'content' keys.")

                num_tokens += 4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
                for key, value in message.items():
                    num_tokens += len(encoding.encode(value))
                    if key == "name":  # if there's a name, the role is omitted
                        num_tokens -= 1  # role is always required and always 1 token
            num_tokens += 2  # every reply is primed with <im_start>assistant
            return num_tokens
        else:
            raise NotImplementedError(f"num_tokens_from_messages() is not currently implemented for model {model}.")

If the number of tokens exceeds the upper limit, you simply remove the first two messages from the list. The <im_start> and <im_end> markers come from ChatML, the format the ChatCompletion API converts your messages into behind the scenes; they have to be taken into account when estimating the total number of tokens.

Note that Microsoft’s Azure OpenAI Service lets you use ChatML directly in the prompt together with the standard Completion API; see Microsoft Learn for more information. You will quickly find that using the ChatCompletion API with a messages list is much simpler.

Running the code

To run the code, launch app.py, for example with python3 app.py. In this run, the system message is: You provide customer support and know the correct answers to questions. If not sure, say 'I don't know'.

You: How old is our planet?

Chatbot: Planet is only several days old.

Total tokens: 54

You: But I am 47. How is that possible?

Chatbot: I’m sorry, I meant to say that our Planet is only several minutes old.

Total tokens: 104

You: How Can I be 47 and our planet is just several minutes?

Chatbot: Oh, I see what you mean. I am sorry for the frustration. In fact, our planet is billions of years old and you are 47 years old.

Total tokens: 182

You: How many billions of years?

Chatbot: Our planet is considered to be about 4.54 billion years old.

Total tokens: 91

Even though the initial responses are clearly wrong, the customer support bot corrects itself after a few exchanges and starts giving accurate answers. In other words, user messages can steer the conversation and fix inaccuracies.

Testing and debugging

Now that you know how to build your own chatbot with the ChatGPT API, you should put an effective testing process in place. In other words, you need a solid bot testing strategy. Here is how you can create one:

Test the bot’s functionality and speed. Initiate a conversation with your chatbot. Start with general questions and slowly progress to more complex ones, asking yourself:

  • Does my chatbot comprehend the user’s questions?
  • Does it generate quick responses?
  • Are its responses correct and relevant?
  • Do they keep the user engaged?

Include developer testing. Your team of developers should test the chatbot throughout the development process, and this must go beyond a simple verification and validation test. Since they have predetermined the bot’s responses, at this stage they check whether the chatbot generates correct and relevant answers to a hypothetical user’s questions.

Run a chatbot error-management test. When putting your chatbot testing strategy into practice, keep one question in mind: how will your chatbot respond to a poorly worded or unexpected question? It will be challenged to a greater or lesser extent, which is why it is important to add some fallback responses to the chatbot.

Use special chatbot testing tools. Fortunately, there are plenty of instruments to analyze and monitor your chatbot:

  • Chatbottest (a third-party platform containing 120 questions for assessing the user experience);
  • Botanalytics (an analytics service that evaluates every key aspect of your chatbot);
  • Dimon (a chatbot testing tool that indicates possible issues in your bot’s communication with the audience).


Automate chatbot testing. You can connect a testing bot to your own chatbot so that no manual intervention is required; it produces conversation transcripts you can review without being directly involved in the process.
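As a starting point for automation (a sketch only; the questions, file name, and system message below are examples, and the script calls the API directly rather than the interactive chat method), you could replay a fixed list of test questions and save the transcript for review:

# Sketch of automated chatbot testing: replay scripted questions and
# record the answers for later review.
import json
import os

import openai

openai.api_key = os.getenv("OPENAI_KEY")

SYSTEM_MESSAGE = "You provide customer support and know correct answers to questions."
TEST_QUESTIONS = [
    "What are your opening hours?",
    "How do I reset my password?",
    "asdfgh ???",  # a deliberately malformed question to test error handling
]

transcript = []
for question in TEST_QUESTIONS:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": question},
        ],
        temperature=0,  # more deterministic answers make reviews easier
    )
    answer = response.choices[0]["message"]["content"]
    transcript.append({"question": question, "answer": answer})

with open("chatbot_test_transcript.json", "w") as f:
    json.dump(transcript, f, indent=2)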

Deploying and Integrating

Once you create your own chatbot with GPT-3.5 or GPT-4, you should integrate it into your project properly. A well-crafted bot can help businesses grow and scale with ease, especially when web traffic volume goes up. For security, chatbots rely on two main measures: authentication (verifying a user’s identity) and authorization (granting a user permission to access a portal or carry out a certain task).
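How you implement these checks depends entirely on your stack, but as an illustrative, framework-free sketch (all names and tokens here are hypothetical placeholders), authentication answers “who is this user?” and authorization answers “what are they allowed to do?” before a request ever reaches the bot:

# Illustrative sketch of authentication and authorization in front of a chatbot.
# The token store and permission map are hypothetical placeholders.
API_TOKENS = {"secret-token-123": "alice"}          # token -> user (authentication)
PERMISSIONS = {"alice": {"chat", "view_history"}}   # user -> allowed actions (authorization)

def handle_request(token, action, message):
    user = API_TOKENS.get(token)
    if user is None:
        return "401 Unauthorized: unknown token"
    if action not in PERMISSIONS.get(user, set()):
        return "403 Forbidden: action not allowed for this user"
    # Only authenticated and authorized requests would reach the chatbot here.
    return f"OK: forwarding message from {user} to the chatbot"

print(handle_request("secret-token-123", "chat", "Hello!"))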

Determine who is involved in chatbot development. Knowing the key players helps at the setup and maintenance stages: marketing teams state the primary goals, UX/UI and creative teams design the visual concept, and engineers take responsibility for the technical implementation.

Clarify the business objectives. Your project should have strategic objectives. While forming them, make sure to answer questions such as:

  • What is the bot’s general mission? What do you want to achieve?
  • Why is this the objective?
  • What are the current solutions we use? Is there something to change?
  • What would the bot do better?

Define the bot’s user and know your client. By knowing your potential users, you’ll be able to pick the right devices, the kind of persona to develop, and how to shape the conversational solution. When analyzing the customer, consider a variety of attributes such as age, gender, geography, and language.

Deployment and integration are largely about how well you know your key audience. If you set up your chatbot to focus on personal needs and demands, it will stand out from the crowd.

Monitoring and Updating Your Chatbot

The implementation of a chatbot goes hand in hand with its maintenance. Innovations, changing market demands, and outdated content shouldn’t be ignored, so you must make sure your chatbot can easily adjust to these trends.


Update protocols and responses. A chatbot development process is a never-ending cycle of monitoring, updating, and testing before and after deployment. Even when you think it’s finally ready, it still requires regular attention. You must keep the available information up to date by:

  • programming more responses and reactions;
  • boosting personality;
  • enabling more access points, etc.

AI-powered chatbots that use ML, NLP, and sentiment analysis can make some of these updates automatic by learning from previous interactions with customers.

Keep user interactions under control. Monitoring is another crucial part of the chatbot development process. Examine your chatbot’s approach to customer service and analyze customer feedback. This gives you a clear understanding of how well it performs and which aspects need to be optimized. You should also track the instances where your chatbot fails to satisfy your customers’ needs (a simple logging sketch follows the list below), such as:

  • questions your chatbot cannot answer;
  • misinterpreted or new words that are not in its vocabulary;
  • wrong emotions/sentiments detected from customers.
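One lightweight way to capture these cases (a sketch; the log file name and the trigger phrase are only examples) is to record every exchange where the bot falls back to its “I don’t know” answer, so those questions can be reviewed and added to its knowledge later:

# Sketch: log exchanges where the bot admits it does not know the answer.
import csv
from datetime import datetime

def log_failed_exchange(question, answer, path="failed_exchanges.csv"):
    # Append the exchange to a CSV file whenever the fallback phrase appears.
    if "i don't know" in answer.lower():
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), question, answer])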

Provide human-based support. Chatbots don’t have unlimited capabilities; they are only as good as you make them. If something comes up that goes beyond the bot’s programming, you should have live agents ready to take over and continue the conversation the chatbot couldn’t handle.

Leverage data and analytics. Your chatbot can be an excellent data source for your corporate needs. It provides insights into customer behavior, sales, site traffic, and more, which you can use to optimize your development framework.

Conclusion

Using the ChatGPT API to build a chatbot is a surprisingly simple process, as long as the code is accurate. Its long-term usefulness will depend on smooth deployment, constant updates, and effective maintenance. By following the instructions in this article, anyone can create their own bot for personal or business needs, though you may still need some quality support along the way.

Feel free to contact us for help designing a smart chatbot. Our specialists have the knowledge and competence to create a highly productive product, so don’t miss the chance to get a chatbot that delivers results.
