Generative AI Series

Retrieval Augmented Generation (RAG) — Chatbot for a database with LlamaIndex (Text2SQL)

Implement the RAG technique using LangChain and LlamaIndex to build a conversational chatbot on a database, using Text2SQL.

A B Vijay Kumar
4 min read · Feb 7, 2024


This blog is part of an ongoing series on Generative AI and is a continuation of the previous blog, which talks about the RAG pattern and how RAG is used to augment prompts and enhance the content and context of an LLM with specific data.

Introduction

I covered the RAG pattern in my previous blog, “Prompt Engineering: Retrieval Augmented Generation (RAG)”. I introduced LlamaIndex to implement RAG, and we built an ask-the-documents chatbot in another blog. In this blog, we will explore how to build a chatbot that converts chat conversations to SQL and queries the database.

Configuration

I have installed Postgres on my MacBook and created two tables:

  • product_master: where I store the product_id and the name of the product.
  • inventory: where I reference the product_id and store the inventory of that product.

The following screenshots show the schema and the data that I inserted.
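If you want to recreate a similar setup yourself, here is a minimal, hypothetical sketch of the schema using sqlalchemy. The exact column names and types are assumptions and may differ from what the screenshots show; the credentials and database name come from the .env file later in this blog, and 5432 is the default Postgres port.

from sqlalchemy import create_engine, text

# Hypothetical schema sketch -- column names and types are assumptions,
# not copied from the screenshots.
engine = create_engine(
    "postgresql+psycopg2://orderadmin:orderadmin@localhost:5432/postgres"
)

with engine.begin() as conn:
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS product_master (
            product_id   SERIAL PRIMARY KEY,
            product_name VARCHAR(100) NOT NULL
        )"""))
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS inventory (
            product_id INTEGER REFERENCES product_master (product_id),
            quantity   INTEGER NOT NULL
        )"""))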

Let's now create a requirements.txt file with all the dependent libraries that we need for this project.

python-dotenv
openai
llama-index
streamlit
nltk
sqlalchemy
psycopg2

The key libraries here other than llama-index are sqlalchemy, which is the Python SQL toolkit, and psycopg2, which is the adapter we will be using to connect to Postgres.

Let's now set the environment variables in a .env file. In this file, we provide the OPENAI_API_KEY and the database variables, as shown below:

OPENAI_API_KEY=<<Your OpenAI key>>
DB_USER=orderadmin
DB_PASSWORD=orderadmin
DB_HOST=localhost
DB_NAME=postgres

Let's now walk through the application code.

Most of the libraries are standard and were covered in the previous blog, except for the following, which are important:

  • sqlalchemy: This is the Python SQL toolkit for object-relational mapping. We will be using it to connect to our Postgres database.
  • SQLDatabase: This LlamaIndex object is a wrapper around the sqlalchemy engine.
  • NLSQLTableQueryEngine: This is the most important component; it implements the natural-language-to-SQL query engine.

We load the environment variables and initialize the storage path and the database connection URL, using a standard Postgres URL with the psycopg2 adapter. We then create the OpenAI object and pass it to the ServiceContext, so that LlamaIndex uses the requested LLM for RAG. Finally, we create the SQLDatabase object, passing it the names of the tables, and the query engine that we will use for querying the database in natural language.
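A minimal sketch of this setup is shown below. It assumes the pre-0.10 llama-index import paths (current at the time of writing), the default Postgres port 5432, gpt-3.5-turbo as the model, and a hypothetical chat_with_data() helper; adjust these to match your environment and the code in the repo.

import os
from dotenv import load_dotenv
from sqlalchemy import create_engine
from llama_index import SQLDatabase, ServiceContext
from llama_index.llms import OpenAI
from llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine

# Load OPENAI_API_KEY and the DB_* variables from the .env file
load_dotenv()

# Standard Postgres URL, using the psycopg2 adapter
connection_uri = (
    f"postgresql+psycopg2://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}"
    f"@{os.getenv('DB_HOST')}:5432/{os.getenv('DB_NAME')}"
)
engine = create_engine(connection_uri)

# Make LlamaIndex use the requested LLM for RAG
llm = OpenAI(temperature=0.1, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm)

# Wrap the sqlalchemy engine and expose the two tables to the query engine
sql_database = SQLDatabase(engine, include_tables=["product_master", "inventory"])
query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["product_master", "inventory"],
    service_context=service_context,
)

def chat_with_data(prompt):
    # Hypothetical helper: translate the natural-language prompt to SQL,
    # run it against the database, and return the synthesized answer.
    response = query_engine.query(prompt)
    return str(response)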

The above method queries the SQL query engine by passing it the prompt.

The following code creates a chat interface using Streamlit. This is repeated from my previous blog; please refer to that blog for details on what it does.
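A minimal sketch of such a chat interface, using Streamlit's chat elements and the hypothetical chat_with_data() helper defined above, could look like this (the page title and prompt text are placeholders):

import streamlit as st

st.title("Chat with your database")

# Keep the conversation in the session state so it survives reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Read the next user prompt, answer it, and append both to the history
if prompt := st.chat_input("Ask a question about the inventory"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    answer = chat_with_data(prompt)
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)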

Let's now run the application.

Run the application

Since it's a Streamlit application, we use the streamlit run command to run it. This will launch a web page, which is shown below.
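Assuming the application file is named app.py (a hypothetical name; use whatever your file is called), the command would be:

streamlit run app.py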

The following screenshot shows the chat, how the prompt is translated to SQL, and how the records are retrieved from the database.

Conclusion

There you go… As you can see, it is super easy to implement complex RAG-based Text2SQL using LlamaIndex. Please leave your feedback and comments. I always learn from hearing from you all…

You can find the complete code in my GitHub here.

We are just scratching the surface. I will be blogging about more features in future blogs. Until then, stay safe and have fun… ;-)

