Create a ChatGPT Powered Web App to Chat With Historical Philosophers, Part 2: Deploying to Streamlit

Josh Johnson
13 min read · Jun 25, 2023


Recap

Welcome to Part 2 of our series on deploying a ChatGPT-powered web application to a Streamlit site. If you haven’t yet gone through Part 1, I recommend you do so here before proceeding.

In this segment, we’ll focus on using what we built in the previous article to deploy our application to a Streamlit website.

For Part 1, see this link.

Streamlit

Streamlit is an intuitive Python package that facilitates the creation of web applications capable of running directly in your browser. It’s particularly designed for data science applications, offering a seamless experience when deploying models or managing databases through APIs.

Whether it’s interacting with remote databases like Snowflake or Amazon Web Services, running inference from local or cloud-deployed models, or creating API endpoints for models hosted on AWS servers, Streamlit provides a versatile platform. An added bonus is the Streamlit Community Cloud, which allows you to deploy your apps publicly, like the app we build in this tutorial. You can access it here!

To get started, you will need the following:

Requirements:

  1. Streamlit Community Cloud account: This is where you can deploy your apps. Sign up for a free account, which allows you to deploy unlimited public apps and one private app accessible by invitation only.
  2. Python and the Streamlit Python Package: Install the Streamlit package from your terminal by running pip install streamlit.
  3. GitHub Account: This isn’t necessary for creating an app to run on your local machine. However, if you wish to deploy to the Streamlit Community Cloud, a GitHub account is essential.
  4. Material from Part 1: Ensure you’ve read and understood Part 1, as it contains the essential code needed to interact with the ChatGPT API.

Now that we’re all set, let’s dive in and continue our journey to deploying a ChatGPT-powered web application on Streamlit!

Getting Started

In Part 1 of this series, you crafted the aiagent.py file, a vital component for interacting with ChatGPT. Now, in Part 2, we're going to develop the code that instructs Streamlit on how to construct your app.

Once you’ve initialized a new Python file, you’re ready to start programming. The initial steps involve importing the Streamlit package and the AIAgent from the aiagent.py file, which should reside in the same directory.
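
Assuming the Part 1 file is named aiagent.py and defines an AIAgent class, the top of the app file can look like this:

```python
# app.py
import streamlit as st

# AIAgent comes from the aiagent.py file built in Part 1;
# it must live in the same directory as this script.
from aiagent import AIAgent
```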

Session ID and Cached Resources

Streamlit’s operation needs some clarification. When a live app runs on Streamlit, it executes the entire file whenever there’s a change. This constant refreshing can pose challenges, particularly when you’re initializing variables that are expected to change later. For instance, any alteration in user input or slider bar movement will reset every global scope variable.

There’s ongoing dialogue in the Streamlit community about the issue of session persistent variables. As of now, Streamlit hasn’t implemented a direct solution for this, which is primarily because it’s not designed for this kind of operation.

However, there’s a feature called st.session_state that you can use as a dictionary to store function-generated variables. But take note: these are not session-specific variables. They persist across all sessions and ALL USERS. For instance, if you save your chat history as st.session_state['history'], every user visiting your site will be able to view it - a clear privacy concern. So, this method is NOT recommended.

Here’s the two-part solution we propose: leveraging cached resources and a session ID.

Cached Resources: Streamlit, being optimized for data science, discourages repeatedly recreating resources like inference models. Instead, it provides a function decorator, @cache_resource, that caches resources, avoiding their recreation. As long as the function inputs remain consistent, the site will use the cached version. However, these cached resources will also be shared, like items stored in st.session_state. This is convenient if the resources are stateless, but it can also lead to session bleed-through.

Session ID: It’s possible to fetch a unique session ID that changes with each new tab opened running a Streamlit app. The code provided below will retrieve that ID.
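
One way to retrieve that ID uses Streamlit’s script-run context (this import path is internal and has moved between Streamlit versions, so treat it as a sketch rather than the only option):

```python
from streamlit.runtime.scriptrunner import get_script_run_ctx


def get_session_id() -> str:
    """Return the ID of the current browser session.

    Each open tab gets its own ID, and the ID changes when the page is
    refreshed, which is exactly what we want for re-initializing cached,
    per-session resources.
    """
    ctx = get_script_run_ctx()  # None when not running inside Streamlit
    return ctx.session_id if ctx is not None else "no-active-session"
```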

Combining Them: Remember, cached resources aren’t recreated if the inputs don’t change. This means that as long as the Session ID remains constant, any cached resource with that input will stay cached. Conversely, if the Session ID changes, the resource will be re-initialized.

We can exploit this behavior to create a session-specific AIAgent whose attributes do not bleed into other sessions. This offers a convenient way to store variables like chat histories. Recall from Part 1 of this series that the AIAgent class has several attributes, including self.prefix, self.response, and self.history. These correspond to the prefix for subsequent prompts (to maintain consistency in chat), the dynamic text field content that changes with user interaction, and the chat history, respectively.

Functions:

Streamlit executes the entire code file every time a user interacts with the app. Therefore, it’s recommended to encapsulate as much of your code into functions as possible to ensure smoother code flow. We’ll start with three essential functions.

AIAgent Instantiation Function: This function is responsible for creating an instance of our AIAgent and initializing the AIAgent.response attribute. This response will act as the initial value for a dynamic text block we'll create later. As we want this text to adapt and not revert to its initial state with every user interaction, we store it in the cached AIAgent object. Although the function doesn't directly use the session_id, it's included as an input to trigger the caching decorator to reinstantiate the AIAgent every time a new user opens the app or the same user refreshes the page.
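
Here is a sketch of such a function. The AIAgent constructor call and the initial response string are assumptions based on Part 1; adapt them to your own class:

```python
import streamlit as st
from aiagent import AIAgent


@st.cache_resource
def create_agent(session_id: str) -> AIAgent:
    """Create one AIAgent per browser session.

    session_id is never used inside the function; it exists only to act as
    the cache key, so a fresh agent is built whenever a new session starts
    or the page is refreshed.
    """
    agent = AIAgent()  # constructor arguments, if any, come from your Part 1 class
    # Starting text for the dynamic text cell on the main page.
    agent.response = (
        "Philosophers are waiting patiently, possibly smoking a cigar or pipe"
    )
    return agent
```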

Chat History Formatting Function: If you remember from Part 1, the openai.ChatCompletion.create method sends a list of dictionaries to the ChatGPT API. Each dictionary represents either a user prompt or a model reply. This arrangement allows the remote model to maintain a 'memory' of the conversation, despite this memory actually residing on our end. This function accepts the message history and formats it for the user.
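
A sketch of that helper, assuming each history entry is a dictionary with 'role' and 'content' keys as in Part 1:

```python
def format_history(history: list[dict]) -> str:
    """Format the raw ChatGPT message history as readable text for the sidebar."""
    lines = []
    for message in history:
        if message["role"] == "system":
            continue  # hide the system prompt from the user
        speaker = "You" if message["role"] == "user" else "Philosopher"
        lines.append(f"{speaker}: {message['content']}")
    return "\n\n".join(lines)
```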

Query Function: This is the main function where the real action happens. You’ll notice an object named ‘current_response’ here: a Streamlit widget that displays our dynamic text cell. When the page first opens, the widget displays the text: “Philosophers are waiting patiently, possibly smoking a cigar or pipe”. Before sending the query to the model, it’s crucial to verify that the user’s prompt actually contains something; the prompt is a global variable generated by a text input widget, which we will explain later.

Querying the remote ChatGPT model takes some time, resulting in a slight delay between the prompt and response. During this waiting period, the dynamic text box will display a message saying, “The forum is considering your query and will send a representative soon.”

At times, ChatGPT may be overwhelmed and return an error. We’ve implemented a try-except block to manage such scenarios. If an error occurs, the dynamic text box will display the message: “I’m sorry, all philosophers are busy helping other wisdom seekers. Please try again later.”

If the query is successful, the model’s reply will appear in the dynamic text box and will also be added to the message history, which is displayed on the sidebar.
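
Putting those pieces together, here is a sketch of the query function. I pass the agent, the slider temperature, and the dynamic text placeholder in explicitly, and I assume the AIAgent exposes a query() method that calls the ChatGPT API and appends to its history; rename things to match your Part 1 code:

```python
def query_agent(agent, prompt, temperature, current_response):
    """Send the user's prompt to ChatGPT and update the dynamic text cell.

    current_response is a Streamlit placeholder (e.g. st.empty()) that gets
    overwritten at each stage: waiting, error, or successful reply.
    """
    if not prompt:
        return  # nothing to send yet; leave the text cell alone
    current_response.markdown(
        "The forum is considering your query and will send a representative soon."
    )
    try:
        # Assumed AIAgent method from Part 1: sends the prompt plus the stored
        # history to the ChatGPT API, appends both to the history, and returns
        # the model's reply.
        agent.response = agent.query(prompt, temperature=temperature)
    except Exception:
        # ChatGPT is occasionally overloaded or the request fails outright.
        agent.response = (
            "I'm sorry, all philosophers are busy helping other wisdom seekers. "
            "Please try again later."
        )
    current_response.markdown(agent.response)
```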

One of our objectives in using the AIAgent to store variables is to maintain a text box that can change without resetting each time Streamlit re-runs the page. Creating such a text box turns out not to be straightforward. Initializing its value inside the function decorated with @cache_resource and storing it in the AIAgent lets us work with Streamlit’s native behavior and sidestep the challenge of persistent variables.

Creating the Widgets for the App:

Streamlit simplifies the process of creating widgets for your app. Each type of widget has its own function, and widgets are arranged on the page in the order they appear in the code.

Title:

This one is pretty straightforward. We use the st.title() function to create a header for our page.
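
For example (the title text is just a placeholder):

```python
import streamlit as st

st.title("Chat with a Historical Philosopher")
```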

User Text Input:

This is where it gets interesting. We create an st.text_input widget to let users type in their questions. The widget returns whatever the user enters, which we store as ‘prompt’; until the user types something, there is no query to send.

The widget provides options to set a label, a help icon with a mouse-over value, and placeholder text. I’ve set the value to an empty string so that the box doesn’t contain any initial text, opting to use the placeholder argument instead to create faint text that isn’t an actual input value. Alternatively, you could set the value to be an initial prompt if you wanted a default input.
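
A sketch of that widget; the label, help text, and placeholder strings are only illustrative:

```python
import streamlit as st

prompt = st.text_input(
    label="Ask the philosophers a question",
    value="",  # start with an empty box
    help="Type a question, then click the button below to send it",
    placeholder="What is the good life?",  # faint hint text, not a real value
)
```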

Instantiate the AIAgent:

This code will instantiate a new AIAgent, but only if one hasn't already been cached for the current session_id (that is, on a new session or after a page refresh). Otherwise, it uses the cached version.
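
Using the helpers sketched earlier (get_session_id and create_agent are the names I assumed above):

```python
session_id = get_session_id()      # unique per tab, changes on refresh
agent = create_agent(session_id)   # cached until the session ID changes
```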

Input Button

To provide more control over the flow, I’ve used a button to trigger the query to the model. The st.button widget allows us to assign some text to the button and define what occurs when it is clicked. In this case, it will execute the query_agent() function to send the text from the text input widget to the ChatGPT API.
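
As a minimal sketch, one way to wire this up is to capture the click here and act on it after the dynamic text placeholder (created further down the page) exists:

```python
import streamlit as st

# Capture the click here; the query itself is sent further down the script,
# once the dynamic text placeholder (current_response) has been created.
send_clicked = st.button("Ask the philosophers")
```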

Sidebar

Streamlit makes it simple to add a sidebar or top bar to your app. We simply utilize the sidebar functions. In our case, we’ll add a slider to alter the “temperature” of the model reply, a button to clear the chat history and initiate a new conversation, and a text widget to showcase the conversation history for the user.

st.sidebar.slider

This widget allows the user to adjust a value within a specified range. We utilize this to manipulate the “temperature” of the ChatGPT model’s reply.

“Temperature” refers to the randomness level in a model’s response; a setting of 0 yields nearly identical replies each time. Although the ChatGPT API accepts temperatures up to 2, responses at that level often degenerate into mashed-up words and irrelevant word parts, so I cap the slider at 1 to keep replies sensible.
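
A sketch of the slider; the label, default value, and step are illustrative:

```python
import streamlit as st

temperature = st.sidebar.slider(
    label="Temperature (randomness of the reply)",
    min_value=0.0,
    max_value=1.0,  # the API accepts up to 2, but replies degrade badly there
    value=0.7,      # an arbitrary default
    step=0.1,
)
```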

st.sidebar.button

The button here triggers the AIAgent.clear_history method, which resets the AIAgent to its initial message history, including the system message, initial prefix, and response for the dynamic text cell.
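
For example, using the clear_history method from Part 1 (the button label is just a placeholder):

```python
import streamlit as st

if st.sidebar.button("Start a new conversation"):
    # Reset the cached agent's message history, prefix, and response text.
    agent.clear_history()
```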

st.sidebar.markdown

This widget places a markdown text element on the sidebar to display a formatted version of the AIAgent.history attribute. Even though I’m not using any markdown syntax, I prefer the markdown widget over the text widget because it formats more pleasingly and wraps itself to fit the sidebar.
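
Using the formatting helper sketched earlier:

```python
import streamlit as st

# Show the running conversation in the sidebar.
st.sidebar.markdown(format_history(agent.history))
```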

The Dynamic Text Cell

This seemingly simple line of code works because we combined a cached resource, a session ID, and values stored in the AIAgent object, rather than scattering variables throughout the global namespace. The text it displays changes with the state of the session: the initial greeting before any query, the waiting message while a reply is pending, an error message if the query failed, or the model’s most recent response.

This text will appear at the bottom of the main page, since it is the last widget defined. It’s important to remember that the creation order of widgets determines their appearance order on the page.
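
A sketch of that final piece, using the placeholder-plus-click pattern from the button section above; the names are assumptions carried over from the earlier sketches:

```python
import streamlit as st

# The last widget on the main page: a placeholder showing whatever the cached
# agent currently holds (initial greeting, error message, or latest reply).
current_response = st.empty()
current_response.markdown(agent.response)

# Now that the placeholder exists, act on the button click captured earlier.
if send_clicked:
    query_agent(agent, prompt, temperature, current_response)
```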

Deployment

Streamlit’s deployment process is a standout feature, providing ease and flexibility. It’s capable of fetching code directly from GitHub to run on its servers, and even detects new commits, thereby allowing dynamic updates to your app as your code evolves.

This functionality makes it surprisingly feasible to build your app using GitHub’s text editing web interface. You’ll have the advantage of observing changes on your Streamlit site immediately after each commit. It’s worth noting, however, that if your app has active users, frequent changes and updates could be disruptive to their experience.

Now let’s see how to get this baby on the web.

GitHub Repository

Begin by setting up a GitHub repository for your project, which will need to include certain specific files. Here is a link to my repo for this project, if you want to fork or explore it.

Streamlit primarily requires one file to function: app.py. This is the file where Streamlit will search for the code needed to construct your app. Keep in mind, however, the file doesn't necessarily need to bear the name app.py; just ensure that you remember which file contains the code for your Streamlit app.

In many instances, it’s also critical to incorporate a requirements.txt file. This is particularly necessary if your app needs to import any specific libraries to operate. The app we're discussing requires three libraries: streamlit, openai, and aiagent.

The Streamlit site already provides the streamlit package, and aiagent is a custom module included within the repository. However, you do need to tell Streamlit to install the openai package, so that is what goes in requirements.txt.
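
So a minimal requirements.txt for this app can be a single line (version pinning is optional):

```
openai
```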

You are now ready to create a Streamlit account, link it to your GitHub repository, and deploy your shiny new app!

Deploy to Streamlit

If you haven’t already, start by creating a Streamlit account here. I used the ‘Sign in with GitHub’ option, since I’m connecting my account to GitHub anyway.

Once you’ve done that, click the big blue “New app” button.

This will bring up the app deployment page.

The default app deployment page is an ‘Interactive Picker’. Streamlit peruses your GitHub and will look for likely repos for deployment. You should be able to find your app repo by clicking in the first box and selecting the one you want to deploy.

If not, you can select “Paste GitHub URL” here and paste in the URL to the app.py file in your repository. I actually find this to be the easier way.

You can choose a URL for your deployed app in the second box.

But wait! Don’t deploy it yet! We have a secret…

Click on ‘Advanced settings’ to bring up the next interface.

This section is important.

First off, be sure that Streamlit is using the same version of Python you used to create your app. In this case I used 3.9, but you should make sure it matches the version you used.

Next are the secrets. Streamlit secrets are how we securely store things like passwords and API keys. The app can use them, but visitors cannot see them. Since the OpenAI API requires an API key, this is where you’ll put it.

You may recall from Part 1 of this tutorial that we used openai.api_key = streamlit.secrets['OPENAI_API_KEY'] to set the API key that allows us to make API calls charged to our account. Be sure of two things:

  1. The dictionary key you use in your code to access the secret API key matches the key you set here.
  2. You’ve added quotation marks around the API key you want to store, as shown in the example below.
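
Streamlit secrets use TOML syntax, so the entry in the Secrets box looks something like this (with your real key in place of the placeholder):

```toml
OPENAI_API_KEY = "sk-your-key-goes-here"
```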

Now, you are ready to deploy and enjoy!

If you run into errors, you can get some helpful outputs in the Manage app pull-out. You can see this when you are logged in, but other visitors will not see it.

You can adjust the code for your app directly from the GitHub editor if you wish to debug from there, or you can deploy it locally on your machine for debugging, then push the working copy to your remote GitHub repository.

The second option is probably better, especially if you expect anyone else to be interfacing with your app.

Conclusion

In this tutorial, we delved into the process of deploying an app via Streamlit by linking a GitHub repository to your Streamlit account. This process also illuminated how to effectively manage the control flow of your Streamlit app, keeping in mind that the entire code file is re-executed each time a user interacts with the deployed app.

You discovered the use of session IDs and class attributes of cached resources as a strategic approach to retain information and prevent data leakage between users. This method enables your app to exhibit dynamic responsiveness and maintain a pseudo-memory state, despite Streamlit’s inherent stateless design.

Moreover, we explored how to leverage widgets to enhance user interaction and provide real-time control over the app’s behavior. Importantly, we learned how to circumvent the challenges associated with Streamlit’s statelessness, such as implementing a dynamic text box that can retain changes, using a combination of cached resources, a session ID, and storing values in an object rather than having them in the global namespace.

We covered some crucial aspects of app deployment, including the importance of the app.py file and the necessity of a requirements.txt file for specifying library dependencies. All these insights equip you to create more robust, interactive, and user-friendly applications with Streamlit.

Finally, you saw how to deploy your app live to the web for others to enjoy.

Links:

Project Repository: https://github.com/Caellwyn/chat-with-a-philosopher/tree/main

Deployed Streamlit App: https://caellwyn-chat-with-a-philosopher-app-hgvk78.streamlit.app/

OpenAI API Reference: https://platform.openai.com/docs/api-reference/completions/create

Streamlit API Reference: https://docs.streamlit.io/library/api-reference

Written by Josh Johnson

I'm a data scientist with a background in education. I empower learners to become the folks they want to be.
