
Lab: Building on HuggingFace Spaces with Streamlit and Gradio

Let’s start with what Streamlit and Gradio are and how you can use them to build interactive AI applications.
These libraries are popular for creating web apps quickly, especially when showcasing data science projects and machine learning models.

What is Streamlit?

Streamlit is designed to turn data scripts into shareable web apps in minutes.

Streamlit: A Faster Way to Build and Share Data Apps


Streamlit is an open-source Python framework for quickly building and sharing data apps. It turns data scripts into interactive web apps in minutes, all in pure Python and without requiring any frontend experience. With Streamlit, you can build dashboards, generate reports, or create chat apps, then share, manage, and deploy them directly from the platform. It is designed to be beginner-friendly for data scientists and AI/ML engineers, has a vibrant community, and is free to use.
Some key features and benefits of Streamlit include:
Ease of use: Create dynamic data apps with only a few lines of code, even if you are not a web developer.
Rapid development: Transform Python scripts into interactive web apps in minutes, significantly reducing development time compared to traditional methods.
Shareability: Easily share and deploy your apps so others can access and interact with your data.
Community support: An active and growing community means you can benefit from shared knowledge and resources, and contribute to the framework's future development.
Overall, Streamlit is a powerful tool for anyone who wants a fast, efficient way to create interactive web interfaces for data projects.
Key Features:
Simple to use: You can create a web app with a few lines of Python code.
Interactive widgets: Includes widgets like sliders, buttons, checkboxes, and more that let users interact with your application.
Automatic updates: The app automatically updates the output whenever the user changes the input.
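The automatic updates come from Streamlit's execution model: the entire script reruns from top to bottom whenever an input widget changes. As a framework-free sketch of that idea (the `run_script` function below is a hypothetical stand-in for your app script, not a Streamlit API):

```python
# Conceptual sketch of Streamlit's rerun model: the whole script
# re-executes with the widget's current value on every interaction.
def run_script(slider_value: int) -> str:
    # Everything here "reruns" each time slider_value changes.
    squared = slider_value ** 2
    return f"{slider_value} squared is {squared}"

# Simulate the user moving a slider from 2 to 5: two full "reruns".
for value in (2, 5):
    print(run_script(value))
```

In a real Streamlit app, `slider_value` would come from `st.slider`, and Streamlit performs the rerun for you.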
Basic Usage:
Installation: First, you need to install Streamlit using pip:
pip install streamlit
Creating an app: Write a Python script using Streamlit’s functions to create different elements of your app:
import streamlit as st

st.title('Hello, Streamlit!')
if st.button('Say hello'):
    st.write('Why, hello there!')
else:
    st.write('Goodbye')
Running the app: Run the app by navigating to the directory of your script in your terminal and typing:
streamlit run your_script.py
This command starts a server and opens your browser to display your app.

What is Gradio?

Gradio is another Python library that makes it easy to create and host UIs for prototyping your machine learning models.
Gradio is widely used for creating simple interfaces where you can input data into a model and get responses.
Key Features:
Ease of setup: Quickly create interfaces for models.
Built-in inputs and outputs: Supports a variety of input and output interfaces, including text, images, audio, and more.
Integration: Easily integrates with popular machine learning frameworks and can be embedded in Python notebooks.
Basic Usage:
Installation: Install Gradio via pip:
pip install gradio
Creating an interface:
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

iface = gr.Interface(fn=greet, inputs="text", outputs="text")
iface.launch()
This code snippet creates a web interface for a function that greets the user by name.
Running the interface: The launch() method opens your default web browser and serves your interface locally.

Comparison and Choice

Use Streamlit if: You’re creating data-driven applications requiring minimal frontend code, especially for data visualization.
Use Gradio if: You need a straightforward way to demo AI models, allowing users to test the model’s capabilities with different inputs.
Both libraries are designed to make it easy for data scientists and developers to showcase their work without needing extensive web development skills.
They can be integrated into larger applications or used standalone for demonstrations, experiments, or educational purposes.


How do I actually get the code to run in a Space on HuggingFace?

Deploying and running code in a Hugging Face Space involves a few logical steps that integrate a version control system, such as GitHub, with the deployment capabilities of Hugging Face.
Here is how you can proceed to get your code running in a Hugging Face Space:

Step 1: Prepare Your Code

Ensure your application code is ready and tested locally. Your project typically would include:
Main script: This could be a Python script that uses libraries like streamlit or gradio for the web interface.
Requirements file: A requirements.txt file listing all the necessary Python libraries your app depends on.
Model files: If your app uses machine learning models, make sure they are loaded properly either from Hugging Face's model hub or from local files.
Assets: Any additional files your application uses (like images, data files).
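As a sketch, a typical repository layout for such an app might look like the following (file and folder names are illustrative):

```
ai-app/
├── app.py            # main Streamlit or Gradio script
├── requirements.txt  # Python dependencies
├── model/            # optional local model files
└── assets/           # images, data files, etc.
```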

Step 2: Set Up a GitHub Repository

Create a Repository: If you haven’t already, create a new repository on GitHub.
Upload Your Code: Push all your project files to this repository. Make sure your main script, requirements file, and any other necessary files are included.

Step 3: Connect to Hugging Face Spaces

Create a Space: Log in to Hugging Face, navigate to Spaces, and create a new Space.
Link Your Repository: During the setup, you’ll be prompted to link your GitHub repository to your Space. Provide the repository URL and select the appropriate settings based on your application type (e.g., Streamlit, Gradio).
Environment Detection: Hugging Face Spaces will detect the appropriate environment from your repository files. For instance, if you have a streamlit_app.py file, it will configure the environment for Streamlit.

Step 4: Deployment

Automatic Deployment: Once your GitHub repository is linked, any push to your repository (like updating the code or pushing new files) triggers a deployment process.
Build and Run: Hugging Face Spaces builds your application based on the contents of your GitHub repository. This process might take a few minutes, after which your app will be accessible online via a unique URL provided by Hugging Face.

Step 5: Access and Share Your Space

Test Your App: Visit the URL provided by Hugging Face to see your app in action. Test all functionalities to ensure everything works as expected in the web environment.
Share Your Space: You can share the URL with others, allowing them to interact with your application.

Tips for Troubleshooting

Logs: If something goes wrong, check the logs in your Hugging Face Space for errors during the build or runtime.
Dependencies: Make sure all dependencies listed in requirements.txt are correct and installable.
Compatibility: Ensure that your code is compatible with the versions of Python and libraries supported by Hugging Face Spaces.
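One common way to reduce build surprises is to pin dependency versions in requirements.txt, so the Space installs exactly what you tested locally. A sketch (the version numbers below are illustrative, not recommendations):

```
streamlit==1.32.0
pandas==2.2.0
```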
By following these steps, you can effectively deploy and run your AI applications on Hugging Face Spaces, leveraging the platform’s capabilities to make your application accessible and interactive.
Below is a minimum viable product (MVP) for a basic web application using Streamlit, which students can deploy using Hugging Face Spaces. This example demonstrates a simple text transformation app that converts input text to uppercase.

Recipe to get your ML model up and running as a Hugging Face Space:

Step 1: Prepare the Python Script

Create a file named app.py and include the following Python code:
import streamlit as st

def convert_to_uppercase(input_text):
    return input_text.upper()

st.title('Text Uppercaser')

user_input = st.text_area("Enter text to convert:", "Type here...")
if st.button('Convert'):
    result = convert_to_uppercase(user_input)
    st.subheader('Converted Text:')
    st.write(result)

This script uses Streamlit to create a web app that has:
A title (st.title)
A text area for users to enter text (st.text_area)
A button that triggers text conversion (st.button)
Output of the converted text (st.subheader and st.write)

Step 2: Create the requirements.txt File

Next, you'll need a requirements.txt file that lists all necessary libraries. For this simple Streamlit app, the file should include:
streamlit
This ensures that Streamlit is installed in the environment where your app is running.

Step 3: Deploy on GitHub

Create a GitHub Repository: If you haven’t already, create a new repository on GitHub.
Upload Your Code: Push both app.py and requirements.txt to this repository.

Step 4: Link to Hugging Face Spaces

Create a Space: Log in to Hugging Face, navigate to Spaces, and create a new Space.
Link Your Repository: Choose Streamlit as the web framework and provide the GitHub repository URL when prompted.

Step 5: Launch and Test

Once you link your repository and push the files, Hugging Face will automatically deploy your Streamlit app. It will be accessible through a URL provided by Hugging Face.
Test the app by entering text and clicking the "Convert" button to see the text transformed to uppercase.

Conclusion

This simple project is an excellent starting point for students to understand the basics of creating and deploying web apps with Streamlit and Hugging Face Spaces. They can experiment by adding new features, changing the app's functionality, or even trying out different frameworks like Gradio.


To complete and enrich the minimum viable product (MVP) demonstration for deploying an application on Hugging Face Spaces, you can extend the existing simple example to incorporate a few additional elements that showcase a more functional and interactive application. Here are some suggestions to consider:

Step 1: Enhance the Application

Add More Interactivity:
Incorporate Multiple Features: Expand the functionality of your app by adding options for more text manipulations (e.g., convert to lowercase, reverse text, count words).
Visual Feedback: Include visual elements like charts or graphs, especially useful if your app will eventually handle more complex data processing that benefits from visualization.
Example Enhanced Script (app.py):
import streamlit as st

def convert_text(input_text, operation):
    if operation == 'uppercase':
        return input_text.upper()
    elif operation == 'lowercase':
        return input_text.lower()
    elif operation == 'reverse':
        return input_text[::-1]
    elif operation == 'word count':
        return f"Word count: {len(input_text.split())}"
    return input_text  # fallback for unknown operations

st.title('Text Transformer')

user_input = st.text_area("Enter text:", "Type here...")
operation = st.selectbox("Choose operation:", ['uppercase', 'lowercase', 'reverse', 'word count'])

if st.button('Transform'):
    result = convert_text(user_input, operation)
    st.subheader('Result:')
    st.write(result)
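Because the transformation logic is a plain Python function, it can be sanity-checked without launching Streamlit. A quick sketch (duplicating the `convert_text` logic here so the snippet runs standalone):

```python
# Standalone copy of the app's transformation logic for quick testing.
def convert_text(input_text, operation):
    if operation == 'uppercase':
        return input_text.upper()
    elif operation == 'lowercase':
        return input_text.lower()
    elif operation == 'reverse':
        return input_text[::-1]
    elif operation == 'word count':
        return f"Word count: {len(input_text.split())}"

assert convert_text("abc", 'uppercase') == "ABC"
assert convert_text("abc", 'reverse') == "cba"
assert convert_text("one two three", 'word count') == "Word count: 3"
print("all checks passed")
```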

Step 2: Add Additional Dependencies (if any)

For a more complex app, especially one that involves data visualization or processing, update your requirements.txt file to include additional libraries:
streamlit
pandas       # if using data manipulation
matplotlib   # for plotting, if necessary

Step 3: Provide Documentation

Create a README.md: Add a README.md file in your GitHub repository explaining:
What the app does.
How to run it locally.
Any special instructions or prerequisites.
Example README.md Content:
# Text Transformer App

This application transforms input text based on the selected operation.

## Features
- Convert text to uppercase.
- Convert text to lowercase.
- Reverse text.
- Count words in text.

## Running Locally
To run this app locally, clone the repository and run the following command:

streamlit run app.py

## Dependencies
Install all dependencies using:

pip install -r requirements.txt

Step 4: Demonstrate Real-World Application

Use Case Scenario:
Propose a real-world problem that your app could solve, like processing customer feedback or analyzing social media posts.
Demonstrate how someone might use this app in a practical setting.
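For instance, the word-count operation could be the seed of a simple customer-feedback analyzer. A hypothetical pure-Python sketch (the `top_words` helper is illustrative and not part of the app above):

```python
from collections import Counter

def top_words(feedback_entries, n=3):
    """Count the most frequent words across a batch of feedback strings."""
    words = []
    for entry in feedback_entries:
        # Lowercase and strip trailing punctuation so "fast." == "fast"
        words.extend(w.strip(".,!?") for w in entry.lower().split())
    return Counter(words).most_common(n)

feedback = [
    "Great app, very fast",
    "The app is fast but the app crashed once",
]
print(top_words(feedback))
```

In a real setting you would feed in feedback collected through the app's text area and display the counts with `st.write` or a bar chart.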

Step 5: Deploy and Monitor

Monitor and Update:
Once your app is live on Hugging Face Spaces, monitor its usage.
Collect feedback and make necessary updates or fixes.
By following these steps, your MVP on Hugging Face Spaces will not only be fully functional but will also provide a comprehensive demonstration of how to build and deploy an interactive web application. This approach will help students grasp the full cycle of app development from conception through deployment and iterative improvement based on user feedback.
When it comes to choosing between Streamlit and Gradio for prototyping, there are a few factors to consider. Let's look at the differences between the two.

Streamlit

Streamlit is an open-source framework that allows you to quickly build and deploy web applications in Python. It is designed to be easy to learn and use, especially for data scientists and machine learning engineers familiar with Python. It provides a simple, intuitive API, advanced customization options, and a wide range of integration possibilities. Streamlit apps are easy to share and can be deployed on various platforms, including cloud services and local servers, and Streamlit takes care of the app's visual appearance, making it look nice by default.

Gradio

Gradio is another open-source Python library, specifically designed for building machine learning model interfaces. It aims to make it easy for developers to create user interfaces for their models, so users can interact with machine learning systems without needing to code. Gradio emphasizes ease of use, multi-model deployment, and security. It provides interactive components like sliders, dropdown menus, and checkboxes to control model inputs and display outputs using visualizations, and it integrates with popular deep learning frameworks like PyTorch and TensorFlow.

Comparison

Here are some key points to consider when comparing Streamlit and Gradio:
Learning curve: Streamlit is beginner-friendly and easy to learn, especially if you are already familiar with Python. Gradio is also relatively easy to pick up, especially if you are already familiar with machine learning concepts.
Customization: Streamlit offers advanced customization options and a wider range of integration possibilities. Gradio has some limitations in customization and complexity, but it is well suited to simpler applications and dashboards.
Deployment: Streamlit apps are easy to share and can be deployed on various platforms, including cloud services and local servers. Gradio apps are most commonly deployed on Hugging Face Spaces, or shared temporarily via `launch(share=True)`.
How does deploying an AI app with Hugging Face Spaces differ from using a Google Colab notebook, and how is Hugging Face better?
Deploying an AI application using Hugging Face Spaces compared to a Google Colab notebook involves different workflows and platforms with their respective strengths and specific use cases. Here’s a detailed comparison in a tabular format to highlight how these two approaches differ and the advantages of using Hugging Face Spaces for deploying AI apps.

Comparison Table: Hugging Face Spaces vs. Google Colab

| Aspect | Hugging Face Spaces | Google Colab |
| --- | --- | --- |
| Primary purpose | Hosting interactive ML apps and demos | Interactive notebook for development and experimentation |
| Persistence | App stays online continuously at a public URL | Sessions are temporary and shut down after inactivity |
| User interface | Web UI built with Streamlit or Gradio | Notebook cells (code and output) |
| Sharing | Share a URL; anyone can use the app without running code | Share the notebook; viewers must run the cells themselves |
| Deployment workflow | Git-based; pushes trigger automatic rebuilds | No built-in deployment; notebooks are not hosted apps |
How is Hugging Face Spaces Better for AI App Deployment?

Hugging Face Spaces is particularly advantageous for deploying AI applications due to several key aspects:
Turnkey Deployment: Hugging Face Spaces allows developers to go from code to deployed web app seamlessly with minimal setup, which is particularly beneficial for showcasing machine learning models.
Built for Interaction: It is designed to create interactive applications that end users can engage with directly, unlike notebooks which are primarily for code development and exploration.
Continuous Integration: Integration with GitHub for continuous deployment allows developers to update their apps automatically with new commits, making it easier to maintain and update apps.
Community Support: Hugging Face has a large community focused on machine learning and AI, providing extensive resources, tutorials, and support for deploying AI models.
Model Accessibility: Direct access to a plethora of pre-trained models from the Hugging Face Model Hub enhances the functionality and breadth of applications one can build and deploy rapidly.
These features make Hugging Face Spaces particularly suited for developers looking to create and deploy interactive, user-facing AI applications with minimal hassle and robust community support.



Fully updated instructions with current GitHub PAT token authentication

Lab: Deploying an AI App with Hugging Face Spaces Using Streamlit and Gradio

## Introduction
In this lab, you will learn how to create and deploy a simple AI application using Hugging Face Spaces.
We will use both Streamlit and Gradio to demonstrate different ways to create interactive web applications.
This lab will also guide you through the latest GitHub authentication methods to ensure a smooth workflow.
## Prerequisites
- A Hugging Face account
- A GitHub account
- Git installed on your local machine
- Basic knowledge of Python
## Step-by-Step Instructions
### Step 1: Create Accounts
1. **Hugging Face Account**: Go to [Hugging Face](https://huggingface.co/), click "Sign Up", and follow the prompts to create an account.
2. **GitHub Account**: Go to [GitHub](https://github.com/), click "Sign up", and follow the prompts to create an account.
### Step 2: Install Git
1. Download and install Git from [Git-SCM](https://git-scm.com/downloads).
### Step 3: Configure Git with Your GitHub Credentials
```bash
git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
```
### Step 4: Authenticate with GitHub
#### Using Personal Access Tokens (PATs)
1. Generate a PAT from your GitHub settings:
   - Go to [GitHub Settings](https://github.com/settings/tokens).
   - Click "Generate new token".
   - Select the necessary scopes (e.g., `repo`, `read:org`).
   - Copy the generated token.
2. Use the PAT when prompted for a password during Git operations.
#### Using GitHub CLI
1. Install GitHub CLI from [GitHub CLI](https://cli.github.com/).
2. Authenticate with GitHub CLI:
```bash
gh auth login
# Follow the prompts to authenticate
```
#### Using SSH Keys
1. Generate an SSH key pair:
```bash
ssh-keygen -t ed25519 -C "your.email@example.com"
```
2. Add the SSH key to your GitHub account:
   - Copy the SSH key: `cat ~/.ssh/id_ed25519.pub`
   - Go to [GitHub SSH Keys](https://github.com/settings/keys).
   - Click "New SSH key" and paste the key.
### Step 5: Create a New GitHub Repository
1. Go to [GitHub New Repository](https://github.com/new).
2. Name your repository (e.g., "ai-app").
3. Choose "Public" and initialize with a README.
4. Click "Create repository".
### Step 6: Clone the Repository to Your Local Machine
```bash
git clone https://github.com/your-username/ai-app.git
cd ai-app
```
### Step 7: Create the Application Files
#### Streamlit App
**app.py:**
```python
import streamlit as st

def convert_to_uppercase(input_text):
    return input_text.upper()

st.title('Text Uppercaser')
user_input = st.text_area("Enter text to convert:", "Type here...")
if st.button('Convert'):
    result = convert_to_uppercase(user_input)
    st.subheader('Converted Text:')
    st.write(result)
```
**requirements.txt:**
```
streamlit
```
#### Gradio App
**app.py:**
```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/DialoGPT-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def chatbot(input_text):
    input_ids = tokenizer.encode(input_text + tokenizer.eos_token, return_tensors="pt")
    chat_history_ids = model.generate(
        input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
    )
    response = tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
    return response

iface = gr.Interface(fn=chatbot, inputs="text", outputs="text")
iface.launch()
```
**requirements.txt:**
```
transformers
torch
gradio
```
### Step 8: Commit and Push Your Changes to GitHub
```bash
git add .
git commit -m "Initial commit: AI application"
git push origin main
```
### Step 9: Deploy to Hugging Face Spaces
1. **Create a New Space**
   - Go to [Hugging Face Spaces](https://huggingface.co/spaces).
   - Click "Create new Space".
   - Choose a name for your Space (e.g., "ai-app").
   - Select "Streamlit" or "Gradio" as the SDK.
   - Choose "Public" visibility.
   - Click "Create Space".
2. **Link Your GitHub Repository**
   - In your new Space, go to the "Files" tab.
   - Click on "Add file" and select "Repository".
   - Choose your GitHub account and select the "ai-app" repository.
   - Click "Link repository".

### Step 10: Test and Share Your Application


1. **Test Your App**
   - Visit the URL provided by Hugging Face to see your app in action.
   - Test all functionalities to ensure everything works as expected.
2. **Share Your Space**
   - Copy the URL of your Space (e.g., `https://huggingface.co/spaces/your-username/ai-app`).
   - Share this URL with others, allowing them to interact with your application.


### Step 11: Create a Python Client (Optional)

You now have a model being served up on HuggingFace Spaces.
How to connect to your server-based model? With a client, of course!
**client.py:**
```python
import requests

def chat_with_bot(message):
    url = "https://your-username-ai-app.hf.space/api/predict"
    data = {"data": [message]}
    response = requests.post(url, json=data)
    return response.json()["data"][0]

# Test the client
while True:
    user_input = input("You: ")
    if user_input.lower() == 'quit':
        break
    response = chat_with_bot(user_input)
    print(f"Bot: {response}")
```
Replace `your-username` in the URL with your actual Hugging Face username. (The exact API route can vary with the Gradio version; check the "Use via API" link at the bottom of your Space for the current endpoint.)
**To Use This Client:**
1. Install the `requests` library:
```bash
pip install requests
```
2. Run the script:
```bash
python client.py
```
3. Type messages and see the bot's responses.
4. Type 'quit' to exit the program.
**Remember to Add `client.py` to Your GitHub Repository:**
```bash
git add client.py
git commit -m "Add Python client for chatbot API"
git push origin main
```
### Homework/Extension Activities
- Modify the chatbot to use a different pre-trained model (e.g., "microsoft/DialoGPT-medium" or "microsoft/DialoGPT-large").
- Add a feature to maintain conversation history.
- Implement error handling in both the Space app and the Python client.
- Create a web interface for your chatbot using Flask or Django.
- Explore other Hugging Face models and create a different type of application (e.g., text classification, named entity recognition).
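For the conversation-history extension, the bookkeeping can be sketched in a framework-agnostic way (the `ConversationHistory` class below is a hypothetical helper; plug the actual model call in where indicated):

```python
from collections import deque

class ConversationHistory:
    """Keep the last `max_turns` (user, bot) exchanges for context."""

    def __init__(self, max_turns=5):
        # deque with maxlen drops the oldest turn automatically
        self.turns = deque(maxlen=max_turns)

    def add(self, user_message, bot_reply):
        self.turns.append((user_message, bot_reply))

    def as_prompt(self):
        # Flatten the history into a single context string to prepend
        # to the next model input (the model call itself is omitted here).
        return " ".join(f"User: {u} Bot: {b}" for u, b in self.turns)

history = ConversationHistory(max_turns=2)
history.add("Hi", "Hello!")
history.add("How are you?", "Fine, thanks.")
history.add("Bye", "Goodbye!")  # the oldest turn is dropped automatically
print(history.as_prompt())
```

In the Gradio app, you would build the model's `input_ids` from `history.as_prompt()` plus the new message instead of the message alone.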
This lab workbook provides a comprehensive guide to creating and deploying a simple AI application using Hugging Face Spaces. Follow these steps to create your own functional AI application and share it with others.