
Creating a Simple AI Language Model on Hugging Face Spaces

The two toolsets and workflows for building your AI/ML MODEL:

Google Colab Notebooks:

→ A code-writing tool that gives you lots of good support for smithing your AI code.
→ What Colab does NOT do: it does not provide a model-serving environment for other people to connect to your model and use it.

HuggingFace Spaces

HFS does not offer a code-smithing environment:
That is because HFS aims to be a professional-grade project environment, which MEANS:
They want you to do CI/CD.
Continuous Deployment MEANS we run a GitHub pipeline.
HFS is a MODEL-serving environment: you create your PyTorch tensor file (your ML model) and deploy it in a format whose URL you can circulate so other people can connect to and use your model.



You will deliver your Assignment using a Google Colab Notebook.

Deliver your Project using Hugging Face Spaces.
You will deliver your Project by creating a CI/CD pipeline on GitHub: I will interact with the teams by opening GitHub Issues on your code, and you will respond to those Issues by making GitHub Actions.


Checklist of to-do items to get set up:


Set up an account on Hugging Face Spaces
Set up an account on GitHub

You should have the git command-line interface (CLI) installed on your operating system:


Hugging Face Spaces is a hosting platform from Hugging Face, a company that specializes in natural language processing (NLP) and machine learning (ML) applications.

Spaces provides an easy-to-use way to deploy and share ML apps, and it integrates with other Hugging Face tools and platforms, such as the Transformers library and the Hub.

With Spaces, users can serve apps built on popular NLP and ML frameworks, such as TensorFlow, PyTorch, and Scikit-Learn, and easily share and deploy models so that anyone can reach them through a URL.


Lab outcome:

Creating a simple AI language model on Hugging Face Spaces involves several steps:
1. Setting up your environment
2. Preparing your data
3. Building and training the model
4. Deploying it on Hugging Face Spaces.
Here's a step-by-step guide:
When you install the necessary libraries and set up your environment on your local machine, you are preparing your development environment to build and test your AI language model locally.

Here's how the connection between your local machine and Hugging Face servers works throughout the process:

Local Development
Set Up Your Environment Locally: (Open a Command Terminal and make a directory)

You install libraries like transformers and datasets on your local machine.

You can then use these libraries to develop and train your AI models locally.

Interacting with Hugging Face Hub
Download Pre-trained Models and Datasets:

When you use commands like GPT2LMHeadModel.from_pretrained('gpt2')
or
load_dataset('wikitext', 'wikitext-2-raw-v1'),

your local environment connects to Hugging Face servers to download pre-trained models and datasets.

Training and Fine-tuning Models Locally:

You train or fine-tune the models on your local machine using your datasets and compute resources.
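Before training, the tokenized dataset is typically concatenated and split into fixed-length blocks for causal language modeling. A minimal, library-free sketch of that grouping step (the function name and block size are illustrative; the `datasets` library does this with a `map` over token lists):

```python
def group_into_blocks(token_ids, block_size):
    """Concatenate token ids and split them into fixed-length training blocks.

    Trailing tokens that do not fill a whole block are dropped, mirroring
    the common preprocessing used for causal LM fine-tuning.
    """
    total = (len(token_ids) // block_size) * block_size
    return [token_ids[i:i + block_size] for i in range(0, total, block_size)]

# Example: 10 token ids grouped into blocks of 4 -> two full blocks, 2 dropped
blocks = group_into_blocks(list(range(10)), 4)
print(blocks)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```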

Pushing to Hugging Face Hub

Upload Trained Models:
After training, you can push your trained model to Hugging Face Hub using commands like model.push_to_hub('your_model_name').

This uploads your model from your local machine to the Hugging Face server, making it available in the cloud.

Deployment on Hugging Face Spaces {Using HuggingFace Spaces as a Model Server}

Create and Deploy Space:

You create a new Space on Hugging Face, which is essentially a cloud environment provided by Hugging Face for hosting your applications.

You clone this Space repository to your local machine, add your code (e.g., a Gradio or Streamlit app), and push it back to Hugging Face.

Hugging Face then hosts your application in the cloud, making it accessible to others via a URL you provide to them.

Summary of Connections:

Downloading Models/Data:

Your local machine connects to Hugging Face servers to download pre-trained models and datasets.

Pushing Models:

Your local machine uploads trained models to Hugging Face servers.

Deploying Applications:
You push your application code to Hugging Face Spaces, and it gets hosted in the cloud.

Example Workflow {start by opening a Command Terminal and making a Directory}

Install Libraries Locally:

```bash
pip install transformers datasets
```

Download Model/Data:

Make a Python source code file containing this code:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from datasets import load_dataset

model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
dataset = load_dataset('wikitext', 'wikitext-2-raw-v1')
```

Train Model Locally:

```python
# Your training code here
```
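Where the placeholder says "Your training code here", a minimal manual fine-tuning loop might look like the following sketch. The tiny randomly initialized config and the random batch of token ids are stand-ins so the sketch runs without any downloads; a real run would start from the pretrained 'gpt2' weights and feed batches of tokenized wikitext:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# A deliberately tiny, randomly initialized GPT-2 so the sketch runs quickly
# without downloading weights; real fine-tuning would load 'gpt2' instead.
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, vocab_size=1000, n_positions=64)
model = GPT2LMHeadModel(config)

# Stand-in batch of token ids; in practice these come from the tokenizer.
input_ids = torch.randint(0, config.vocab_size, (4, 32))

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
model.train()
for step in range(3):
    # For causal LM, labels are the inputs; the model shifts them internally.
    outputs = model(input_ids=input_ids, labels=input_ids)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {outputs.loss.item():.3f}")
```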
Push Model to Hub (you must be authenticated to the Hub first, e.g. via `huggingface-cli login`):

```python
model.push_to_hub('your_model_name')
tokenizer.push_to_hub('your_model_name')
```
Create and Deploy Space:

Clone the Space repository, add your code, and push it to Hugging Face:
```bash
git clone https://huggingface.co/spaces/your_space_name
cd your_space_name
# Add your code
git add .
git commit -m "Initial commit"
git push
```
By following these steps, you effectively connect your local development environment to Hugging Face's cloud services, enabling you to develop, train, and deploy AI models efficiently.


Creating a Virtual Environment and Setting Up Libraries

Creating a virtual environment is a best practice when developing a project to ensure that dependencies are managed cleanly.
Here’s a detailed guide for creating a virtual environment, installing necessary libraries, and setting up your project:

Step-by-Step Instructions

Step 1: Set Up a Virtual Environment

Open a Command Terminal:
On Windows: Open Command Prompt or PowerShell.
On macOS: Open Terminal.
On Linux: Open Terminal.
Make a Project Directory:
Navigate to the directory where you want to create your project folder.

mkdir huggingface-language-model
cd huggingface-language-model

Create a Virtual Environment:
Use venv to create a virtual environment.

```bash
python -m venv venv
```

Activate the Virtual Environment:
On Windows:

```bash
venv\Scripts\activate
```

On macOS/Linux:

```bash
source venv/bin/activate
```

Step 2: Install Necessary Libraries

Install Libraries:
With the virtual environment activated, install the required libraries.

```bash
pip install transformers gradio datasets
```

Step 3: Set Up Your Project Files

Create the Main Script (app.py):
Inside your project directory, create a file named app.py and add the following code:

```python
import gradio as gr
from transformers import pipeline

# Load the pre-trained GPT-2 text-generation pipeline
generator = pipeline('text-generation', model='gpt2')

def generate_text(prompt):
    response = generator(prompt, max_length=50, num_return_sequences=1)
    return response[0]['generated_text']

interface = gr.Interface(fn=generate_text, inputs="text", outputs="text")
interface.launch()
```
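Conceptually, what the text-generation pipeline does is repeat one step, pick a likely next token and append it to the running text, until a length limit is hit. A toy, library-free sketch of that greedy loop (the bigram table and names are invented for illustration; a real model scores the entire vocabulary with a neural network at each step):

```python
# Toy greedy "language model": a hand-written bigram table standing in for
# the neural network that scores next tokens in a real model.
NEXT_WORD = {
    "hello": "world",
    "world": "peace",
    "the": "model",
    "model": "generates",
    "generates": "text",
}

def greedy_generate(prompt, max_length=5):
    """Repeatedly append the most likely next word, up to max_length words."""
    words = prompt.lower().split()
    while len(words) < max_length:
        nxt = NEXT_WORD.get(words[-1])
        if nxt is None:  # no known continuation: stop early
            break
        words.append(nxt)
    return " ".join(words)

print(greedy_generate("hello"))  # hello world peace
```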

Create a Requirements File (requirements.txt):
List the dependencies required for your project (app.py imports transformers, so include it here too):

transformers
gradio
datasets

Create a README File (README.md):
Provide a brief description of your project:
```markdown
# Hugging Face Language Model
This project contains a simple AI language model using GPT-2, deployed with Gradio.
```

Step 4: Use GitHub for Version Control

Initialize Git Repository:
Inside your project directory, initialize a Git repository.

git init

Create a GitHub Repository:
Go to github.com and sign in to your account.
Click on the + icon in the top right corner and select New repository.
Name your repository (e.g., huggingface-language-model), add a description, and set it to Public.
Click Create repository.
Add Remote Repository:
Add the GitHub repository as a remote to your local repository.
```bash
git remote add origin https://github.com/ProfessorBrownBear/huggingface-language-model.git
```

Commit and Push Your Changes:
Stage all the changes:
```bash
git add .
```

Commit the changes with a message:

```bash
git commit -m "Initial commit with Gradio app and dependencies"
```

Push the changes to GitHub:
```bash
git push -u origin main
```

Step 5: Create a Space on Hugging Face

Navigate to Hugging Face Spaces:
Go to huggingface.co/spaces and log in to your account.
Create a New Space:
Click on the Create new Space button.
Fill in the details:
Name: huggingface-language-model
Type: Gradio
Hardware: CPU (GPU if needed)
Visibility: Public or Private
Click on Create Space.
Clone the Space Repository to Your Local Machine:
Copy the URL of your new Space repository.
Clone the repository:
```bash
git clone https://huggingface.co/spaces/your-username/huggingface-language-model-space.git
cd huggingface-language-model-space
```

Copy Project Files to the Space Repository: