
Building with Huggingface API Lab Instruction Document

Background Reference Materials:


Start by making an account at the Hugging Face website (https://huggingface.co/).
Hugging Face is a popular platform for building, training, and deploying state-of-the-art models, powered by open-source libraries and open data science models.
The platform provides a wide range of tools and APIs for natural language processing (NLP) and machine learning (ML) tasks, including the Hugging Face Transformers library, which is a powerful tool for working with pre-trained models and fine-tuning them on specific tasks.
To write Python AI programs using the Hugging Face API, you can use the Transformers library, which provides a wide range of pre-trained models for various NLP tasks, including text classification, question answering, and language generation. You can also fine-tune these models on your own data to improve their performance on specific tasks.
Additionally, Hugging Face provides several APIs and tools for working with ML models, including the Inference API, which allows you to serve your models directly from Hugging Face infrastructure and run large-scale NLP models in milliseconds with just a few lines of code.
You can also use Transformers Agents to generate images and read text out loud, and code-generation models such as CodeGen to generate code from natural language queries.
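As a quick illustration of the Inference API mentioned above, here is a minimal sketch using `requests` (the model name and token are placeholders; a fuller version of this script appears later in this document):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_..."}  # placeholder token

# Send a prompt to the hosted model and print the raw JSON response
response = requests.post(API_URL, headers=headers, json={"inputs": "Hello world"})
print(response.json())
```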

The Hugging Face API

The Hugging Face API is a powerful tool that allows you to interact with various models and datasets hosted on the Hugging Face Hub. It provides a simple and convenient way to access and utilize state-of-the-art machine learning models for a wide range of natural language processing (NLP) tasks.
The Hugging Face API offers several features and functionalities, including:
Inference API: The Inference API allows you to make predictions using pre-trained models hosted on the Hugging Face Hub. You can send requests to the API to perform tasks such as text generation, text classification, named entity recognition, and more.
Hosted Inference API: The Hosted Inference API provides support for third-party library models. This means that you can use models from other libraries, such as TensorFlow or PyTorch, with the Hugging Face API.
Hub API Endpoints: The Hub API Endpoints allow you to access information about specific models and datasets hosted on the Hugging Face Hub. You can retrieve model information, get model tags, and retrieve information about datasets.
To use the Hugging Face API, you will need to authenticate yourself by providing a personal access token in the Authorization header of your API requests.
This token can be obtained from your Hugging Face account.
It's important to note that the Hugging Face API offers different plans, including a free plan and enterprise plans with additional features and support.
The pricing and infrastructure details vary depending on the plan you choose.
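As a small, hedged sketch of the Hub API endpoints and token authentication described above (it assumes a recent version of the `huggingface_hub` package is installed; the token and model name are placeholders):

```python
from huggingface_hub import HfApi

# Placeholder token; public models can also be queried without one
api = HfApi(token="hf_...")

# Retrieve metadata about a model hosted on the Hub
info = api.model_info("bert-base-uncased")
print(info.modelId)  # the model's identifier
print(info.tags)     # tags such as library and task
```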

Activity A: Create an account on HuggingFace
Activity B: Get your API token key


To get a HuggingFace API token key, you need to follow these steps:

1. Create an account or log in to HuggingFace: Visit the HuggingFace website and either register for a new account or log in to your existing account.
2. Navigate to the Access Tokens section: Once you are logged in, go to your profile settings. In the settings, you will find a section for Access Tokens.
3. Generate a new token: In the Access Tokens section, click the "New token" button. You will be prompted to choose a name for your token. After naming it, click "Generate a token". It is recommended to keep the "Role" as read-only unless you need write access.
4. Copy your token: After generating the token, copy it to your clipboard by clicking the "Copy" button next to your newly created token.

Remember, your access token should be kept private. If you need to protect it in front-end applications, consider setting up a proxy server that stores the access token. You can delete and refresh User Access Tokens by clicking the "Manage" button.

Once you have your token, you can use it to authenticate your applications or notebooks to Hugging Face services. The token can be used in place of a password to access the Hugging Face Hub with git or with basic authentication, passed as a bearer token when calling the Inference API, or used in the Hugging Face Python libraries.
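For example, a minimal, hedged sketch of using the token from Python with the `huggingface_hub` library (the token value is a placeholder, and a recent library version is assumed):

```python
from huggingface_hub import login, InferenceClient

# Option 1: log in once so the Hugging Face libraries pick the token up automatically
login(token="hf_...")  # placeholder token

# Option 2: pass the token explicitly, e.g. as a bearer token for the Inference API
client = InferenceClient(token="hf_...")
print(client.text_generation("Hello, my name is", model="gpt2", max_new_tokens=20))
```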
A simple lab to build an artificial intelligence model using the Hugging Face API:
Step 1: Set up your environment
Make sure you have Python installed on your computer.
Make a directory for your first Hugging Face AI model.
Install the Hugging Face library by running pip install --upgrade transformers (use sudo / Run As Administrator, or add --user to install for your account only).
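To confirm the installation worked, you can run a quick sanity check (the version number will vary):

```python
# Quick check that the transformers package is importable
import transformers
print(transformers.__version__)
```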
Step 2: Choose a pre-trained model

Revised version of Lab Code:

from transformers import BertTokenizer, BertModel

access_token = "hf_..."  # your Hugging Face access token

# Load tokenizer and model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', token=access_token)
model = BertModel.from_pretrained('bert-base-uncased', token=access_token)

# Alternatively, using the Auto classes:
# tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', token=access_token)
# model = AutoModel.from_pretrained('bert-base-uncased', token=access_token)

text = "This is an example sentence."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)


Visit the Hugging Face Model Hub and select a pre-trained model that best suits your needs. For example, you can choose the "bert-base-uncased" model.
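If you prefer to browse programmatically, here is a small, hedged sketch (assuming the `huggingface_hub` package is installed; the task filter shown is just an example) that lists a few models for a given task:

```python
from huggingface_hub import list_models

# List a handful of Hub models tagged for a given task
for model in list_models(filter="text-classification", limit=5):
    print(model.modelId)
```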
Step 3: Load the pre-trained model
Start by importing the necessary libraries:
from transformers import AutoTokenizer, AutoModel

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')
Step 4: Prepare your data
Prepare your training data, which should be in a format suitable for your specific task. For example, if you are building a sentiment classifier, your data should be labeled with positive or negative sentiments.
Step 5: Tokenize your data
Tokenize your text data using the tokenizer:
text = "This is an example sentence."
inputs = tokenizer.encode_plus(text, add_special_tokens=True, return_tensors="pt")
Step 6: Interact with the model
Pass the tokenized inputs to the model to obtain the model's output:
outputs = model(input_ids=inputs['input_ids'], token_type_ids=inputs['token_type_ids'], attention_mask=inputs['attention_mask'])
Step 7: Interpret the model's output
The model's output will depend on the specific task. For example, if you are using a sentiment classifier, you can interpret the output using softmax to obtain the probabilities of each sentiment class.
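For instance, here is a minimal sketch of that interpretation step, assuming a checkpoint with a classification head such as `distilbert-base-uncased-finetuned-sst-2-english` (a different model than the plain BERT encoder loaded above, which only returns hidden states):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: a sentiment model with a classification head
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("This is an example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Softmax turns the raw logits into class probabilities
probs = torch.softmax(outputs.logits, dim=-1)
print(probs)                  # e.g. tensor([[negative_prob, positive_prob]])
print(model.config.id2label)  # maps class indices to human-readable labels
```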
Step 8: Fine-tuning (optional)
If you have labeled data for your specific task, you can fine-tune the pre-trained model on your dataset to further improve its performance.
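As a rough, hedged sketch only (it assumes the `datasets` library is installed; the dataset, subset sizes, and hyperparameters are illustrative choices, not part of the original lab):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Illustrative example: fine-tune BERT on a small slice of a sentiment dataset
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=8)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),  # small subset for the lab
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(200)),
)
trainer.train()
```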
That's it! This simple lab outlines the basic steps to build an artificial intelligence model using the Hugging Face API. Feel free to explore the Hugging Face documentation for more advanced usage and customization options.


Building a simple generative AI language model using Hugging Face's Transformers library is a straightforward task. In this guide, I'll walk you through the process of creating one.

### Prerequisites:
1. **Python**: Ensure you have Python (3.6 or later) installed.
2. **Hugging Face Transformers**: If you don't have it already, you'll need to install the `transformers` library.
### Step-by-step Guide:
1. **Installation**: If you don't have the required libraries, install them via `pip`:
```bash
pip install transformers
```
2. **Authentication**: For certain actions, such as pushing models to the Hugging Face Model Hub, you will need to authenticate. While I won't cover pushing models in this example, it's still good to be aware of the authentication process.
Set up your Hugging Face credentials:
- Sign up on the Hugging Face website and obtain your API token from the settings.
- Use the API token in your code or terminal like so:
```bash
export HUGGINGFACE_TOKEN=YOUR_API_TOKEN
```
Replace `YOUR_API_TOKEN` with your actual token.
3. **Write the Python Program**: Here's a simple program that loads a pretrained model and generates text.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def generate_text(prompt):
    # Load pre-trained model and tokenizer
    model_name = "gpt2-medium"  # You can change this to other model names if you prefer
    model = GPT2LMHeadModel.from_pretrained(model_name)
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)

    # Encode input prompt text
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Generate text from the model
    output = model.generate(input_ids, max_length=150, num_return_sequences=1,
                            no_repeat_ngram_size=2, early_stopping=True)

    # Decode the generated text
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)

    return generated_text

if __name__ == "__main__":
    user_prompt = input("Enter a prompt: ")
    print(generate_text(user_prompt))
```
This program uses the `gpt2-medium` model for demonstration. You can replace it with other models like `gpt2`, `gpt2-large`, etc. The function `generate_text` takes a user-provided prompt and returns a generated continuation.
4. **Run the Program**:
After writing the program, save it (e.g., `generate_text.py`) and then run it:
```bash
python generate_text.py
```
You can then provide a prompt and observe the generated continuation.
### Notes:
- This is a basic way to get started with Hugging Face's Transformers library.
- The provided code uses the GPT-2 model. There are many other models available in Hugging Face's Model Hub that you can explore and use.
- If you plan to generate text frequently or with larger models, consider using GPU acceleration; the Transformers library integrates seamlessly with PyTorch and TensorFlow, making it easy to leverage GPUs (see the sketch below).
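As a hedged illustration of the GPU note above (assuming PyTorch with CUDA is available; the smaller `gpt2` checkpoint is used here just for the example):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Pick the GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "gpt2"
model = GPT2LMHeadModel.from_pretrained(model_name).to(device)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)

# Move the encoded prompt to the same device as the model before generating
input_ids = tokenizer.encode("The quick brown fox", return_tensors="pt").to(device)
output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```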



Lab V2


Huggingface's Transformers library makes it easy to use pretrained language models. However, instead of directly using a username/password, Huggingface primarily relies on API tokens for authentication.
Here's a simple guide to setting up a Python program using the Huggingface Transformers library to interact with the GPT-2 model:
### Prerequisites:
1. Install the necessary libraries. You can do this using pip:
```bash
pip install transformers
pip install torch
```
2. Obtain your Huggingface API token:
- Sign up or log in on [Huggingface's website](https://huggingface.co/).
- Navigate to your profile settings.
- Find and copy your API token.
### Program:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Initialize the GPT-2 model and tokenizer
model_name = "gpt2-medium"
model = GPT2LMHeadModel.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)

def generate_text(prompt, max_length=50, temperature=0.7):
    """Generates text given a prompt using GPT-2.

    Parameters:
    - prompt (str): The starting text for generation.
    - max_length (int): The maximum length of the generated text.
    - temperature (float): Controls randomness. Lower is more deterministic.

    Returns:
    - str: Generated text.
    """
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(
        input_ids,
        max_length=max_length,
        temperature=temperature,
        pad_token_id=tokenizer.eos_token_id,
    )

    generated_text = tokenizer.decode(output[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
    return generated_text

if __name__ == "__main__":
    prompt = input("Enter your prompt: ")
    print(generate_text(prompt))
```
This program uses the `gpt2-medium` model. When you run the script, you can provide a starting prompt, and the GPT-2 model will generate a continuation of the prompt.
Note: If you're making many requests or using larger models, it's recommended to keep track of resource usage, especially memory.

Do students need to pay to purchase API access for this program?

Users don't necessarily need to pay to access the models via Huggingface's Transformers library for local use. The above code loads a pretrained model directly from Huggingface's public model repository and doesn't require any API access token or payment.
However, here are some points to note:
Downloading Models: When a model is loaded using the from_pretrained method (as in the provided code), it's downloaded from Huggingface's model hub. These downloads are free, but they might incur data charges based on the user's internet plan, especially for larger models like gpt2-large or gpt2-xl.
Huggingface Inference API: Huggingface does offer a separate inference API where you can send a request to their servers, and they'll return the model's output. This API might have costs associated with it, especially for high volume or frequent use.
Local Computation Resources: Running models, especially large ones, requires a good amount of computational resources. So, while the code might be free, executing it requires a capable machine, and using a GPU can speed up the inference time considerably. Some users may use cloud-based GPU services, which do have associated costs.
Storing Models: Once a model is downloaded, it's cached locally. This means future calls to from_pretrained for the same model won't redownload the model, saving time and data. However, models can take up a significant amount of storage space.
For typical classroom or personal projects, directly using the Transformers library to download and run models should suffice without incurring additional costs beyond potential data charges for the initial download.

Ramp up program 1:

Before using the Huggingface's API, you'll need to have the `transformers` and `requests` libraries installed. Here's how to install them:

```bash
pip install transformers requests
```
Now, let's create a simple Python script to make direct API calls to Huggingface for inference. This will bypass the local model loading and computation, directly utilizing Huggingface's cloud-based models.
1. **Obtain your Huggingface API token**:
- Sign up or log in on [Huggingface's website](https://huggingface.co/).
- Navigate to your profile settings.
- Find and copy your API token.
2. **Python Script**:
```python
import requests

def call_huggingface_api(prompt, model_name="gpt2-medium", token="YOUR_HUGGINGFACE_API_TOKEN"):
    """Makes an API call to Huggingface's model to generate text.

    Parameters:
    - prompt (str): The starting text for generation.
    - model_name (str): The model to use for generation.
    - token (str): Your Huggingface API token.

    Returns:
    - str: Generated text.
    """
    API_URL = f"https://api-inference.huggingface.co/models/{model_name}"
    headers = {"Authorization": f"Bearer {token}"}
    data = {"inputs": prompt}

    response = requests.post(API_URL, headers=headers, json=data)
    response.raise_for_status()
    output = response.json()

    return output[0]['generated_text']

if __name__ == "__main__":
    prompt = input("Enter your prompt: ")
    print(call_huggingface_api(prompt))
```
Replace `"YOUR_HUGGINGFACE_API_TOKEN"` with the token you got from Huggingface's website.
This script allows a user to enter a prompt and then makes an API call to generate a continuation of the text using the specified model.
Remember: Huggingface's Inference API isn't entirely free. While they provide some free tier usage, after a certain limit, charges might apply. Ensure students are aware of this if they make frequent or large requests.


Let's develop a Tic Tac Toe game where the player plays against an AI, driven by Huggingface's GPT model. The idea here isn't to make a perfect Tic Tac Toe player (GPT isn't optimized for such tasks) but to illustrate the process and interactivity.

1. Install the necessary libraries:
```bash
pip install transformers requests
```
2. Python Script:
```python
import random
import requests

def call_huggingface_api(prompt, model_name="gpt2-medium", token="YOUR_HUGGINGFACE_API_TOKEN"):
    headers = {"Authorization": f"Bearer {token}"}
    data = {"inputs": prompt}
    API_URL = f"https://api-inference.huggingface.co/models/{model_name}"
    response = requests.post(API_URL, headers=headers, json=data)
    response.raise_for_status()
    output = response.json()
    return output[0]['generated_text']

def display_board(board):
    for row in board:
        print("|".join(row))
        print("-" * 5)

def check_win(board):
    for row in board:
        if len(set(row)) == 1 and row[0] != ' ':
            return True
    for col in range(3):
        if board[0][col] == board[1][col] == board[2][col] and board[0][col] != ' ':
            return True
    if board[0][0] == board[1][1] == board[2][2] and board[0][0] != ' ':
        return True
    if board[0][2] == board[1][1] == board[2][0] and board[0][2] != ' ':
        return True
    return False

def tic_tac_toe():
    board = [[' ' for _ in range(3)] for _ in range(3)]
    player = 'X'
    for turn in range(9):
        display_board(board)
        if turn % 2 == 0:
            print("Your turn!")
            row, col = map(int, input("Enter row and column (0-2) separated by space: ").split())
            board[row][col] = player
        else:
            print("AI's turn!")
            prompt = f"Play Tic Tac Toe as O. Here's the board:\n\n{board}\n\nWhere should O play?"
            ai_move = call_huggingface_api(prompt)
            try:
                row, col = map(int, ai_move.split())
                board[row][col] = 'O'
            except ValueError:
                # If AI doesn't give a valid response, choose a random empty cell.
                empty_cells = [(i, j) for i in range(3) for j in range(3) if board[i][j] == ' ']
                row, col = random.choice(empty_cells)
                board[row][col] = 'O'
        if check_win(board):
            display_board(board)
            if turn % 2 == 0:
                print("You win!")
            else:
                print("AI wins!")
            return
        player = 'O' if player == 'X' else 'X'
    display_board(board)
    print("It's a draw!")

if __name__ == "__main__":
    tic_tac_toe()
```
Replace `"YOUR_HUGGINGFACE_API_TOKEN"` with your token.
This program defines a Tic Tac Toe game, where the player is 'X' and the AI is 'O'. It alternates turns between the player and the AI. For the AI's move, it sends the current board state to GPT-2 and asks for a move. If GPT-2 doesn't return a valid move (which is quite possible because it's a general-purpose model and not trained specifically for Tic Tac Toe), the program just selects a random empty cell for the AI's move.
Remember, this is a fun way to interact with a language model, but GPT-2 isn't optimized for playing Tic Tac Toe optimally. It's just a creative use of the model.



A trivia game where the player can challenge the AI on general knowledge sounds fun. In this lab, the player will be able to submit a trivia question and then see if the Huggingface-powered AI can answer it correctly.

1. Install the necessary libraries:
```bash
pip install transformers requests
```
2. Python Script:
```python
import requests

def call_huggingface_api(prompt, model_name="gpt2-medium", token="YOUR_HUGGINGFACE_API_TOKEN"):
    headers = {"Authorization": f"Bearer {token}"}
    data = {"inputs": prompt}
    API_URL = f"https://api-inference.huggingface.co/models/{model_name}"
    response = requests.post(API_URL, headers=headers, json=data)
    response.raise_for_status()
    output = response.json()
    return output[0]['generated_text']

def trivia_game():
    score_player = 0
    score_ai = 0
    rounds = int(input("How many rounds of trivia do you want to play? "))

    for i in range(rounds):
        print(f"\nRound {i+1}:\n{'-'*20}")
        question = input("Enter your trivia question: ")
        correct_answer = input("Enter the correct answer (please be precise): ")

        ai_answer = call_huggingface_api(question)
        print(f"AI's Answer: {ai_answer}")

        ai_correct = input("Is the AI's answer correct? (yes/no) ").lower()
        if ai_correct == "yes":
            score_ai += 1
        else:
            score_player += 1

        print(f"\nScores after Round {i+1}: You - {score_player}, AI - {score_ai}")

    print("\nFinal Scores:")
    print(f"You: {score_player}")
    print(f"AI: {score_ai}")
    if score_player > score_ai:
        print("Congratulations! You win!")
    elif score_ai > score_player:
        print("Looks like the AI won this time!")
    else:
        print("It's a draw!")

if __name__ == "__main__":
    trivia_game()
```
Replace `"YOUR_HUGGINGFACE_API_TOKEN"` with your token.
In this trivia game:
- The player will decide how many rounds they want to play.
- For each round:
  - The player submits a trivia question.
  - The player provides the correct answer.
  - The AI gives its answer to the question.
  - The player verifies if the AI's answer was correct.
  - Scores are tallied after each round.
- At the end, the game declares a winner based on the scores.
Remember, the effectiveness of the AI's answers depends on the clarity of the question and the accuracy of the data it was trained on. Some very niche or very recent questions might be outside its knowledge range.


Alright! This lab will turn the tables and let the Huggingface-powered AI quiz the player on AI Model Engineering topics. Please note that many questions here are based on the "Unified Model Engineering Process" by Professor Peter Sigurdson.


1. Install the necessary libraries:
```bash
pip install transformers requests
```
2. Python Script:
```python
import requests

def call_huggingface_api(prompt, model_name="gpt2-medium", token="YOUR_HUGGINGFACE_API_TOKEN"):
    headers = {"Authorization": f"Bearer {token}"}
    data = {"inputs": prompt}
    API_URL = f"https://api-inference.huggingface.co/models/{model_name}"
    response = requests.post(API_URL, headers=headers, json=data)
    response.raise_for_status()
    output = response.json()
    return output[0]['generated_text']

def ai_quiz_game():
    score = 0
    rounds = int(input("How many rounds of AI trivia do you want to play? "))

    for i in range(rounds):
        print(f"\nRound {i+1}:\n{'-'*20}")

        # AI crafts a question based on a prompt
        ai_question = call_huggingface_api("Ask a trivia question related to AI Model Engineering.")
        print(f"Question: {ai_question}")

        player_answer = input("Your Answer: ")

        # AI evaluates the player's answer
        ai_evaluation_prompt = f"Is this answer correct for the question '{ai_question}': {player_answer}?"
        ai_evaluation = call_huggingface_api(ai_evaluation_prompt)

        # A simple mechanism to decide if AI considers the answer correct or not.
        if "yes" in ai_evaluation.lower():
            print("Correct!")
            score += 1
        else:
            print("Incorrect according to the AI!")
            print(f"AI's Explanation: {ai_evaluation}")

    print(f"\nYour total score: {score}/{rounds}")

if __name__ == "__main__":
    ai_quiz_game()
```
Replace `"YOUR_HUGGINGFACE_API_TOKEN"` with your token.
In this trivia game:
- The player decides how many rounds they wish to play.
- For each round:
  - The AI poses a trivia question based on AI Model Engineering.
  - The player provides their answer.
  - The AI evaluates the player's answer and decides if it's correct.
- The player's score is tallied up at the end.
Bear in mind that this game hinges on the AI's ability to both ask relevant questions and evaluate answers. Given that AI, especially the GPT models, doesn't have a perfect understanding of the nuances of every topic, there might be occasional inaccuracies. This makes the game a fun, playful competition rather than a rigorous quiz.


Phase 1
Here is a simple workflow lab to build a generative AI model using the Hugging Face API and Hugging Face Spaces in Python:
## Step 1: Set Up Your Environment
First, you need to set up your Python environment. You can do this using a Python virtual environment or a Google Colab notebook. All the libraries that you'll need for this lab are available as Python packages[4].

## Step 2: Get Your API Token
To use the Hugging Face API, you need to register or log in to Hugging Face and get a User Access or API token in your Hugging Face profile settings[10].

## Step 3: Choose Your Model
Next, choose the model you want to use. You can select a model from the Hugging Face Model Hub. If you're unsure where to start, you can check the recommended models for each ML task available, or the Tasks overview[10].

## Step 4: Run Inference with API Requests
You can run inference on your chosen model using the Hugging Face Inference API. The `InferenceClient` is a Python client that makes HTTP calls to the Hugging Face APIs[1].
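For illustration, a minimal sketch using `InferenceClient` (assuming a recent `huggingface_hub` version; the model and task shown are examples, and the token is a placeholder):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # placeholder token

# Example: run a text-classification model through the Inference API
result = client.text_classification(
    "I really enjoyed this lab!",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(result)  # list of labels with scores
```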
## Step 5: Create a Hugging Face Space
You can host your Python app directly in Spaces. Select Gradio as your SDK for a quick interface, or use Docker Spaces to run your own Python + frontend stack, serving on port 7860[3].
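A minimal `app.py` sketch for a Gradio Space (illustrative only; the pipeline task and model are assumptions, and a `requirements.txt` listing `transformers` and `torch` would accompany it):

```python
import gradio as gr
from transformers import pipeline

# Illustrative pipeline; swap in whichever model your Space should serve
generator = pipeline("text-generation", model="gpt2")

def generate(prompt):
    return generator(prompt, max_length=100)[0]["generated_text"]

# Gradio serves the interface; Spaces picks it up automatically when the SDK is Gradio
demo = gr.Interface(fn=generate, inputs="text", outputs="text", title="Text Generator")

if __name__ == "__main__":
    demo.launch()
```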
## Step 6: Handle Spaces Dependencies
The default Spaces environment comes with several pre-installed dependencies. If you need other Python packages to run your app, add them to a `requirements.txt` file at the root of the repository. The Spaces runtime engine will create a custom environment on-the-fly[6].

## Step 7: Deploy Your Model
Finally, you can deploy your model to production using the Hugging Face Inference Endpoints. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice[1].
This is a basic workflow for building a generative AI model using the Hugging Face API and Hugging Face Spaces. Depending on your specific needs, you may need to adjust or add steps.
Citations:
[1] https://huggingface.co/docs/huggingface_hub/guides/inference
[2] https://huggingface.co/blog/3d-assets
[3] https://huggingface.co/docs/hub/spaces-sdks-python
[4] https://huggingface.co/course/chapter0
[5] https://huggingface.co
[6] https://huggingface.co/docs/hub/spaces-dependencies
[7] https://huggingface.co/docs/huggingface_hub/package_reference/hf_api
[8] https://huggingface.co/blog/document-ai
[9] https://docs.argilla.io/en/latest/getting_started/installation/deployments/huggingface-spaces.html
[10] https://huggingface.co/docs/api-inference/quicktour
[11] https://www.turintech.ai/hugging-face-on-evoml-build-custom-generative-ai-models-top-use-cases-for-financial-services/
[12] https://youtube.com/watch?v=zluqrm5gnb4
[13] https://github.com/huggingface/hfapi
[14] https://huggingface.co/docs/hub/model-card-landscape-analysis
[15] https://pypi.org/project/spaces/
[16] https://youtube.com/watch?v=0RJfKJEXUDg
[17] https://youtube.com/watch?v=If19gJKdURk
[18] https://www.gradio.app/guides/using-hugging-face-integrations
[19] https://youtube.com/watch?v=XMYlqm2Dq1w
[20] https://aws.amazon.com/blogs/machine-learning/aws-and-hugging-face-collaborate-to-make-generative-ai-more-accessible-and-cost-efficient/
[21] https://www.docker.com/blog/build-machine-learning-apps-with-hugging-faces-docker-spaces/
[22] https://thedatascientist.com/getting-started-hugging-face-a-machine-learning-tutorial-python/
[23] https://youtube.com/watch?v=CV6UagCYo4c