Hugging Face Spaces


The Back Story:

The Transformer: important to understand, because Transformers are the engine that drives these AI models.
What a Transformer is in the context of ChatGPT and Hugging Face's API:
Today we're going to talk about one of the most important components of ChatGPT and Hugging Face's API: the Transformer.
So, what is a Transformer? Simply put, it's a type of neural network architecture that's particularly well-suited for natural language processing tasks.
But why is it so important for ChatGPT and Hugging Face's API?
Well, let's dive into that. First, a little background. Natural language processing (NLP) is a field of computer science that deals with how computers interact with human language.
NLP is a really hard problem because human language is incredibly complex and nuanced.
We want our language models to be context-sensitive ("nuanced") and to display emotional empathy toward their human conversational partner.
For example, just think about all the different ways you can say "I love you" - there's the plain old "I love you," but then there's also "I totally adore you,"
"You're the best thing since sliced bread," and countless other variations.
This complexity makes it really difficult for computers to understand and process human language.
Traditional rule-based approaches to NLP rely on hand-coded rules to try to capture all the different ways humans communicate, but these rules quickly become overwhelmed by the sheer number of possible combinations.
That's where deep learning comes in. Deep learning models are trained on vast amounts of data to learn patterns and relationships that would be impossible to capture with hand-coded rules.
And within deep learning, the Transformer is a particularly powerful architecture for NLP tasks.
The Transformer was introduced in the 2017 research paper "Attention Is All You Need" by Vaswani et al. and has since become one of the most widely used model architectures in NLP.
So what makes it so special?
Well, first of all, the Transformer doesn't use any recurrence or convolution.
That might sound a bit technical, but basically it means that the model processes input sequences of tokens (e.g., words or characters) in parallel, rather than sequentially.
This allows it to handle long-range dependencies (think of them as long conversational memories) much more effectively than previous architectures.
In other words, the Transformer can "see" the entire input sequence at once, rather than having to process it one step at a time, which extends the range and span of the conversational memory.
This allows it to capture complex contextual relationships between tokens, which is essential for tasks like machine translation, question answering, and of course, chatbots!
(Some related topics we will develop for the project include the operation of ANNs and GANs: ANNs are Artificial Neural Networks; CNNs are Convolutional Neural Networks; and GANs are Generative Adversarial Networks, in which two AI agents compete against each other. Remember the demonstration of Susan's Perfect Birthday Party: https://chat.openai.com/c/8f3d3d37-5a72-4df6-a537-0d5d23dd0ed8)
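To make "sees the entire input at once" concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core Transformer operation. Everything here (the tiny dimensions, the random weights) is an illustrative assumption, not a real model:

```python
# Minimal, illustrative sketch of scaled dot-product self-attention.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) embeddings for the WHOLE sequence.
    Every token attends to every other token in one matrix product,
    which is why the Transformer 'sees' the entire input at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) pairwise relevance
    weights = softmax(scores, axis=-1)       # attention over all positions at once
    return weights @ V                       # context-mixed token representations

rng = np.random.default_rng(0)
seq_len, d = 5, 8                            # 5 toy tokens, 8-dim embeddings (assumptions)
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8)
```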
So how does this relate to ChatGPT and Hugging Face's API?
Well, both of them rely heavily on Transformers to power their natural language processing capabilities.
In fact, ChatGPT uses a variant of the Transformer called GPT (Generative Pre-trained Transformer) to generate text; a related architecture you will hear about is BERT (Bidirectional Encoder Representations from Transformers).
Both are pre-trained Transformer models trained on massive corpora of text data to learn high-level semantic (= meaning) and syntactic (syntax means proper grammar formulation) features of language.
When you ask ChatGPT a question or give it a prompt, it uses its Transformer to generate a response that's not just a random collection of words, but actually makes sense (is context-nuanced) within the conversation.
Similarly, Hugging Face's API gives us access to a variety of Transformer-based models that provide a range of NLP services, including text classification, sentiment analysis, named entity recognition, and more.
HF's models are also pre-trained on large datasets and can be fine-tuned (further trained on task-specific data) for specific tasks to achieve state-of-the-art results.
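For instance, several of those services are exposed through the transformers library's pipeline API. A quick, hedged sketch (the default model checkpoints are chosen by the library and may change between versions):

```python
# Hedged sketch of Hugging Face's `transformers` pipeline API.
from transformers import pipeline

# Text classification / sentiment analysis
classifier = pipeline("sentiment-analysis")
print(classifier("I totally adore Transformers!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Named entity recognition
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```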
Two things we will get from the Hugging Face Spaces Lab APIs (see the sketch below the list):
- Access to language models (ChatGPT, Claude by Anthropic, Baby Llama)
- Access to Transformers
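As a hedged illustration of programmatic access to a hosted language model, here is a call to the Hugging Face Inference API over HTTP. The gpt2 model id and the token value are placeholder assumptions; substitute any hosted model you have access to:

```python
# Hedged sketch: querying a hosted model via the HF Inference API.
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"  # placeholder model
HF_TOKEN = "hf_..."  # your access token; keep it out of committed code

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"inputs": "Once upon a time"},
)
print(response.json())  # generated continuation(s) as JSON
```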
Hugging Face Spaces, often referred to as HuggingSpace, is a platform where artificial intelligence developers or enthusiasts can create, host, and share their machine learning applications with ease. It is part of the Hugging Face ecosystem, a well-recognized entity in the AI community for hosting models and datasets.
The primary use-cases of Hugging Face Spaces include:
Building Portfolios: You can display your work in Natural Language Processing, Computer Vision, or Audio models by creating interactive demos on this platform.
Team Collaboration: With version control and Git-based workflows, your team can collaborate to create applications showcasing your models.
Showcasing Work: You can make an interactive demo of your models for others to try out. This is an effective way of showcasing your models to the broader online community.
Moreover, there is no restriction on how many Spaces one can create on the platform, be it Streamlit, Gradio, or Static apps. Also, there's a provision for secret management, so you don't have to hardcode tokens, keys, etc., in your app. The free environment for running these apps is currently limited to 16GB RAM and 2 CPU cores, but one can upgrade a Space to use a GPU.
In terms of libraries, you can use popular Python libraries like Streamlit and Gradio to create powerful Python ML apps for data visualization, model demos, and more.
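As one hedged example, here is roughly the smallest Gradio app you could host in a Space, saved as app.py in the Space repository. The greet function is a stand-in assumption; a real demo would call a model here:

```python
# Minimal Gradio demo of the kind a Space can host (app.py).
import gradio as gr

def greet(name: str) -> str:
    # Placeholder logic; a real Space would run a model here.
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```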
Moreover, the Hugging Face community has created over 6.5k spaces that one can explore for ideas, learning, or collaboration.
You also get official documentation support for any help you might require.
Hugging Face Spaces is a platform that allows users to host machine learning (ML) demo apps directly on their profile or their organization's profile. This feature enables users to create an ML portfolio, showcase projects at conferences or to stakeholders, and work collaboratively with others in the ML ecosystem.
Spaces are configured through a YAML block at the top of the README.md file at the root of the repository. This configuration includes parameters such as the title of the Space, whether the Space stays on top of your profile, whether a connected OAuth app is associated with this Space, and whether the Space iframe can be embedded in other websites.
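As a hedged illustration, that YAML block at the top of README.md might look like the following. The keys are from the Spaces configuration reference; every value here is a placeholder:

```yaml
---
title: My Demo Space
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: gradio          # or streamlit, docker, static
sdk_version: 4.0.0   # illustrative version number
app_file: app.py
pinned: true         # keep this Space on top of your profile
---
```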
Hugging Face Spaces supports several SDKs including Streamlit, Gradio, Docker, and static HTML. This allows users to build cool apps in Python in a matter of minutes. Users can also unlock the whole power of Docker and host an arbitrary Dockerfile.
Spaces are Git repositories, meaning that you can work on your Space incrementally (and collaboratively) by pushing commits. Each time a new commit is pushed, the Space will automatically rebuild and restart.
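For example, a file can be pushed with ordinary git commands, or, as sketched here, through the huggingface_hub Python library; the repo id is a placeholder:

```python
# Hedged sketch: pushing a file to a Space repo with huggingface_hub
# (equivalent to a git add/commit/push).
from huggingface_hub import HfApi

api = HfApi()  # picks up your token from `huggingface-cli login`
api.upload_file(
    path_or_fileobj="app.py",
    path_in_repo="app.py",
    repo_id="your-username/my-space",  # placeholder
    repo_type="space",                 # target a Space, not a model repo
)
# Each upload creates a commit; the Space then rebuilds and restarts.
```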
Hugging Face Spaces also provides the ability to embed your Space in another website. This allows your audience to interact with your work and demonstrations without requiring any setup on their side.
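A hedged sketch of such an embed. Spaces are served from a subdomain following the pattern <owner>-<space-name>.hf.space; the name below is a placeholder:

```html
<!-- Embedding a Space in another web page (placeholder subdomain). -->
<iframe
  src="https://your-username-my-space.hf.space"
  width="850"
  height="450"
  frameborder="0">
</iframe>
```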
Hugging Face also provides open API endpoints that you can use to retrieve information from the Hub as well as perform certain actions such as creating model, dataset, or Space repos.
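A brief, hedged sketch using the huggingface_hub library, which wraps those endpoints; the repo id is a placeholder:

```python
# Hedged sketch: querying the Hub and creating a Space repo programmatically.
from huggingface_hub import HfApi

api = HfApi()

# Retrieve information from the Hub
for model in api.list_models(search="sentiment", limit=3):
    print(model.id)

# Create a new Space repo
api.create_repo(
    repo_id="your-username/new-space",  # placeholder
    repo_type="space",
    space_sdk="gradio",                 # required when creating a Space
)
```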
In terms of resources, each Spaces environment is limited to 16GB RAM, 2 CPU cores, and 50GB of (not persistent) disk space by default, which you can use free of charge. You can upgrade to better hardware, including a variety of GPU accelerators and persistent storage, for a fee.
As for the term "labs", it could refer to the profiles of various organizations or research labs on Hugging Face, such as CS Lab or Hooshvare Research Lab. These labs use Hugging Face for their research and development activities in the field of AI and machine learning.

Activity A: Creating a Space on Hugging Face involves several steps:

1. **Create an Account**: To create a Space, you first need to have a Hugging Face account. You can sign up for free on the Hugging Face website.
2. **Create a New Space**: Once you have an account, visit the Spaces main page and click on "Create new Space"[1]. You will be prompted to choose a name for your Space, select an optional license, and set your Space’s visibility[1].
3. **Choose the SDK**: During the creation process, you will be asked to choose the Software Development Kit (SDK) for your Space. Hugging Face offers four SDK options: Gradio, Streamlit, Docker, and static HTML[1]. The SDK you choose will determine the structure and capabilities of your Space. For example, if you choose Streamlit as your SDK, your Space will be initialized with the latest version of Streamlit[2]. If you choose Docker, your Space will accommodate custom Docker containers for apps outside the scope of Streamlit and Gradio[6]. If you choose static HTML, you can place your HTML code within an index.html file[14].
4. **Configure Your Space**: After creating your Space, you can configure its appearance and other settings inside a YAML block ("YAML Ain't Markup Language", a human-readable format commonly used for configuration and build-deployment settings; an example block is sketched earlier in this doc). The YAML configuration block sits at the top of the README.md file at the root of the repository[3]. It includes parameters such as the title of the Space, whether the Space stays on top of your profile, whether a connected OAuth app is associated with this Space, and whether the Space iframe can be embedded in other websites[3].
5. **Add Files to Your Space**: Spaces are Git repositories, meaning that you can work on your Space incrementally (and collaboratively) by pushing commits. Each time a new commit is pushed, the Space will automatically rebuild and restart[1]. You can follow the same flow as in "Getting Started with Repositories" to add files to your Space[1].
6. **Manage Your Space**: You can manage your Space runtime (secrets, hardware, and storage) using huggingface_hub. This includes configuring secrets and hardware, upgrading the hardware to run on GPUs, and setting a timeout for your Space[18] (see the sketch after this list).
Remember, each Spaces environment is limited to 16GB RAM, 2 CPU cores, and 50GB of (not persistent) disk space by default, which you can use free of charge. You can upgrade to better hardware, including a variety of GPU accelerators and persistent storage, for a fee[1].
For step-by-step tutorials on creating your first Space, you can refer to the guides provided by Hugging Face[1].
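A hedged end-to-end sketch for step 6, managing a Space with huggingface_hub. The repo id, secret value, and hardware tier are placeholder assumptions; the available hardware tiers are listed in the Spaces docs:

```python
# Hedged sketch: managing a Space's secrets and hardware via huggingface_hub.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "your-username/my-space"  # placeholder

# Store a secret so it never has to be hardcoded in app.py
api.add_space_secret(repo_id=repo_id, key="HF_TOKEN", value="hf_...")

# Request a hardware upgrade (paid tier; "t4-small" is an illustrative choice)
api.request_space_hardware(repo_id=repo_id, hardware="t4-small")
```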
Citations:
[1] https://huggingface.co/docs/hub/spaces-overview
[2] https://huggingface.co/docs/hub/spaces-sdks-streamlit
[3] https://huggingface.co/docs/hub/spaces-settings
[4] https://drlee.io/huggingface-spaces-a-beginners-guide-to-creating-your-first-space-for-data-science-935d79a4a37b
[5] https://huggingface.co/docs/hub/spaces
[6] https://huggingface.co/docs/hub/spaces-sdks-docker
[7] https://huggingface.co/docs/hub/spaces-github-actions
[8] https://youtube.com/watch?v=2FZBWX78MTc
[9] https://huggingface.co/docs/hub/spaces-more-ways-to-create
[10] https://huggingface.co/docs/hub/spaces-config-reference
[11] https://huggingface.co/docs/transformers/main_classes/configuration
[12] https://www.tanishq.ai/blog/gradio_hf_spaces_tutorial/
[13] https://huggingface.co/spaces
[14] https://huggingface.co/docs/hub/spaces-sdks-static
[15] https://huggingface.co/spaces/tddschn/yaml-parser
[16] https://docs.argilla.io/en/latest/getting_started/installation/deployments/huggingface-spaces.html
[17] https://discuss.huggingface.co/t/how-to-use-a-privately-hosted-model-for-a-space/16366
[18] https://huggingface.co/docs/huggingface_hub/guides/manage-spaces
[19] https://huggingface.co/spaces/launch
[20] https://huggingface.co/docs/hub/spaces-sdks-gradio