
s24 AML3304 Assignment Instructions: Building the Word Embedding for your AI MODEL

How to deliver this Assignment:
Work by yourself, or in teams of up to 4 people.


Assignment

Task:
Create and train your own word embedding model using some text corpus.
Build a language model around your embeddings and train it.
Generate text using your trained model and submit the generated text along with the code.

Your primary tasks for this Assignment are:
Get it working: deliver a professionally presented Presentation you could show to an Employer.
Do the research questions.
Take it further: Explore and experiment with additional applications.
Try various libraries and do performance comparisons.

Making the TRELLO Board:

Here is a sample link to the Instructor’s Trello Board:
Watch this video to see how to set up your Assignment Delivery infrastructure:

Making the TEXT FILE for your Assignment Team:

When done: Upload your Text file to:
Make a text file: Name it as teamname.txt
Into this text file, put: team members’ names, student ids, email addresses.
Include in the TEXT File your TRELLO Board Address:
Put all team members as members of the Trello Board, and add the instructor as an editing member of the board.
Everything else will go into TRELLO:
Edit link to the Google Colab Notebook. (GCN: add as an editing member.)
In your Trello Board, make one swimlane with one card per team member.
I will provide some research questions for you to answer; present your answers in your TRELLO Board.

1 Text file, 1 Trello Board, 1 GCN for the entire team.

ONE team member should act as the Team Librarian and do this for the team.

Research Questions: Research and present what you learned. This is part of your Assignment:

Present the Answers in your TRELLO Board.

5 research questions related to AI model building that students can explore as part of their assignment:

Exploring the Impact of Different Word Embedding Techniques on Model Performance
Research various word embedding techniques such as:
Word2Vec
GloVe
FastText
BERT.
Compare and contrast their approaches, advantages, and limitations.
Investigate how the choice of embedding technique affects the performance of AI language models in different NLP tasks.
Catalog and describe what those various tasks are.

Evaluating the Effectiveness of Transfer Learning in AI Language Models
Study the concept of transfer learning in the context of AI language models.
Analyze how pre-trained models like BERT, GPT-3, and T5 can be fine-tuned for specific tasks.
Evaluate the benefits and potential challenges of using transfer learning compared to training models from scratch.

Investigating the Role of Hyperparameter Optimization in Enhancing Model Accuracy
Examine the importance of hyperparameter optimization in AI model building.
Research various hyperparameter tuning techniques such as grid search, random search, and Bayesian optimization.
Assess how different hyperparameters influence the accuracy and performance of language models.

Assessing the Challenges and Solutions in Training Large-Scale AI Models on Limited Resources
Explore the challenges associated with training large-scale AI models, particularly in terms of computational resources and memory constraints. Investigate techniques like model parallelism, gradient checkpointing, and distributed training. Provide case studies or examples of how these techniques have been successfully implemented to overcome resource limitations.

Analyzing the Ethical Implications and Biases in AI Language Models (dig out your work from our AI Guidelines study a few weeks ago)
Research the ethical considerations and potential biases present in AI language models. Study how biases can be introduced during data collection, preprocessing, and model training. Evaluate existing methods for detecting and mitigating biases in AI models. Discuss the broader societal implications of deploying biased AI systems and propose strategies for ensuring fairness and accountability.

Example Research Question Breakdown

1. Exploring the Impact of Different Word Embedding Techniques on Model Performance
Objective: Understand the differences between various word embedding techniques and their influence on AI model performance.
Key Points to Research:
Detailed explanation of Word2Vec, GloVe, FastText, and BERT.
Theoretical background and mathematical formulations of each technique.
Practical applications and specific use cases where each technique excels.
Comparative analysis of model performance using different embeddings on standard NLP tasks such as text classification, sentiment analysis, and named entity recognition.
Expected Outcome: A comprehensive report highlighting the strengths and weaknesses of each embedding technique, supported by experimental results or case studies.
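As one possible starting point for the comparative analysis (not a required implementation), the sketch below loads two pretrained embedding sets through `gensim.downloader` and compares their nearest neighbours for a few probe words; the dataset names used are assumed to be available from the gensim-data repository, and the Google News vectors are a large download.

```python
# Hedged sketch: compare two pretrained embedding sets on a nearest-neighbour probe.
# Assumes the gensim-data models named below can be downloaded in your environment.
import gensim.downloader as api

probe_words = ["computer", "bank", "happy"]

glove = api.load("glove-wiki-gigaword-100")     # GloVe vectors (100-d)
w2v = api.load("word2vec-google-news-300")      # Word2Vec vectors (300-d, ~1.6 GB download)

for word in probe_words:
    print(f"\n{word}")
    print("  GloVe   :", [w for w, _ in glove.most_similar(word, topn=5)])
    print("  Word2Vec:", [w for w, _ in w2v.most_similar(word, topn=5)])
```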

Here is an outline of the workflow to deliver your Assignment:
- how to create word embeddings,
- build an AI language model around them, and
- train the model on a corpus of text in a Google Colab Notebook.
This outline includes a light introduction to the mathematics involved, illustrative examples, and everything needed to make the lab comprehensive and engaging.

Lab: Creating Word Embeddings and Building an AI Language Model in Google Colab

1. Introduction

Objective: Understand the concept of word embeddings,
create them using popular techniques, and
use these embeddings to build and train an AI language model.
Tools:
Google Colab,
Python,
TensorFlow/Keras,
NLTK,
Gensim.


Creating Word Embeddings and Building an AI Language Model in Google Colab

Part 1: Introduction
Welcome to the lab session where we will delve into the world of word embeddings and AI language models.
This lab is designed to give you a hands-on experience in creating word embeddings using popular techniques and utilizing these embeddings to build and train an AI language model.
We will use Google Colab for this lab to leverage its computational resources and ease of use.

Objective

By the end of this lab, you will be able to:
1. Understand the concept and importance of word embeddings.
2. Create word embeddings using Word2Vec.
3. Build a simple AI language model using the generated word embeddings.
4. Train the language model on a corpus of text.
We will be using the standard NLTK Gutenberg Corpus.

5. Evaluate and visualize the word embeddings.

Tools:


To achieve our objectives, we will use the following tools:
1. **Google Colab**: An online platform that provides free access to GPUs and TPUs, making it ideal for running machine learning tasks.
2. **Python**: The primary programming language for this lab.
3. **TensorFlow/Keras**: Popular libraries for building and training machine learning models.
4. **NLTK (Natural Language Toolkit)**: A library for working with human language data (text).
5. **Gensim**:
A library for topic modeling and document similarity analysis, used here for creating word embeddings.

Step-by-Step Guide

1. Setting Up Google Colab

- Open Google Colab by navigating to [colab.research.google.com](https://colab.research.google.com).
- Create a new notebook by clicking on **File > New Notebook**.

2. Installing Necessary Libraries

In your Google Colab notebook, you need to install the required libraries. Run the following commands to install `nltk`, `gensim`, and `tensorflow`:
```python
!pip install nltk gensim tensorflow
```

3. Importing Libraries

Import the libraries that you will be using throughout the lab:
```python
import nltk
import gensim
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from nltk.corpus import gutenberg
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
```

4. Downloading and Preparing the Data

We will use a text corpus from the NLTK library for this lab.
Let's download and prepare the data:
```python
nltk.download('gutenberg')
nltk.download('punkt')
nltk.download('stopwords')

# Load the corpus
corpus = gutenberg.raw('austen-emma.txt')

# Tokenize the text
words = word_tokenize(corpus.lower())

# Remove stopwords and non-alphanumeric tokens
stop_words = set(stopwords.words('english'))
words = [word for word in words if word.isalnum() and word not in stop_words]
```

5. Creating Word Embeddings Using Word2Vec

We will use the Word2Vec model from the Gensim library to create word embeddings:
```python
# Create Word2Vec model
word2vec_model = gensim.models.Word2Vec(sentences=[words], vector_size=100, window=5, min_count=1, workers=4)

# Save the model for later use
word2vec_model.save("word2vec.model")
```
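Before moving on, you can (optionally) sanity-check the embeddings by reloading the saved model and querying a few nearest neighbours; the probe word below is just an illustration and assumes it appears in the corpus vocabulary.

```python
# Optional sanity check: reload the saved model and inspect nearest neighbours.
from gensim.models import Word2Vec

loaded = Word2Vec.load("word2vec.model")
print(loaded.wv.most_similar("emma", topn=5))  # "emma" appears in this corpus
```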
6. Visualizing Word Embeddings

To better understand the word embeddings, we will visualize them using t-SNE:
```python
def plot_embeddings(model):
    labels = []
    tokens = []

    for word in model.wv.index_to_key:
        tokens.append(model.wv[word])
        labels.append(word)

    tsne_model = TSNE(perplexity=40, n_components=2, init='pca', n_iter=2500, random_state=23)
    new_values = tsne_model.fit_transform(tokens)

    x = []
    y = []
    for value in new_values:
        x.append(value[0])
        y.append(value[1])

    plt.figure(figsize=(16, 16))
    for i in range(len(x)):
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i], xy=(x[i], y[i]), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom')
    plt.show()

plot_embeddings(word2vec_model)
```

Fix the error in the above code:

Here are the fixes to the code so it runs properly:
1. Import the necessary modules and download the NLTK data.
2. Train the Word2Vec model on the tokenized words.
3. Convert the list of word vectors to a NumPy array before passing it to t-SNE.
4. Pass the trained model to the `plot_embeddings` function.
Here's the corrected code:
```python
import nltk
import gensim
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from nltk.corpus import gutenberg
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Download necessary NLTK data
nltk.download('gutenberg')
nltk.download('punkt')
nltk.download('stopwords')

# Load the corpus
corpus = gutenberg.raw('austen-emma.txt')

# Tokenize the text
words = word_tokenize(corpus.lower())

# Remove stopwords and non-alphanumeric tokens
stop_words = set(stopwords.words('english'))
words = [word for word in words if word.isalnum() and word not in stop_words]

# Train a Word2Vec model
word2vec_model = gensim.models.Word2Vec(sentences=[words], vector_size=100, window=5, min_count=1, workers=4)

def plot_embeddings(model):
    labels = []
    tokens = []

    for word in model.wv.index_to_key:
        tokens.append(model.wv[word])
        labels.append(word)

    tokens = np.array(tokens)  # Convert to NumPy array
    tsne_model = TSNE(perplexity=40, n_components=2, init='pca', n_iter=2500, random_state=23)
    new_values = tsne_model.fit_transform(tokens)

    x = []
    y = []
    for value in new_values:
        x.append(value[0])
        y.append(value[1])

    plt.figure(figsize=(16, 16))
    for i in range(len(x)):
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i], xy=(x[i], y[i]), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom')
    plt.show()

# Plot the embeddings
plot_embeddings(word2vec_model)
```
This code ensures the Word2Vec model is properly trained and the embeddings are visualized using t-SNE.
Yeah! Congratulations! You have now made an embedding and used a PYTHON plot visualization to see what it looks like!

Here is the completed GCN:

Explanation of Key Concepts

Word Embeddings

Word embeddings are dense vector representations of words in a continuous vector space where words that have similar meanings are positioned close to each other. This representation captures the semantic meaning of words, enabling machine learning models to understand and process text more effectively.
#### **Mathematical Background**
1. **Vectors**: Words are represented as vectors in a high-dimensional space.
2. **Dot Product**: The similarity between two words can be measured using the dot product of their vectors. A higher dot product indicates higher similarity.
3. **Optimization**: Word embeddings are typically learned by optimizing a loss function that aims to position similar words closer together in the vector space.
#### **Word2Vec**
Word2Vec is a popular algorithm for generating word embeddings. It comes in two flavors:
- **Skip-gram**: Given a word, predict the surrounding context words.
- **CBOW (Continuous Bag of Words)**: Given a context, predict the target word.
In this lab, we use the skip-gram model to generate embeddings, as it generally provides better representations for smaller datasets. (Note that Gensim's `Word2Vec` defaults to CBOW; pass `sg=1` when constructing the model if you want skip-gram.)
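To connect the dot-product idea above to the model you just trained, here is a small illustrative sketch (not part of the graded deliverable) that compares a manually computed cosine similarity with Gensim's built-in `similarity` method; it assumes `word2vec_model` from the steps above and that both probe words appear in the corpus vocabulary.

```python
# Illustrative sketch: cosine similarity "by hand" vs. Gensim's built-in method.
# Assumes word2vec_model from the earlier steps; probe words are from the Emma corpus.
import numpy as np

v1 = word2vec_model.wv["emma"]
v2 = word2vec_model.wv["harriet"]

manual = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
builtin = word2vec_model.wv.similarity("emma", "harriet")

print(f"manual cosine similarity : {manual:.4f}")
print(f"gensim similarity        : {builtin:.4f}")
```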
---
This concludes the introduction part of the lab. In the next parts, we will build the AI language model using the generated word embeddings, train it on the prepared corpus, and evaluate its performance.

2. Prerequisites

Basic knowledge of Python programming.
Understanding of basic linear algebra concepts (vectors and matrices).

Introduction to Linear Algebra Concepts: Vectors and Matrices

To effectively work with word embeddings and AI language models, it's essential to have a basic understanding of linear algebra concepts, specifically vectors and matrices. This introductory lesson will cover the fundamentals of these concepts, which are crucial for understanding and manipulating word embeddings.

1. Vectors

A vector is a mathematical object that has both magnitude and direction. In the context of natural language processing (NLP) and machine learning, vectors are used to represent words or other entities in a multi-dimensional space.
Notation: A vector is usually denoted by a bold lowercase letter (e.g., **v**) or with an arrow above the letter (e.g., $\vec{v}$).
Components: A vector is composed of elements called components, which can be real numbers. For example, a 3-dimensional vector can be represented as $\vec{v} = [v_1, v_2, v_3]$.

Example:

Consider a 2-dimensional vector representing a word:
$\vec{w} = [2.5, 3.0]$
This vector has two components: 2.5 and 3.0.
Magnitude: The magnitude (or length) of a vector is a measure of its size and is calculated using the Euclidean norm. For a vector $\vec{v} = [v_1, v_2, \ldots, v_n]$, the magnitude is given by:
$$\|\vec{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$

Example:

For $\vec{w} = [2.5, 3.0]$:
$$\|\vec{w}\| = \sqrt{2.5^2 + 3.0^2} = \sqrt{6.25 + 9} = \sqrt{15.25} \approx 3.91$$
Dot Product: The dot product of two vectors $\vec{a}$ and $\vec{b}$ is a scalar value that measures their similarity. It is calculated as:
$$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$$

Example:

For $\vec{a} = [1, 2]$ and $\vec{b} = [3, 4]$:
$$\vec{a} \cdot \vec{b} = 1 \cdot 3 + 2 \cdot 4 = 3 + 8 = 11$$
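The worked examples above can be reproduced in a few lines of NumPy; this small sketch is only for checking your own calculations.

```python
# Reproduce the vector examples above with NumPy.
import numpy as np

w = np.array([2.5, 3.0])
print(np.linalg.norm(w))   # magnitude: ~3.91

a = np.array([1, 2])
b = np.array([3, 4])
print(np.dot(a, b))        # dot product: 11
```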

2. Matrices

A matrix is a rectangular array of numbers arranged in rows and columns. Matrices are used extensively in machine learning for various operations, including transformations and representing datasets.
Notation: A matrix is usually denoted by a bold uppercase letter (e.g., A).
Elements: The elements of a matrix are denoted by $a_{ij}$, where $i$ is the row index and $j$ is the column index.

Example:

Consider a 2x3 matrix:
$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$
Matrix Addition: Matrices of the same dimensions can be added element-wise. For matrices $\mathbf{A}$ and $\mathbf{B}$,
$$\mathbf{C} = \mathbf{A} + \mathbf{B} \implies c_{ij} = a_{ij} + b_{ij}$$

Example:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 7 & 8 & 9 \\ 10 & 11 & 12 \end{bmatrix}$$
$$\mathbf{C} = \mathbf{A} + \mathbf{B} = \begin{bmatrix} 1+7 & 2+8 & 3+9 \\ 4+10 & 5+11 & 6+12 \end{bmatrix} = \begin{bmatrix} 8 & 10 & 12 \\ 14 & 16 & 18 \end{bmatrix}$$
Matrix Multiplication: Matrix multiplication involves the dot product of rows and columns. For matrices $\mathbf{A}$ (of dimensions $m \times n$) and $\mathbf{B}$ (of dimensions $n \times p$),
$$\mathbf{C} = \mathbf{A}\mathbf{B} \implies c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$

Example:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$
$$\mathbf{C} = \mathbf{A}\mathbf{B} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$
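As with the vector examples, you can verify these matrix results in NumPy; this is just a sanity-check sketch.

```python
# Verify the matrix addition and multiplication examples above.
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[7, 8, 9], [10, 11, 12]])
print(A + B)       # element-wise addition

A2 = np.array([[1, 2], [3, 4]])
B2 = np.array([[5, 6], [7, 8]])
print(A2 @ B2)     # matrix multiplication: [[19, 22], [43, 50]]
```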

3. Application in Word Embeddings

Word Embeddings: Word embeddings represent words as vectors in a high-dimensional space. These vectors are trained to capture semantic relationships between words.
Embedding Matrix: In neural networks, an embedding layer is a matrix where each row corresponds to a word vector. This matrix is learned during the training process.

Example:

If we have a vocabulary of 10,000 words and we want to represent each word with a 100-dimensional vector, our embedding matrix $\mathbf{E}$ would be of size $10{,}000 \times 100$.
Dot Product for Similarity: The dot product of two word vectors indicates their similarity. Words with similar meanings will have vectors with a higher dot product.
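To make the embedding-matrix idea concrete, here is a toy sketch (random numbers only, not a trained model) showing how a row lookup and a dot product work together; the vocabulary entries are hypothetical.

```python
# Toy embedding matrix: 10,000 words x 100 dimensions, filled with random values.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(10_000, 100))

word_to_id = {"king": 10, "queen": 11}   # hypothetical vocabulary entries
v_king = E[word_to_id["king"]]           # embedding lookup = selecting a row
v_queen = E[word_to_id["queen"]]

similarity = np.dot(v_king, v_queen)     # dot product as a similarity score
print(similarity)
```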
By understanding these basic concepts of vectors and matrices, you will be better equipped to grasp how word embeddings work and how they are used in AI language models. This foundational knowledge will also help you understand the various mathematical operations involved in training and evaluating these models.

Familiarity with neural networks and machine learning concepts.

3. Overview of Word Embeddings

Definition: Word embeddings are dense vector representations of words that capture semantic meaning.
Mathematical Background:
Vectors: Represent words as vectors in a high-dimensional space.
Dot Product: Measures similarity between word vectors.
Loss Functions: Used to optimize the embeddings.

4. Setting Up the Environment

Google Colab: Introduction and setup.
Installing Necessary Libraries:
```python
!pip install nltk gensim tensorflow
```

5. Data Preparation

Loading the Corpus:
Example: Using NLTK to load a text corpus.
```python
import nltk
nltk.download('gutenberg')
from nltk.corpus import gutenberg
corpus = gutenberg.raw('austen-emma.txt')
```

Text Preprocessing:
Tokenization, lowercasing, removing stop words, etc.
```python
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
nltk.download('punkt')
nltk.download('stopwords')

stop_words = set(stopwords.words('english'))
words = word_tokenize(corpus.lower())
words = [word for word in words if word.isalnum() and word not in stop_words]
```

6. Creating Word Embeddings

Using Word2Vec:
Introduction to Word2Vec and its skip-gram and CBOW models.
Training Word2Vec model using Gensim.
```python
from gensim.models import Word2Vec

word2vec_model = Word2Vec(sentences=[words], vector_size=100, window=5, min_count=1, workers=4)
word2vec_model.save("word2vec.model")
```

7. Visualizing Word Embeddings

Using t-SNE for Visualization:

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import numpy as np

def plot_embeddings(model):
    labels = []
    tokens = []

    for word in model.wv.index_to_key:
        tokens.append(model.wv[word])
        labels.append(word)

    tokens = np.array(tokens)  # convert to a NumPy array before t-SNE (see the fix above)
    tsne_model = TSNE(perplexity=40, n_components=2, init='pca', n_iter=2500, random_state=23)
    new_values = tsne_model.fit_transform(tokens)

    x = []
    y = []
    for value in new_values:
        x.append(value[0])
        y.append(value[1])

    plt.figure(figsize=(16, 16))
    for i in range(len(x)):
        plt.scatter(x[i], y[i])
        plt.annotate(labels[i],
                     xy=(x[i], y[i]),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.show()

plot_embeddings(word2vec_model)
```

8. Building the AI Language Model

Introduction to Sequential Models:
Explanation of RNNs, LSTMs, and their importance in language modeling.
Building the Model using TensorFlow/Keras:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(input_dim=len(word2vec_model.wv), output_dim=100, weights=[word2vec_model.wv.vectors], trainable=False))
model.add(LSTM(128, return_sequences=True))
model.add(LSTM(128))
model.add(Dense(len(word2vec_model.wv), activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
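After compiling, it can be helpful (optional) to print the architecture and confirm that the embedding layer picked up the Word2Vec vocabulary size and vector dimensions:

```python
# Optional: inspect layer output shapes and parameter counts.
model.summary()
```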

9. Training the Language Model

Preparing Data for Training:
Creating input-output pairs from the preprocessed text.
```python
def create_sequences(token_list, step=1):
    sequences = []
    next_words = []
    for i in range(0, len(token_list) - step, step):
        sequences.append(token_list[i: i + step])
        next_words.append(token_list[i + step])
    return sequences, next_words

sequences, next_words = create_sequences(words, step=5)
```

Training the Model:
```python
model.fit(sequences, next_words, epochs=20, batch_size=128)
```


The error you're encountering is due to a mismatch in the shapes of the target (`next_words`) and the output of the model.
This is because `next_words` is a single integer representing the index of the next word, while the model's output is a one-hot encoded vector of probabilities for each word in the vocabulary.
To fix this, you need to:
1. One-hot encode the `next_words`.
2. Ensure the input sequences are padded to the same length.
3. Map the word tokens to their integer indices in the Word2Vec vocabulary, since the `Embedding` layer expects integer inputs.
A minimal sketch of one way to apply these fixes is shown below; it assumes the `words`, `word2vec_model`, and `model` objects from the earlier steps, and you should adapt it to your own pipeline:
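```python
# Hedged sketch: one way to prepare the training data so model.fit() runs.
# Assumes `words`, `word2vec_model`, and `model` from the earlier steps exist.
import numpy as np
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.sequence import pad_sequences

# 3. Map tokens to their integer indices in the Word2Vec vocabulary.
word_index = word2vec_model.wv.key_to_index
indexed = [word_index[w] for w in words if w in word_index]

# Build (input sequence, next word) pairs, as in create_sequences above.
step = 5
X, y = [], []
for i in range(0, len(indexed) - step, step):
    X.append(indexed[i: i + step])
    y.append(indexed[i + step])

# 2. Pad the input sequences to the same length.
X = pad_sequences(X, maxlen=step)

# 1. One-hot encode the targets to match the softmax output layer.
y = to_categorical(y, num_classes=len(word_index))

model.fit(X, y, epochs=20, batch_size=128)
```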