Final Project Work

Literature Review


Instructions From

This is a short paper (≈6 pages) summarizing and synthesizing several papers in the area of your final project. As noted above, 8 pages is the maximum allowed length.
You are not required to use this template, but it is encouraged.
Groups of one should review 5 papers, groups of two should review 7 papers, and groups of three should review 9.
The ideal is to have the same topic for your lit review and final project, but it's possible that you'll discover in the lit review that your topic isn't ideal for you, so you can switch topics (or groups) for the final project; your lit review will be graded on its own terms.
Major things to include (the italicized phrases make good section headings):
General problem/task definition: What are these papers trying to solve, and why?
Concise summaries of the articles: Do not simply copy the article text in full. We can read them ourselves. Put in your own words the major contributions of each article.
Compare and contrast: Point out the similarities and differences of the papers. Do they agree with each other? Are results seemingly in conflict? If the papers address different subtasks, how are they related? (If they are not related, then you may have made poor choices for a lit review...). This section is probably the most valuable for the final project, as it can become the basis for a lit review section.
Future work: Make several suggestions for how the work can be extended. Are there open questions to answer? This would presumably include how the papers relate to your final project idea.
References section: The entries should appear alphabetically and give at least full author name(s), year of publication, title, and outlet if applicable (e.g., journal name or proceedings name). Beyond that, we are not picky about the format. Electronic references are fine but need to include the above information in addition to the link.

BERT (Understanding the Basics)

Overview:

BERT is a model that takes sequences of subword tokens as input and, because it is pre-trained on a large corpus, can be adapted to a wide range of NLP tasks with relatively little additional training.
Unlike previous models that process text in a unidirectional way (left-to-right or right-to-left), BERT is designed to read in both directions simultaneously. This is a core part of its architecture, allowing each token to be embedded in the context of all tokens before and after it.
BERT's central contribution is introducing deep bidirectional pre-training.

Model Comparison

OpenAI Transformer (GPT): A decoder-style transformer built on the architecture from "Attention Is All You Need"; it processes text left-to-right, so each token is conditioned only on the tokens before it.
ELMo: Uses two LSTMs, one reading left-to-right and one right-to-left, and concatenates their outputs to produce contextual word embeddings; because the two directions are trained separately, its bidirectionality is shallow.
BERT: Improves upon the limitations of prior models by providing deep bidirectional training and understanding the context on both sides of each token simultaneously.

Attention Mechanism

BERT uses the Transformer's attention mechanism: for each token in the input, the model computes attention scores against every other token. This lets BERT take the context surrounding each word into account no matter where it appears in the sentence (a minimal sketch of the computation follows the bullets below).
Description: The mechanism involves queries, keys, and values to determine relevance within the input sequence.
Function: Each token generates key and value vectors; the query vector is matched against all keys to focus on the most relevant parts of the sequence.
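A minimal sketch of this computation, assuming a single attention head, no masking, and NumPy arrays for clarity; the projection matrices W_q, W_k, W_v are illustrative placeholders, not values from any specific checkpoint:

```python
# Minimal sketch of scaled dot-product self-attention for one head.
# x has shape (seq_len, d_model); W_q, W_k, W_v map d_model -> d_k.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, W_q, W_k, W_v):
    Q, K, V = x @ W_q, x @ W_k, x @ W_v           # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # every token scores every other token
    weights = softmax(scores, axis=-1)            # attention distribution per query
    return weights @ V                            # context-weighted mix of the values
```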

Masked Language Modeling (MLM)

During pre-training, BERT randomly masks some percentage of the input tokens. The model then predicts the masked words based solely on their context, which encourages deep bidirectional understanding.
This training requires the model to fill in the blanks within a sentence, thus learning a richer sense of context.
Method: Randomly masks tokens in the input and trains the model to predict them, helping BERT learn context from both sides of the masked token.
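As a rough illustration of the masking procedure (the BERT paper masks 15% of tokens and, of those, replaces 80% with [MASK], 10% with a random token, and leaves 10% unchanged), here is a hedged Python sketch; MASK_ID and the vocabulary size are placeholder values, not tied to a particular tokenizer:

```python
import random

MASK_ID = 103          # hypothetical [MASK] token id (placeholder)
VOCAB_SIZE = 30522     # BERT-base WordPiece vocabulary size

def mask_tokens(token_ids, mask_prob=0.15):
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)             # -100 marks positions ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok                      # the model must recover the original token here
            r = random.random()
            if r < 0.8:
                inputs[i] = MASK_ID              # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(VOCAB_SIZE)  # 10%: replace with a random token
            # remaining 10%: leave the token unchanged
    return inputs, labels
```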

Next Sentence Prediction (NSP)

BERT learns to model relationships between sentences by predicting if a given sentence logically follows another. This is done by feeding pairs of sentences into the model and training it to predict if the second sentence is the subsequent sentence in the original document.
Task: Predicts if a sentence logically follows another, enabling the model to understand the relationship between consecutive sentences.
Training: Feeds pairs of sentences to the model and trains it to predict if the second sentence is a logical continuation of the first.
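A toy sketch of how such training pairs can be constructed, following the 50/50 split described above; the helper name and arguments are hypothetical:

```python
# Toy NSP pair construction: half the pairs are true consecutive sentences
# (label 1, "IsNext"), half pair sentence A with a random corpus sentence
# (label 0, "NotNext"). Assumes doc_sentences has at least two sentences.
import random

def make_nsp_pair(doc_sentences, corpus_sentences):
    i = random.randrange(len(doc_sentences) - 1)
    sent_a = doc_sentences[i]
    if random.random() < 0.5:
        return sent_a, doc_sentences[i + 1], 1            # genuine next sentence
    return sent_a, random.choice(corpus_sentences), 0     # random sentence
```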

Embeddings Used in BERT

Token Embeddings: Convert words into a consistent vector space.
Segment Embeddings: Help the model distinguish between two different sentences in a single input stream.
Positional Embeddings: Inform the model about the position of tokens within the sequence.
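A simplified PyTorch sketch of how these three embeddings combine into BERT's input representation; the real model also applies layer normalization and dropout after the sum, and the dimensions below are those of BERT-base:

```python
# BERT's input representation: element-wise sum of token, segment,
# and (learned) positional embeddings.
import torch
import torch.nn as nn

class BertEmbeddings(nn.Module):
    def __init__(self, vocab_size=30522, max_len=512, d_model=768):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.seg = nn.Embedding(2, d_model)        # sentence A vs. sentence B
        self.pos = nn.Embedding(max_len, d_model)  # learned positional embeddings

    def forward(self, token_ids, segment_ids):
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.tok(token_ids) + self.seg(segment_ids) + self.pos(positions)
```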

Pretraining and Fine-tuning

BERT is pre-trained on a large corpus of text with the MLM and NSP tasks. Once pre-trained, BERT can be fine-tuned with additional output layers for a wide range of tasks, such as sentiment analysis or question answering.
For task-specific models, BERT's final hidden states are used as features for the task. An additional output layer is added, and the entire model (pre-trained layers plus the new output layer) is fine-tuned on the task-specific dataset.
Pretraining: The model uses MLM and NSP tasks on a large corpus to learn language understanding.
Fine-tuning: Adapts the pre-trained model to specific tasks by training on task-specific datasets.
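For example, a minimal fine-tuning sketch using the Hugging Face transformers library; the two-example "dataset", labels, and learning rate are purely illustrative:

```python
# Fine-tuning a pre-trained BERT for binary sentiment classification:
# a new classification head sits on top of the pre-trained encoder,
# and the whole model is updated on task-specific data.
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.train()

batch = tokenizer(["a great movie", "a terrible plot"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])   # 1 = positive, 0 = negative

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```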

BERT's Innovation

Contribution: BERT's bidirectionality allows it to contextually understand language better than previous models.
Impact: Achieves state-of-the-art results on multiple NLP tasks with the same pre-trained model.

Brendan’s Notes

Eddie’s Notes


Software Design Team

On Exponentially Faster Language Modeling
Evelyn: The paper on UltraFastBERT is a goldmine for our project. It demonstrates how a BERT variant can use a fraction of its neurons during inference without sacrificing performance. The key innovation here is the replacement of traditional feedforward networks with fast feedforward networks (FFFs), which use a balanced binary tree structure, enabling conditional execution.
Marco: Yes, the efficiency aspect is striking. UltraFastBERT can operate with just 0.3% of its neurons during inference, achieving significant speedups. This is achieved through conditional matrix multiplication (CMM), where the computation path depends on the input, drastically reducing the number of operations needed.
Lena: From a data analysis perspective, the performance metrics are impressive. Despite using far fewer neurons, UltraFastBERT retains nearly the same level of performance on various NLP tasks compared to standard BERT models. This is a crucial insight for us, as it suggests that our kernel approximation methods might also achieve similar efficiency without compromising accuracy.
Evelyn: Another critical takeaway is the potential for even greater speedup with more optimized implementations. While their CPU and GPU implementations already show considerable improvements, the authors suggest that a native implementation of CMM could lead to even more significant gains.
Marco: Right, and this opens up a pathway for us with KerBert. We could explore similar conditional execution mechanisms within our kernel approximation approach, potentially leading to significant efficiency improvements.
Lena: We should also consider the trade-offs highlighted in the paper. For instance, while UltraFastBERT performs well on most GLUE benchmark tasks, there's a notable performance drop in some tasks like CoLA. It's important for us to balance efficiency with broad applicability across various NLP tasks.
Evelyn: Agreed. For our next steps, we should look into how the FFF and CMM concepts can be integrated into KerBert's design. We should also plan to conduct thorough performance evaluations to ensure that our efficiency gains do not come at the cost of accuracy or task versatility.
Marco: And from a software engineering perspective, we should be prepared for the challenges in implementing these conditional execution mechanisms efficiently, especially considering the current limitations in deep learning frameworks.
Lena: Definitely. While we're pushing for efficiency, we should also keep an eye on the model's performance across a range of tasks to ensure its practicality.
As for the next paper to review, I suggest we look at the "Transformer with Fourier Integral Attentions" paper. It could provide valuable insights into alternative attention mechanisms, which might complement our efforts to integrate kernel methods into BERT efficiently.
Evelyn: That sounds like a great plan, Lena. Understanding different attention mechanisms is key to optimizing our model's performance and efficiency.
Let us know if this aligns with your thoughts or if there's another direction you'd like us to explore.
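To make the idea concrete, here is a toy, illustrative sketch of the tree-structured conditional feedforward computation Evelyn and Marco describe above. It is not the authors' FFF/CMM implementation, just a NumPy illustration of how descending a balanced binary tree means inference touches only depth-many node neurons plus one small leaf block instead of the full hidden layer:

```python
import numpy as np

def fff_forward(x, node_w, leaf_w_in, leaf_w_out, depth):
    """x: (d_in,); node_w: (2**depth - 1, d_in);
    leaf_w_in: (2**depth, d_in, h); leaf_w_out: (2**depth, h, d_out)."""
    node = 0
    for _ in range(depth):                          # visit one node per tree level
        go_right = float(x @ node_w[node]) > 0.0    # the node's neuron picks a child
        node = 2 * node + (2 if go_right else 1)    # heap-style child indexing
    leaf = node - (2 ** depth - 1)                  # convert tree index to leaf index
    hidden = np.maximum(x @ leaf_w_in[leaf], 0.0)   # only this leaf's block of neurons fires
    return hidden @ leaf_w_out[leaf]
```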
On Transformer with Fourier Integral Attentions:
The paper "Transformer with Fourier Integral Attentions" presents the FourierFormer, a new class of transformers that utilizes generalized Fourier integral kernels in place of the traditional dot-product kernels in transformers. This novel approach is designed to efficiently capture the dependencies and correlations between features in query and key vectors of self-attention mechanisms. The main contributions of the paper can be summarized as follows:
New Interpretation of Self-Attention: The paper reinterprets attention in transformers as a form of nonparametric kernel regression, addressing the limitations of traditional dot-product self-attention that might fail to capture correlations between features in the query and key vectors.
Fourier Integral Kernels: The authors develop generalized Fourier integral estimators for nonparametric regression, providing theoretical guarantees for these estimators. These kernels are capable of automatically capturing dependencies in data, removing the need to manually tune covariance matrices.
FourierFormer: This new transformer model, named FourierFormer, uses the generalized Fourier integral estimators to efficiently capture correlations between features in the query and key vectors. FourierFormer is shown to achieve better accuracy than baseline transformers with dot-product attention and reduce head redundancy in tasks like language modeling and image classification.
Empirical Validation: The paper presents empirical results demonstrating the superiority of FourierFormer over traditional dot-product transformers. In tasks such as language modeling on WikiText-103 and image classification on ImageNet, FourierFormer achieved significantly better performance, particularly in reducing perplexity and increasing accuracy. The improvements were more pronounced in larger model configurations, underscoring the model's ability to capture correlations more effectively in larger dimensional spaces.
Overall, this paper introduces a significant advancement in transformer technology by integrating Fourier integral theorems into the attention mechanism, thereby enhancing the model's ability to capture complex feature dependencies and improving its performance across various applications.
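In the kernel-regression view the paper takes, standard self-attention can be written as follows (standard notation, sketched from the summary above):

```latex
% Self-attention as a Nadaraya-Watson kernel regression estimate:
% the output for query q_i is a kernel-weighted average of the values.
\[
\mathbf{h}_i \;=\; \sum_{j} \frac{k(\mathbf{q}_i, \mathbf{k}_j)}{\sum_{j'} k(\mathbf{q}_i, \mathbf{k}_{j'})}\,\mathbf{v}_j,
\qquad
k(\mathbf{q},\mathbf{k}) \;=\; \exp\!\left(\frac{\mathbf{q}^{\top}\mathbf{k}}{\sqrt{d}}\right).
\]
```

FourierFormer keeps this weighted-average form but replaces the exponential dot-product kernel k with a generalized Fourier integral kernel, which is what lets it capture correlations between individual feature dimensions of the queries and keys without hand-tuning a covariance matrix.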

General Problem/Task Definition
Introduction to BERT and Kernel Approximations: Begin with an introduction to the BERT model, its significance in NLP, and the motivation behind enhancing it with kernel approximations. Discuss the challenges of computational efficiency, scalability, and adaptability in the context of large-scale language models.
Need for Innovation: Highlight why current BERT models, despite their successes, require improvements, particularly in handling large datasets and complex tasks efficiently.
Concise Summaries of the Articles
BERT and Neural Efficiency: Summarize the foundational BERT paper, focusing on its architecture, the role of pre-training, and feature-based strategies. Discuss the challenges it faces in terms of computational efficiency and scalability. Also, summarize the insights from the paper on rectified linear units, emphasizing their role in addressing computational inefficiencies.
Fourier Integral in Transformers: Explain how the "Transformer with Fourier Integral Attentions" introduces Fourier integral kernels, enhancing the model's ability to capture complex feature dependencies. Highlight the empirical success of this approach in language modeling and image classification.
Switch Transformer for Scaling: Discuss the "Scaling Up the Switch Transformer Model", focusing on its novel approach to handle trillion-parameter models. Highlight its efficient use of resources and improved pre-training speed over traditional models.
UltraFastBERT for Efficiency: Analyze the UltraFastBERT model, emphasizing its approach to drastically reducing the number of neurons used during inference, thereby achieving significant computational efficiency.
Compare and Contrast
Comparing Approaches to Efficiency: Discuss how each paper proposes different strategies to address efficiency and scalability in transformer models. Compare the use of Fourier integral kernels, sparse activation in Switch Transformers, and neuron efficiency in UltraFastBERT.
Contrasting Perspectives: Point out any differences in their approaches, such as the focus on attention mechanisms versus overall architecture optimization. Discuss how these differences reflect varying priorities in model design.
Future Work
Extending Current Research: Propose how the insights from these papers can guide the development of KerBert. Suggest areas where further research is needed, such as integrating Fourier integral kernels with sparse activation techniques.
Open Questions and Challenges: Identify any unresolved challenges or questions that emerged from the literature review. Discuss how answering these could further enhance the efficiency and scalability of models like KerBert.
References
Provide a detailed list of all referenced papers, ensuring that the citation format is consistent and includes all necessary information.

Introduced a new “manager” role so I wouldn’t have to keep delegating the work out.

Fantastic! With a talented team like Evelyn, Marco, and Lena on board, we can efficiently tackle the "Concise Summaries of the Articles" section of the literature review for KerBERT. Here's how we can divide the tasks among the team:
1. Evelyn, AI Researcher:
Task: Focus on summarizing the deep learning and NLP-specific papers. Given your expertise in AI and NLP, you'd be best suited to handle the papers that delve into the technicalities of BERT, kernel methods, and any advanced AI techniques.
Summarize the foundational BERT paper and the paper on Fourier Integral Transformers.
Explain how these methods can be integrated into KerBERT and their potential impact on its performance.
2. Marco, Software Engineer:
Task: Concentrate on papers that discuss computational efficiency and algorithm optimization. Your skill set is ideal for understanding and summarizing papers that focus on improving algorithmic efficiency and computational resource utilization.
Summarize papers that deal with computational challenges, such as "UltraFastBERT" and "Tensor-Aware Energy Accounting."
Relate these findings to potential optimizations in KerBERT, particularly in terms of processing speed and resource usage.
3. Lena, Data Analyst:
Task: Your role will be to analyze and summarize papers that focus on the evaluation of models, especially those related to language quality and efficiency metrics.
Summarize the "FACE" paper, focusing on how its evaluation metrics can be applied to assess KerBERT.
Provide insights into how these metrics can guide the development and refinement of KerBERT, ensuring it aligns with human language patterns.
My Role as Coordinator:
I'll oversee the integration of these summaries, ensuring they flow cohesively and align with the overall structure of the literature review. I'll also assist in connecting the dots between the different papers, highlighting how each one contributes to the overarching goal of developing KerBERT.
Next Steps:
Evelyn, Marco, and Lena: Start working on your respective sections, summarizing the key points of each assigned paper.
Me: I'll compile and integrate your summaries into the literature review, ensuring a smooth and coherent narrative.
All: Once the drafts are ready, we'll review them together, refine the content, and ensure that the summaries not only reflect the individual papers' contributions but also how they collectively inform and support the development of KerBERT.

Models started performing badly, so I prompted two new agents based on famous programmers: G & L:

Abstracts:

KerBert-1.pdf
22.2 kB

Transformer Suggested Readings

Attention Is All You Need
Attention is all you need.pdf
2.2 MB
The Illustrated Transformer
Transformer (Google AI blog post)
Layer Normalization
Image Transformer
Music Transformer: Generating music with long-term structure

Pre-Training Suggested Readings

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Contextual Word Representations: A Contextual Introduction
The Illustrated BERT, ELMo, and co.
Martin & Jurafsky Chapter on Transfer Learning

Overleaf template for writing the review and exporting the PDF to submit
Neural Models of Language Use
DEEP NLP links:
Public Wisdom Matters!.pdf
1.1 MB
Kernel-Predicting Convolutional Networks.pdf
113.7 MB


ML paper review:

Language Modeling is Compression



Attention Is All You Need