Difficulty A problems

Difficulty: A

Each entry below lists the subcategory (where given), the difficulty, and the problem number, followed by the problem statement and any noted existing or in-progress work. Entries are grouped by category.

Toy Language Models
1. Understanding neurons (A, 1.6): Hunt through Neuroscope for the toy models and look for interesting neurons to focus on. (A starter sketch for pulling a neuron's activations follows this list.)
2. Understanding neurons (A, 1.7): Can you find any polysemantic neurons in Neuroscope? Explore this.
3. (A, 1.23): Choose your own adventure: take a bunch of text with interesting patterns and run the models over it. Look for tokens they do really well on and try to reverse engineer what's going on!
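
For problems like 1.6 and 1.7, a quick sanity check on a Neuroscope finding is to pull the same neuron's activations yourself. Below is a minimal sketch, assuming TransformerLens is installed and that "gelu-1l" is one of the toy models mirrored in Neuroscope; the layer, neuron index, and text are placeholders to swap for whatever you found.

```python
# Minimal sketch (not Neuroscope's own code): load a toy model and inspect one
# neuron's activations on a piece of text. "gelu-1l", layer 0, neuron 123, and the
# text are placeholder choices.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gelu-1l")
text = "The quick brown fox jumps over the lazy dog"
layer, neuron = 0, 123  # swap in the neuron you found in Neuroscope

tokens = model.to_tokens(text)
_, cache = model.run_with_cache(tokens)
acts = cache["post", layer][0, :, neuron]  # post-activation MLP values, one per position
for tok, act in zip(model.to_str_tokens(text), acts.tolist()):
    print(f"{tok!r:>15} {act:.3f}")
```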

Circuits In The Wild
4. Circuits in natural language (A, 2.13): Choose your own adventure! Try finding behaviours of your own related to natural language circuits.
5. Circuits in code models (A, 2.17): Choose your own adventure! Look for interesting patterns in how the model behaves on code and try to reverse engineer something. Algorithmic-flavored tasks should be easiest.
6. Extensions to IOI paper (A, 2.18): Understand IOI in the Stanford Mistral models. Does the same circuit arise? (You should be able to near-exactly copy Redwood's code for this.) (A starter sketch for measuring the IOI logit difference follows this list.)
7. Extensions to IOI paper (A, 2.19): Do earlier heads in the circuit (duplicate token, induction, S-inhibition) have backup-style behaviour? If we ablate them, how much does this damage performance? Will other things compensate?
8. Extensions to IOI paper (A, 2.21): Can we reverse engineer how duplicate token heads work in depth? In particular, how does the QK circuit know to look for copies of the current token without activating on non-duplicates, given that the current token is always a copy of itself?
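
For problem 2.18 (and 5.35), the core measurement is the IOI logit difference between the indirect object and the subject. Here is a minimal sketch, assuming the Stanford CRFM checkpoints are available in TransformerLens under names like "stanford-gpt2-small-a"; the prompt and names are a single toy example rather than the full IOI dataset.

```python
# Minimal sketch of the IOI logit-difference measurement (not Redwood's code).
# Assumes "stanford-gpt2-small-a" is a valid TransformerLens model name; the prompt
# and names are placeholder choices.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("stanford-gpt2-small-a")
prompt = "When John and Mary went to the store, John gave a drink to"
logits = model(prompt)[0, -1]

io = model.to_single_token(" Mary")   # indirect object (the correct completion)
s = model.to_single_token(" John")    # subject (the incorrect completion)
print(f"IO - S logit difference: {(logits[io] - logits[s]).item():.3f}")
```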

Interpreting Algorithmic Problems
9. Beginner problems (A, 3.1): Sorting fixed-length lists. (Format: START 4 6 2 9 MID 2 4 6 9.) (A starter sketch for generating this data and a small model config follows this list.)
10. Beginner problems (A, 3.2): Sorting variable-length lists. (What's the sorting algorithm? What's the longest list you can get it to sort? How does length affect accuracy?)
11. Beginner problems (A, 3.3): Interpret a 2L MLP (one hidden layer) trained to do modular addition. (Analogous to Neel's grokking work.)
12. Beginner problems (A, 3.4): Interpret a 1L MLP trained to do modular subtraction. (Analogous to Neel's grokking work.)
13. Beginner problems (A, 3.5): Taking the minimum or maximum of two ints.
14. Beginner problems (A, 3.6): Permuting lists.
15. Beginner problems (A, 3.7): Calculating sequences with a Fibonacci-style recurrence (predicting the next element from the previous two).
16. Questions about language models (A, 3.21): Train a 1L attention-only transformer with rotary positional embeddings to predict the previous token and reverse engineer how it does this. Existing/ongoing work: 5/7/23, Eric (repo: https://github.com/DKdekes/rotary-interp).
17. Extending Othello-GPT (A, 3.30): Try one of Neel's concrete Othello-GPT projects.
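
For the beginner algorithmic problems (e.g. 3.1), the first step is generating tokenised data and a small model to train on it. Below is a minimal sketch under assumed conventions: an integer vocabulary with START/MID appended as special tokens, placeholder model sizes, and the training loop omitted.

```python
# Minimal sketch for problem 3.1: generate "START a b c d MID sorted(a b c d)"
# sequences plus a small transformer to train on them. Vocabulary layout, model
# sizes, and list length are placeholder choices; no training loop is included.
import torch
from transformer_lens import HookedTransformer, HookedTransformerConfig

LIST_LEN, N_DIGITS = 4, 10
START, MID = N_DIGITS, N_DIGITS + 1  # special tokens appended after the digits

def make_batch(batch_size: int) -> torch.Tensor:
    xs = torch.randint(0, N_DIGITS, (batch_size, LIST_LEN))
    ys, _ = torch.sort(xs, dim=-1)
    start = torch.full((batch_size, 1), START)
    mid = torch.full((batch_size, 1), MID)
    return torch.cat([start, xs, mid, ys], dim=-1)  # e.g. START 4 6 2 9 MID 2 4 6 9

cfg = HookedTransformerConfig(
    n_layers=1, n_heads=4, d_model=128, d_head=32, d_mlp=512,
    d_vocab=N_DIGITS + 2, n_ctx=2 * LIST_LEN + 2, act_fn="relu",
)
model = HookedTransformer(cfg)
print(make_batch(2))
```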

Exploring Polysemanticity and Superposition
18. Confusions to study in Toy Models of Superposition (A, 4.1): Does dropout create a privileged basis? Put dropout on the hidden layer of the ReLU output model and study how this changes the results. Existing/ongoing work: Post; 14 April 2023, Kunvar (firstuserhere). (A minimal sketch of the ReLU output model with hidden-layer dropout follows this list.)
19. Confusions to study in Toy Models of Superposition (A, 4.5): Explore neuron superposition by training their absolute value model on functions of multiple variables. Make inputs binary (0/1) and look at the AND and OR of element pairs.
20. Confusions to study in Toy Models of Superposition (A, 4.7): Adapt their ReLU output model to have a different range of feature values, and see how this affects things. Make the features 1 (i.e., two possible values).
21. Confusions to study in Toy Models of Superposition (A, 4.10): What happens if you replace ReLUs with GELUs in the toy models? Existing/ongoing work: 1 May 2023, Kunvar (firstuserhere).
22. Studying bottleneck superposition in real language models (A, 4.25): Can you find any examples of the geometric superposition configurations in the residual stream of a language model?
23. Comparing SoLU/GELU (A, 4.37): How do the TransformerLens SoLU and GELU models compare in Neuroscope under the SoLU polysemanticity metric? (What fraction of neurons seem monosemantic?)
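
For problem 4.1, the relevant model is the ReLU output model from Toy Models of Superposition. Below is a minimal sketch of that model with dropout added on the hidden layer; the sizes, sparsity, and dropout probability are illustrative choices rather than the paper's exact settings, and feature importance weighting is omitted.

```python
# Minimal sketch of the Toy Models of Superposition "ReLU output" model with
# dropout on the hidden layer (problem 4.1). Sizes, sparsity, and dropout rate are
# illustrative, not the paper's exact settings.
import torch
import torch.nn as nn

class ReluOutputModel(nn.Module):
    def __init__(self, n_features: int = 20, d_hidden: int = 5, dropout_p: float = 0.0):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_features, d_hidden) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))
        self.dropout = nn.Dropout(dropout_p)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [batch, n_features]
        h = self.dropout(x @ self.W)          # bottleneck hidden layer, with dropout
        return torch.relu(h @ self.W.T + self.b)

def sparse_batch(batch: int, n_features: int, p_active: float = 0.05) -> torch.Tensor:
    mask = (torch.rand(batch, n_features) < p_active).float()
    return mask * torch.rand(batch, n_features)

model = ReluOutputModel(dropout_p=0.1)
x = sparse_batch(256, 20)
print(((model(x) - x) ** 2).mean().item())   # reconstruction loss for one batch
```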

Analysing Training Dynamics
24. Understanding fine-tuning (A, 5.16): How does model performance change on the original training distribution when fine-tuning?
25. Understanding training dynamics in language models (A, 5.25): Look at attention heads on various texts and see if any have recognisable attention patterns, then analyse them over training.
26. Finding phase transitions (A, 5.26): Look for phase transitions in the Indirect Object Identification task. (Note: this might not have a phase change.) (A starter sketch for tracking the IOI logit difference across training checkpoints follows this list.)
27. Studying path dependence (A, 5.33): How similar are the outputs of the Stanford CRFM models on a given text?
28. Studying path dependence (A, 5.35): Look for Indirect Object Identification capability in other models of approximately the same size.
29. Studying path dependence (A, 5.38): Can you find some problem where you understand the circuits and Git Re-Basin does work?
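
For problem 5.26 (and the checkpoint-loading part of 5.25), the basic loop is to load successive training checkpoints and track a metric over them. A minimal sketch, assuming the Stanford CRFM models expose training checkpoints via TransformerLens's checkpoint_index argument; the checkpoint indices and prompt are placeholders.

```python
# Minimal sketch for tracking the IOI logit difference over training (problem 5.26).
# Assumes "stanford-gpt2-small-a" has checkpoints accessible via checkpoint_index;
# the indices and prompt are placeholder choices.
from transformer_lens import HookedTransformer

prompt = "When John and Mary went to the store, John gave a drink to"
for ckpt in [0, 50, 100, 200]:  # placeholder checkpoint indices
    model = HookedTransformer.from_pretrained("stanford-gpt2-small-a", checkpoint_index=ckpt)
    logits = model(prompt)[0, -1]
    diff = logits[model.to_single_token(" Mary")] - logits[model.to_single_token(" John")]
    print(f"checkpoint {ckpt}: IO - S logit diff = {diff.item():.3f}")
```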

Techniques, Tooling, and Automation
30. Breaking current techniques (A, 6.1): Try to find concrete edge cases where a technique breaks: start with a misleading example in a real model, or train a toy model with one.
31. Breaking current techniques (A, 6.7): Find edge cases where ablations break. (Start with backup name mover heads in the IOI circuit, where we know zero ablation breaks down.)
32. ROME activation patching (A, 6.15): In the ROME paper, they do activation patching by patching over the outputs of 10 adjacent MLP or attention layers (look at the logit difference after patching). How do results change when you patch single layers?
33. ROME activation patching (A, 6.16): In the ROME paper, they do activation patching by patching over the outputs of 10 adjacent MLP or attention layers (look at the logit difference after patching). Can you get anywhere when patching specific neurons?
34. Automatically find circuits (A, 6.18): Automate ways to find previous token heads. (Bonus: add to TransformerLens!) (A starter sketch for scoring previous-token heads follows this list.)
35. Automatically find circuits (A, 6.19): Automate ways to find duplicate token heads. (Bonus: add to TransformerLens!)
36. Automatically find circuits (A, 6.20): Automate ways to find induction heads. (Bonus: add to TransformerLens!)
37. Automatically find circuits (A, 6.21): Automate ways to find translation heads. (Bonus: add to TransformerLens!)
38. Refine max activating dataset examples (A, 6.36): Using 6.28: find the minimal example that activates a neuron by truncating the text. How often does this work?
39. Refine max activating dataset examples (A, 6.37): Using 6.28: can you replicate the results of the interpretability illusion for Neel's toy models by finding neurons that seem monosemantic on Python code or C4 (web text) individually, but are polysemantic when the distributions are combined?
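
For problem 6.18, a simple automated criterion is to score each head by how much attention it pays to the immediately preceding position. Below is a minimal sketch using GPT-2 small via TransformerLens as a convenient test model; the 0.5 threshold and the random-token input are arbitrary choices.

```python
# Minimal sketch for problem 6.18: score every head by its average attention to
# the immediately preceding position. Threshold and input are placeholder choices.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = torch.randint(1000, 20000, (1, 64))  # random tokens, so content gives no cues
_, cache = model.run_with_cache(tokens)

for layer in range(model.cfg.n_layers):
    pattern = cache["pattern", layer][0]      # [head, dest_pos, src_pos]
    # attention weight from each position to the one directly before it
    prev_attn = pattern.diagonal(offset=-1, dim1=-2, dim2=-1).mean(-1)
    for head, score in enumerate(prev_attn.tolist()):
        if score > 0.5:
            print(f"L{layer}H{head}: previous-token score {score:.2f}")
```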

Studying Learned Features in Language Models
40. Exploring Neuroscope (A, 9.1): Explore random neurons! Use the interactive Neuroscope to test and verify your understanding.
41. Exploring Neuroscope (A, 9.2): Look for interesting conceptual neurons in the middle layers of larger models, like the "numbers that refer to groups of people" neuron.
42. Exploring Neuroscope (A, 9.3): Look for examples of detokenisation neurons.
43. Exploring Neuroscope (A, 9.4): Look for examples of trigram neurons (neurons that consistently activate on a pair of tokens and boost the logit of plausible next tokens).
44. Exploring Neuroscope (A, 9.5): Look for examples of retokenisation neurons.
45. Exploring Neuroscope (A, 9.6): Look for examples of context neurons (e.g. base64).
46. Exploring Neuroscope (A, 9.7): Look for neurons that align with any of the feature ideas in 9.13-9.21.
47. Exploring Neuroscope (A, 9.10): How much does the logit attribution of a neuron align with the patterns in its dataset examples? Is it related?
48. Seeking out specific features (A, 9.13): Basic syntax. (Lots of ideas in the post.)
49. Seeking out specific features (A, 9.14): Linguistic features. (Try using spaCy to automate this; lots of ideas in the post.)
50. Seeking out specific features (A, 9.15): Proper nouns. (Lots of ideas in the post.)
51. Seeking out specific features (A, 9.16): Python code features. (Lots of ideas in the post.)
52. Seeking out specific features (A, 9.20): LaTeX features. Try common commands (\left, \right) and section titles (\abstract, \introduction, etc.).
53. Seeking out specific features (A, 9.23): Disambiguation neurons: foreign language disambiguation (e.g. "die" in Dutch vs. German vs. Afrikaans).
54. Seeking out specific features (A, 9.24): Disambiguation neurons: words with multiple meanings (e.g. "bat" as animal or sports equipment).
55. Seeking out specific features (A, 9.25): Search for memory management neurons (high negative cosine similarity between w_in and w_out). What do their dataset examples look like? Is there a pattern? (A starter sketch for this cosine-similarity scan follows this list.)
56. Seeking out specific features (A, 9.26): Search for signal boosting neurons (high positive cosine similarity between w_in and w_out). What do their dataset examples look like? Is there a pattern?
57. Seeking out specific features (A, 9.28): Can you find split-token neurons? (I.e. " Claire" vs. "Cl" + "aire": the model should learn to identify the split-token case.)
58. Seeking out specific features (A, 9.32): Neurons which link to attention heads: duplicate token heads.
59. Curiosities about neurons (A, 9.40): When you look at the max activating dataset examples for a specific neuron, is that neuron the most activated neuron on that text? What does this look like in general?
60. Curiosities about neurons (A, 9.41): Look at the distributions of neuron activations (pre- and post-activation for GELU; pre, mid, and post for SoLU). What do these look like? How heavy-tailed are they? How well can they be modelled as a normal distribution?
61. Curiosities about neurons (A, 9.43): How similar are the distributions between SoLU and GELU?
62. Curiosities about neurons (A, 9.44): What does the distribution of the LayerNorm scale and softmax denominator in SoLU look like? Is it bimodal (indicating monosemantic features) or fairly smooth and unimodal?
63. Curiosities about neurons (A, 9.52): Try comparing how monosemantic the neurons in a GELU vs. SoLU model are. Can you replicate the result that SoLU does better? What are the rates for each model?
64. Miscellaneous (A, 9.59): Can you replicate the results of the interpretability illusion on SoLU models, which were trained on a mix of web text and Python code? (Find neurons that seem monosemantic on one distribution or the other, but with importantly different patterns.)
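
For problems 9.25 and 9.26, the search reduces to a cosine-similarity scan between each neuron's input and output directions. A minimal sketch, assuming the TransformerLens SoLU models (e.g. "solu-4l") as the target; the +/-0.6 cutoffs are arbitrary placeholders.

```python
# Minimal sketch for problems 9.25/9.26: scan for neurons whose input and output
# directions point in opposite (memory management) or the same (signal boosting)
# direction. "solu-4l" and the +/-0.6 cutoffs are placeholder choices.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("solu-4l")
for layer in range(model.cfg.n_layers):
    w_in = model.W_in[layer]    # [d_model, d_mlp]
    w_out = model.W_out[layer]  # [d_mlp, d_model]
    cos = torch.nn.functional.cosine_similarity(w_in.T, w_out, dim=-1)  # one value per neuron
    memory_mgmt = (cos < -0.6).sum().item()
    signal_boost = (cos > 0.6).sum().item()
    print(f"layer {layer}: {memory_mgmt} memory-management candidates, "
          f"{signal_boost} signal-boosting candidates")
```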