Difficulty C problems

Difficulty: C (91 problems)

Each entry gives the row number, the problem number in brackets, the category and subcategory (where there is one), and the problem statement, followed by a note on anyone currently working on it.
1. [1.5] Toy Language Models / Understanding neurons: How far can you get deeply reverse engineering a neuron in a 2+ layer model?
2. [2.6] Circuits In The Wild / Circuits in natural language: A harder version of 2.5 is constructing an email from a snippet, like "Name: Jess Smith, Email: last name dot first name k @ gmail".
3. [2.7] Circuits In The Wild / Circuits in natural language: Interpret factual recall. Start with ROME's causal tracing work, but how much more specific can you get? Heads? Neurons?
4. [2.10] Circuits In The Wild / Circuits in natural language: Interpreting memorisation. Sometimes GPT knows phone numbers. How?
5. [2.16] Circuits In The Wild / Circuits in code models: Method names depend on object type (e.g., x.append for a list, x.update for a dictionary).
6. [2.23] Circuits In The Wild / Extensions to IOI paper: What is the role of Negative/Backup/regular Name Mover heads outside IOI? Are there examples where Negative Name Movers contribute positively?
7. [2.24] Circuits In The Wild / Extensions to IOI paper: Under what conditions do the compensation mechanisms occur, where ablating a Name Mover barely reduces performance? Are they due to dropout?
8. [2.27] Circuits In The Wild / Extensions to IOI paper: MLP layers (beyond the first) seem to matter somewhat for the IOI task. What's up with this?
9. [2.28] Circuits In The Wild / Extensions to IOI paper: Understand what's happening in the adversarial examples, most notably the S-Inhibition Head attention pattern (hard).
10. [2.34] Circuits In The Wild / Studying larger models: GPT-J contains translation heads. Can you interpret how they work and what they do?
11. [2.35] Circuits In The Wild / Studying larger models: Try to find and reverse engineer fancier induction heads, like pattern-matching heads - try GPT-J or GPT-NeoX.
12. [2.36] Circuits In The Wild / Studying larger models: What's up with few-shot learning? How does it work?
13. [2.37] Circuits In The Wild / Studying larger models: How does addition work? (Focus on 2-digit.)
14. [2.38] Circuits In The Wild / Studying larger models: What's up with Tim Dettmers' emergent features in the residual stream? Do they map to anything interpretable? What if we do max activating dataset examples?
15. [3.14] Interpreting Algorithmic Problems / Harder problems: Problems in In-Context Linear Regression that are in-context learned. See 3.13.
16. [3.15] Interpreting Algorithmic Problems / Harder problems: 5-digit (or binary) multiplication.
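A minimal sketch of how the binary variant could be set up, assuming a character-level vocabulary and fixed-width operands (both my choices), to generate training data for a small transformer:

```python
import random

BITS = 5  # operand width; products then fit in 2 * BITS bits

def make_example(rng: random.Random) -> str:
    # Fixed-width binary keeps every sequence the same length, so the
    # model can rely on positional structure.
    a = rng.randrange(2**BITS)
    b = rng.randrange(2**BITS)
    return f"{a:0{BITS}b}*{b:0{BITS}b}={a * b:0{2 * BITS}b}"

VOCAB = {ch: i for i, ch in enumerate("01*=")}

def tokenise(s: str) -> list[int]:
    return [VOCAB[ch] for ch in s]

rng = random.Random(0)
data = [tokenise(make_example(rng)) for _ in range(10_000)]
print(make_example(rng))  # e.g. "00110*00010=0000001100" (6 * 2 = 12)
```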
17. [3.17] Interpreting Algorithmic Problems / Harder problems: Choose your own adventure! Find your own algorithmic problem. Leetcode easy is probably a good source.
18. [3.19] Interpreting Algorithmic Problems: Is 3.18 consistent across random seeds, or can other algorithms be learned? Can a 2L model learn this? What happens if you add more MLPs or more layers?
19. [3.20] Interpreting Algorithmic Problems: Reverse engineer Othello-GPT. Can you reverse engineer the algorithms it learns, or the features the probes find?
20. [3.24] Interpreting Algorithmic Problems / Questions about language models: How does memorisation work? Try training a one-hidden-layer MLP to memorise random data, or training a transformer on a fixed set of random strings of tokens.
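A minimal sketch of the MLP half of this, with arbitrary sizes of my choosing: random inputs with random labels, so the only way to fit the data is memorisation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, D, H = 512, 32, 1024  # examples, input dim, hidden width (all arbitrary)

# Random inputs with random binary labels: nothing to generalise from,
# so any fit is pure memorisation.
X = torch.randn(N, D)
y = torch.randint(0, 2, (N,)).float()

model = nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()
    if step % 500 == 0:
        acc = ((model(X).squeeze(-1) > 0) == y.bool()).float().mean()
        print(f"step {step}: loss {loss.item():.4f}, train acc {acc:.3f}")
```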
21. [3.25] Interpreting Algorithmic Problems / Questions about language models: Compare different dimensionality reduction techniques on modular addition, or on a problem you feel you understand.
22. [3.27] Interpreting Algorithmic Problems / Questions about language models: Is direct logit attribution always useful? Can you find examples where it's highly misleading?
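For context on the technique being stress-tested, here is a minimal sketch of direct logit attribution with TransformerLens, attributing one next-token logit to each component's output. The model, prompt, and the choice to fold in the final LayerNorm via apply_ln are all mine:

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
prompt = "The Eiffel Tower is in the city of"
tokens = model.to_tokens(prompt)
logits, cache = model.run_with_cache(tokens)

answer_token = model.to_single_token(" Paris")
# Direction in the residual stream that this logit reads off.
logit_dir = model.W_U[:, answer_token]

# Per-component contributions to the final residual stream, with the final
# LayerNorm applied so contributions sum (approximately) to the logit.
resid_stack, labels = cache.decompose_resid(layer=-1, apply_ln=True, return_labels=True)
attribution = resid_stack[:, 0, -1, :] @ logit_dir  # final token position

for label, value in zip(labels, attribution):
    print(f"{label:>12}: {value.item():+.2f}")
```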
23. [3.31] Interpreting Algorithmic Problems / Extending Othello-GPT: Look for modular circuits - try to find the circuits that compute the world model and the circuits that use the world model to compute the next move. Try to understand each in isolation, then use this to understand how they fit together. See what you can learn about finding modular circuits in general.
24. [3.33] Interpreting Algorithmic Problems / Extending Othello-GPT: Transformer Circuits Laboratory - explore and test other conjectures about transformer circuits, e.g., can we figure out how the model manages memory in the residual stream?
25. [4.11] Exploring Polysemanticity and Superposition / Confusions to study in Toy Models of Superposition: Can you find a toy model where GELU acts significantly differently from ReLU? Currently working: Kunvar (firstuserhere), since May 1, 2023.
26. [4.12] Exploring Polysemanticity and Superposition / Building toy models of superposition: Build a toy model of a classification problem with cross-entropy loss. Currently working: Lucas Hayne, since November 10, 2023.
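A minimal sketch of one possible setup for 26, adapting the Toy Models of Superposition bottleneck to classification; all sizes, the one-hot data distribution, and the weight-tying are my choices:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features, d_hidden = 20, 5  # more classes than hidden dims forces superposition

W = nn.Parameter(torch.randn(n_features, d_hidden) * 0.1)
b = nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(5000):
    # Each example has exactly one active feature with random magnitude;
    # the label is which feature is active.
    labels = torch.randint(0, n_features, (256,))
    x = torch.zeros(256, n_features)
    x[torch.arange(256), labels] = torch.rand(256)
    logits = (x @ W) @ W.T + b  # project down to d_hidden, then back up to logits
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Off-diagonal structure in W W^T shows which classes share hidden directions.
print((W @ W.T).detach().round(decimals=2))
```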
27. [4.13] Exploring Polysemanticity and Superposition / Building toy models of superposition: Build a toy model of neuron superposition that has many more hidden features than output features.
28. [4.14] Exploring Polysemanticity and Superposition / Building toy models of superposition: Build a toy model that needs multiple hidden layers of ReLUs. Can computation in superposition happen across several layers? E.g., max(|x|, |y|).
29. [4.15] Exploring Polysemanticity and Superposition / Building toy models of superposition: Build a toy model of attention head superposition/polysemanticity. Can you find a task where the model wants to do different things with an attention head on different inputs? How does it represent things internally and deal with interference?
30. [4.17] Exploring Polysemanticity and Superposition / Making toy model counterexamples: Make toy models that are counterexamples in MI: a learned example of a network with a non-linear representation.
31. [4.18] Exploring Polysemanticity and Superposition / Making toy model counterexamples: Make toy models that are counterexamples in MI: a network without a discrete number of features.
32. [4.19] Exploring Polysemanticity and Superposition / Making toy model counterexamples: Make toy models that are counterexamples in MI: a non-decomposable neural network.
33. [4.20] Exploring Polysemanticity and Superposition / Making toy model counterexamples: Make toy models that are counterexamples in MI: a task where networks can learn multiple different sets of features.
34. [4.26] Exploring Polysemanticity and Superposition / Studying bottleneck superposition in real language models: Can you find any examples of locally almost-orthogonal bases?
35. [4.27] Exploring Polysemanticity and Superposition / Studying bottleneck superposition in real language models: Do language models have "genre" directions that detect the type of text, and then represent features specific to each genre in the same subspace?
36. [4.30] Exploring Polysemanticity and Superposition / Studying neuron superposition in real models: Look at a polysemantic neuron in a 2L language model. Can you figure out how the model disambiguates which feature it is representing?
37. [4.32] Exploring Polysemanticity and Superposition / Studying neuron superposition in real models: Try to fully reverse engineer a feature discovered in 4.31.
38. [4.33] Exploring Polysemanticity and Superposition / Studying neuron superposition in real models: Can you use superposition to create an adversarial example for a neuron?
39. [4.34] Exploring Polysemanticity and Superposition / Studying neuron superposition in real models: Can you find any examples of the asymmetric superposition motif in the MLP of a 1-2 layer language model?
40. [4.35] Exploring Polysemanticity and Superposition: Pick a simple feature of language (e.g., is-number, is-base64) and train a linear probe to detect it in the MLP activations of a 1L language model.
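A minimal sketch of the probe pipeline for 40, using the is-number feature; the model name, the tiny stand-in text, and scikit-learn logistic regression are all my choices (a real run would gather activations over thousands of tokens):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gelu-1l")  # a 1L model from TransformerLens

text = "In 1987 there were 23 cats and 7 dogs living on 42nd Street."
tokens = model.to_tokens(text)
_, cache = model.run_with_cache(tokens)
acts = cache["post", 0][0]  # [seq, d_mlp] MLP activations of layer 0

# Label each token by whether it is a number (a crude proxy for the feature).
token_strs = model.to_str_tokens(text)
labels = np.array([tok.strip().isdigit() for tok in token_strs])

probe = LogisticRegression(max_iter=1000)
probe.fit(acts.detach().cpu().numpy(), labels)
print("train accuracy:", probe.score(acts.detach().cpu().numpy(), labels))
```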
41. [4.41] Exploring Polysemanticity and Superposition / Comparing SoLU/GELU: How do GELU and ReLU compare with respect to polysemanticity? Replicate the SoLU analysis.
42. [4.42] Exploring Polysemanticity and Superposition / Getting rid of superposition: If you train a 1L/2L language model with d_mlp = 100 * d_model, does superposition go away?
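A sketch of the model definition for 42 with TransformerLens; every hyperparameter other than the d_mlp ratio is an arbitrary choice of mine, and actually training it needs a tokenised dataset and a standard training loop:

```python
from transformer_lens import HookedTransformer, HookedTransformerConfig

d_model = 128
cfg = HookedTransformerConfig(
    n_layers=1,
    d_model=d_model,
    d_mlp=100 * d_model,  # the unusually wide MLP this problem asks about
    n_heads=4,
    d_head=32,
    n_ctx=256,
    d_vocab=50257,        # GPT-2 tokenizer vocab size
    act_fn="gelu",
)
model = HookedTransformer(cfg)
print(sum(p.numel() for p in model.parameters()), "parameters")
```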
43. [4.43] Exploring Polysemanticity and Superposition / Getting rid of superposition: Study T5 XXL. It's 11B parameters and not supported by TransformerLens; expect major infrastructure pain.
44. [4.45] Exploring Polysemanticity and Superposition / Getting rid of superposition: Pick an open problem from the end of Toy Models of Superposition.
45. [5.2] Analysing Training Dynamics / Algorithmic tasks - understanding grokking: Why do the 5-digit addition phase changes happen in that order?
46. [5.4] Analysing Training Dynamics / Algorithmic tasks - understanding grokking: Can we predict when grokking will happen? Bonus: without using any future information?
47. [5.5] Analysing Training Dynamics / Algorithmic tasks - understanding grokking: Understand why the model chooses specific frequencies (and why it sometimes switches frequencies mid-training!).
48. [5.10] Analysing Training Dynamics / Algorithmic tasks - lottery tickets: All of Neel's toy models (attn-only, gelu, solu) were trained with the same data shuffle and weight initialisation. Many induction heads aren't shared across them, but L2H3 in the 3L models and L1H6 in the 2L models always are. What's up with that?
49. [5.15] Analysing Training Dynamics / Understanding fine-tuning: Build a toy model of fine-tuning (train on task 1, fine-tune on task 2). What is going on internally? Any interesting motifs?
50. [5.21] Analysing Training Dynamics / Understanding fine-tuning: Can you find any phase transitions in the fine-tuning checkpoints?
51. [5.24] Analysing Training Dynamics / Understanding training dynamics in language models: Use the per-token loss analysis technique from the induction heads paper to look for more phase changes.
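A minimal sketch of the per-token loss idea for 51, comparing losses between two checkpoints of a Pythia model on one fixed text; the model, checkpoint steps, and text are arbitrary choices, and the induction heads paper goes further by running PCA over these loss vectors across many checkpoints:

```python
import torch
from transformer_lens import HookedTransformer

text = "The quick brown fox jumps over the lazy dog. The quick brown fox"
STEPS = [512, 10000]  # arbitrary early/late training steps

losses = {}
for step in STEPS:
    # Pythia training checkpoints are exposed via checkpoint_value (the step).
    model = HookedTransformer.from_pretrained("pythia-160m", checkpoint_value=step)
    tokens = model.to_tokens(text)
    losses[step] = model(tokens, return_type="loss", loss_per_token=True)[0]

# Tokens whose loss drops sharply between checkpoints mark candidate phase
# changes; on repeated text the drop is often induction heads forming.
delta = losses[STEPS[0]] - losses[STEPS[1]]
for tok, d in zip(model.to_str_tokens(text)[1:], delta):
    print(f"{tok!r}: {d.item():+.2f}")
```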
52. [5.31] Analysing Training Dynamics / Finding phase transitions: Look for phase transitions in benchmark performance, or on specific questions from a benchmark.
53. [6.3] Techniques, Tooling, and Automation / Breaking current techniques: Can you fix direct logit attribution in GPT-Neo small, e.g., by finding a linear approximation to the final layer by taking gradients? (Eleuther's tuned lens work in #interp-across-depth would be a good place to start.)
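A minimal sketch of the tuned-lens-style idea behind 53: fit an affine map from a mid-stream residual to the final logits by gradient descent, minimising KL to the model's own output. The layer, toy corpus, and training details are mine, not Eleuther's implementation:

```python
import torch
import torch.nn as nn
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt-neo-125M")
LAYER = 6  # read the residual stream after this block (arbitrary choice)

# One learned affine map standing in for "the rest of the network".
translator = nn.Linear(model.cfg.d_model, model.cfg.d_model)
opt = torch.optim.Adam(translator.parameters(), lr=1e-3)

texts = ["The cat sat on the mat.", "Paris is the capital of France."]  # toy corpus
for epoch in range(100):
    for text in texts:
        tokens = model.to_tokens(text)
        with torch.no_grad():
            logits, cache = model.run_with_cache(tokens)
        resid = cache["resid_post", LAYER]
        # Reuse the model's own final LayerNorm and unembedding.
        approx = model.unembed(model.ln_final(translator(resid)))
        loss = nn.functional.kl_div(
            approx.log_softmax(-1), logits.log_softmax(-1),
            log_target=True, reduction="batchmean",
        )
        opt.zero_grad(); loss.backward(); opt.step()
print("final KL:", loss.item())
```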
54. [6.6] Techniques, Tooling, and Automation / Breaking current techniques: Find edge cases where causal scrubbing breaks.
55. [6.11] Techniques, Tooling, and Automation / Breaking current techniques: Automate ways to identify heads that compose. Start with the IOI circuit and the composition scores in A Mathematical Framework.
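A minimal sketch of the composition score for 55, here for Q-composition between two heads in GPT-2 (the head and model choices are arbitrary, and this is one standard form of the score; K- and V-composition swap in the corresponding matrices):

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

def q_composition(l1: int, h1: int, l2: int, h2: int) -> float:
    # Output-value circuit of the earlier head, [d_model, d_model].
    w_ov = model.W_V[l1, h1] @ model.W_O[l1, h1]
    # Query-key circuit of the later head, [d_model, d_model].
    w_qk = model.W_Q[l2, h2] @ model.W_K[l2, h2].T
    # Composition score: how aligned the later head's query input is
    # with the earlier head's output, relative to generic matrices.
    return (torch.norm(w_ov @ w_qk) / (torch.norm(w_ov) * torch.norm(w_qk))).item()

# e.g. score every layer-0 head against head 5.5:
for h in range(model.cfg.n_heads):
    print(f"0.{h} -> 5.5: {q_composition(0, h, 5, 5):.3f}")
```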
56. [6.13] Techniques, Tooling, and Automation: Can you automate direct path patching as used in the IOI paper?
57. [6.27] Techniques, Tooling, and Automation / Automatically find circuits: Can you automate the detection of something in neuron interpretability, e.g., trigram neurons?
58. [6.28] Techniques, Tooling, and Automation / Automatically find circuits: Find good ways to find the equivalent of max activating dataset examples for attention heads. Validate on induction circuits, then IOI. See the post for ideas.
59. [6.29] Techniques, Tooling, and Automation / Refine max activating dataset examples: Refine the max activating dataset examples technique for neuron interpretability to find minimal or diverse examples.
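A minimal sketch of the baseline technique being refined in 59: scan a dataset for the top-activating examples of one neuron. The model, neuron index, and three-snippet "dataset" are arbitrary stand-ins:

```python
import heapq
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gelu-1l")
LAYER, NEURON = 0, 1337  # arbitrary neuron to inspect

texts = [
    "The quick brown fox jumps over the lazy dog.",
    "import numpy as np",
    "On 4 July 1776 the declaration was signed.",
]  # in practice: tens of thousands of dataset snippets

top = []  # min-heap of (activation, text, token)
for text in texts:
    tokens = model.to_tokens(text)
    _, cache = model.run_with_cache(tokens)
    acts = cache["post", LAYER][0, :, NEURON]  # [seq]
    pos = acts.argmax().item()
    heapq.heappush(top, (acts[pos].item(), text, model.to_str_tokens(text)[pos]))
    if len(top) > 10:
        heapq.heappop(top)  # keep only the 10 strongest examples

for act, text, tok in sorted(top, reverse=True):
    print(f"{act:.2f} on {tok!r} in: {text}")
```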
60. [6.35] Techniques, Tooling, and Automation / Refine max activating dataset examples: Using 6.28: (Infrastructure) Add any of 6.29-6.34 to Neuroscope. Email Neel (neelnanda27@gmail.com) for codebase access.
61. [6.43] Techniques, Tooling, and Automation / Apply techniques from non-mechanistic interpretability: Can you use probing to get evidence for or against predictions in Toy Models of Superposition?
62. [6.44] Techniques, Tooling, and Automation / Apply techniques from non-mechanistic interpretability: Pick anything interesting from Räuker et al. and try to apply the techniques to circuits we understand.
63. [6.46] Techniques, Tooling, and Automation: Take existing circuits and explore quantitative ways to verify that each is a true circuit (or to disprove it!). Try causal scrubbing to start.
64. [6.47] Techniques, Tooling, and Automation: Build on Arthur Conmy's work to automatically find circuits via recursive path patching.
65. [6.49] Techniques, Tooling, and Automation / Taking the "diff" of two models: Build tooling to take the "diff" of two models, treating them as black boxes mapping inputs to outputs, so that it works for models with different internal structure.
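A minimal sketch of the black-box end of 65: compare two models' per-position predictions on shared prompts and surface where they disagree most. The model pair (which share a tokenizer) and the prompts are arbitrary choices:

```python
import torch
from transformer_lens import HookedTransformer

model_a = HookedTransformer.from_pretrained("gpt2")
model_b = HookedTransformer.from_pretrained("distilgpt2")

prompts = ["The capital of France is", "def fibonacci(n):"]
for prompt in prompts:
    tokens = model_a.to_tokens(prompt)  # both models use the GPT-2 tokenizer
    with torch.no_grad():
        logp_a = model_a(tokens).log_softmax(-1)
        logp_b = model_b(tokens).log_softmax(-1)
    # Per-position KL divergence: where do the two models disagree most?
    kl = (logp_a.exp() * (logp_a - logp_b)).sum(-1)[0]
    pos = kl.argmax().item()
    print(f"{prompt!r}: max KL {kl[pos]:.2f} after {model_a.to_str_tokens(prompt)[:pos + 1]}")
```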
66. [6.59] Techniques, Tooling, and Automation: We understand how attention is calculated for a head using the QK matrix, but this doesn't work for rotary attention. Can you find a principled alternative?
67. [7.1] Image Model Interpretability / Reverse engineering image models: Using Circuits techniques, how well can we reverse engineer ResNet?
68. [7.2] Image Model Interpretability / Reverse engineering image models: Vision Transformers - can you smush together transformer circuits and image circuits techniques? Which ones transfer?
69. [7.3] Image Model Interpretability / Reverse engineering image models: Using Circuits techniques, how well can we reverse engineer ConvNeXt, a modern image model architecture merging ResNet and vision transformer ideas?
70. [7.4] Image Model Interpretability / Building on Circuits thread: How well can you hand-code curve detectors? Can you include color? How much performance can you recover?
71. [7.5] Image Model Interpretability / Building on Circuits thread: Can you hand-code any other circuits? Start with other early vision neurons.
72. [7.8] Image Model Interpretability / Building on Circuits thread: Dig into examples of polysemantic neurons and try to better understand what's going on there.
73. [7.11] Image Model Interpretability / Multimodal models (CLIP interpretability): Can you rigorously reverse engineer any circuits, as in the Curve Circuits paper?
74. [7.12] Image Model Interpretability / Multimodal models (CLIP interpretability): Can you apply transformer circuits techniques to understand the attention heads in the image part?
75. [7.14] Image Model Interpretability: Train a checkpointed run of Inception. Do curve detectors form as a phase change?
76. [8.1] Interpreting Reinforcement Learning / AlphaZero: Replicate some of Tom McGrath's AlphaZero work with LeelaChessZero. Use NMF on the activations and try to interpret some factors. See visualisations here.
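A minimal sketch of the NMF step in 76 on a generic stack of activations; the activation source is left abstract (LeelaChessZero inference needs separate tooling), and the random stand-in data is mine:

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for real data: a [n_positions, n_channels] matrix of activations
# collected from one layer across many board positions. Post-ReLU activations
# are already non-negative, which NMF requires; otherwise clip or shift first.
acts = np.abs(np.random.randn(5000, 256)).astype(np.float32)

nmf = NMF(n_components=16, init="nndsvd", max_iter=500)
factors = nmf.fit_transform(acts)   # [n_positions, 16] factor strength per position
channels = nmf.components_          # [16, n_channels] channel weights per factor

# Interpret a factor by inspecting the board positions where it fires hardest.
top_positions = factors[:, 0].argsort()[::-1][:20]
print(top_positions)
```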
77. [8.5] Interpreting Reinforcement Learning / Goal misgeneralisation: Interpret one of the examples in the goal misgeneralisation papers (Langosco et al. and Shah et al.). Can you concretely figure out what's going on?
78. [8.7] Interpreting Reinforcement Learning / Goal misgeneralisation: Using 8.5: a possible starting point is CoinRun. Understanding RL Vision made significant progress on it, and Langosco et al. found it was an example of goal misgeneralisation - can you build on these to predict the misgeneralisation?
79. [8.10] Interpreting Reinforcement Learning: Train and interpret a model from the In-Context Reinforcement Learning with Algorithm Distillation paper. They trained small transformers that take a sequence of moves on a "novel" RL task as input and output sensible answers for that task. Currently working: Victor Levoso and others (since April 10, 2023), reimplementing AD to try this; there is a channel for it on this Discord: https://discord.gg/cMr5YqbU4y
80. [8.12] Interpreting Reinforcement Learning / Interpreting RLHF Transformers: Can you find any circuits in CarperAI's RLHF model corresponding to longer-term planning?
81. [8.13] Interpreting Reinforcement Learning / Interpreting RLHF Transformers: Can you get any traction on interpreting the reward model of CarperAI's RLHF model?
82. [8.15] Interpreting Reinforcement Learning: Try training and interpreting a small model from Guez et al. They trained model-free RL agents and showed evidence that they spontaneously learned planning. Can you find evidence for or against this?
83. [8.19] Interpreting Reinforcement Learning: Can you interpret a model trained with Q-learning on a task from 8.16-8.18?
84. [8.20] Interpreting Reinforcement Learning: Take an agent trained with RL and train another network to copy the output logits of that agent. Try to reverse engineer the clone. Can you find the resulting circuits in the original?
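A minimal sketch of the cloning step in 84: distil a frozen policy into a fresh network by matching logits with a KL loss. Both networks, the state distribution, and all sizes are placeholder choices of mine:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
OBS_DIM, N_ACTIONS = 16, 4

# Stand-in for the trained RL agent: any frozen policy network works.
teacher = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
for p in teacher.parameters():
    p.requires_grad_(False)

student = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(3000):
    # Ideally these observations come from the agent's own rollouts,
    # so the clone matches it on-distribution.
    obs = torch.randn(256, OBS_DIM)
    loss = nn.functional.kl_div(
        student(obs).log_softmax(-1),
        teacher(obs).log_softmax(-1),
        log_target=True, reduction="batchmean",
    )
    opt.zero_grad(); loss.backward(); opt.step()

print("final KL:", loss.item())
```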
85. [8.21] Interpreting Reinforcement Learning: Once you've got traction understanding a fully trained agent on a task elsewhere in this category, try to extend this understanding to the agent during training. Can you get any insight into what's actually going on?
86. [9.27] Studying Learned Features in Language Models / Seeking out specific features: Search for neurons that clean up superposition interference.
87. [9.36] Studying Learned Features in Language Models / Seeking out specific features: Try training linear probes for the features in 9.13-9.35.
88. [9.37] Studying Learned Features in Language Models / Seeking out specific features: Using 9.36: how does your ability to recover features from the residual stream compare to MLP layer outputs and attention layer outputs? Can you find features that can only be recovered from some of these?
89. [9.38] Studying Learned Features in Language Models / Seeking out specific features: Using 9.36: are there features that can only be recovered from certain MLP layers?
90. [9.39] Studying Learned Features in Language Models / Seeking out specific features: Using 9.36: are there features that are significantly easier to recover from early-layer residual streams than from later layers?
91. [9.58] Studying Learned Features in Language Models / Miscellaneous: Replicate Knowledge Neurons in Pretrained Transformers on a generative model. How consistent are these results with what Neuroscope shows?