Targeted at people who specifically want to get employed or funded after MATS, Refine, ARENA, REMIX, MLAB, WMLB, or PIBBSS (i.e., it doesn't include random Discords or Slacks, or programs their graduates have outgrown);
Curated and up-to-date.
Ideally, this would happen by the first week of Feb (to capture MATS 3.0 alumni).
Some approaches to solving alignment go through teaching ML systems about alignment and getting research assistance from them. Training ML systems requires data, but we might not have enough alignment research to sufficiently fine-tune our models, and we might miss out on many concepts which have not been written up. Furthermore, training on final outputs (AF posts, papers, etc.) may fail to capture the thought processes that go into hashing out an idea or poking holes in proposals, which would be the most useful skills for a research assistant to have.
It might be significantly beneficial to capture many of the conversations between researchers and use them to expand the dataset of alignment content to train models on. Additionally, some researchers may be fine with making some of their conversations available to the public, in case people want to do a deep dive into their models and research approaches.
The two parts of the system which I'm currently imagining addressing this are:
An email address to which audio files can be sent and automatically run through transcription;
Clear instructions for setting up a tool which captures audio from calls automatically (either a general tool or platform-specific advice), and makes it as easy as possible to send the right calls to the dataset platform.
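A minimal sketch of the first part of this system: pulling audio attachments out of incoming emails before handing them to a transcription step. This assumes audio arrives as standard MIME attachments; the actual inbox polling, transcription service, and dataset platform are not yet chosen, so those steps are left out.

```python
# Extract audio attachments from a raw email, as the first step of the
# email -> transcription -> dataset pipeline sketched above.
import email
from email.message import EmailMessage

# Common audio MIME types; the real list would depend on what senders use.
AUDIO_TYPES = {"audio/mpeg", "audio/wav", "audio/x-wav", "audio/mp4", "audio/ogg"}

def extract_audio_attachments(raw_bytes: bytes) -> list[tuple[str, bytes]]:
    """Return (filename, payload) pairs for audio attachments in a raw email."""
    msg = email.message_from_bytes(raw_bytes)
    found = []
    for part in msg.walk():
        if part.get_content_type() in AUDIO_TYPES and part.get_filename():
            found.append((part.get_filename(), part.get_payload(decode=True)))
    return found

# Example: build a message with one (fake) audio attachment and extract it.
msg = EmailMessage()
msg["Subject"] = "call recording"
msg.set_content("recording attached")
msg.add_attachment(b"\x00fake-audio-bytes", maintype="audio", subtype="mpeg",
                   filename="call.mp3")
audio = extract_audio_attachments(msg.as_bytes())
```

In a deployed version, `raw_bytes` would come from polling the inbox (e.g., via IMAP), and each extracted payload would be forwarded to whatever transcription service the project settles on.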
plex: Michael Trazzi would be a good person to talk to (he's already hiring people to edit Otter transcripts), as would Daniel Filan.
AI Safety PhD cohorts
The idea is to collect and connect everyone interested in AIS who is applying for a PhD each year.
Find all aspiring AIS researchers who want to start a PhD in 2024. Put them in the same Slack or Discord. Organize the effort to look into different programs and share information with each other. Those who want to can coordinate where they apply so they end up at the same university, and thus be sure to have at least one other AI safety researcher around.
Our epistemics and mutual understanding could be improved with regular debates/adversarial collaborations between alignment researchers who disagree on particular topics.
I'm imagining something similar to
but in audio (and video) format, and open to people's suggestions about whom we should pit against whom and what topics we'd like them to discuss.
Spencer Greenberg’s podcast has several episodes that can serve as examples: