stampy.ai is a single point of access for learning about AGI safety, created by Rob Miles's volunteer team. Accessibility testing volunteer wanted! Open
React/Remix (UI), Python (Discord bot), Coda + Google Docs (edit interface)
plex
EleutherAI is a grassroots collective of researchers working to open source AI research projects. They have many active alignment channels. Open
Needs a community manager. Open
WordPress plugins ( & ), community management
A feed for AI Safety content, personalized to optimize for intellectual growth. Should use sources from the alignment dataset. Open
TBD
plex
Research for Rob Miles videos
EigenTrust is a mechanism for scaling trust by allowing individuals to leverage their network's combined experience. First, peer vetting of alignment research contributions at scale. Then, the world! (A sketch of the core computation is below.) Open
Python (server and bot), Remix/TypeScript (website)
TJ
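The entry doesn't pin down an exact formulation, so as a rough illustration only: below is a minimal Python sketch of the standard EigenTrust computation (power iteration over a row-normalized local-trust matrix, mixed with a pre-trusted seed distribution). The function name, parameters, and toy data are illustrative, not part of the project.

import numpy as np

def eigentrust(local_trust, pre_trusted, alpha=0.15, tol=1e-8, max_iters=1000):
    """Global trust scores via EigenTrust-style power iteration.

    local_trust[i, j] is peer i's non-negative rating of peer j;
    pre_trusted is a probability distribution over trusted seed peers.
    """
    row_sums = local_trust.sum(axis=1, keepdims=True)
    # Row-normalize local trust; peers who have rated no one fall back to the seed distribution.
    C = np.where(row_sums > 0,
                 local_trust / np.where(row_sums > 0, row_sums, 1.0),
                 pre_trusted)
    t = pre_trusted.copy()
    for _ in range(max_iters):
        # Mix propagated trust with the seed distribution to damp collusion and aid convergence.
        t_next = (1 - alpha) * (C.T @ t) + alpha * pre_trusted
        if np.abs(t_next - t).sum() < tol:
            return t_next
        t = t_next
    return t

# Toy example: four contributors, contributor 0 is the pre-trusted seed.
ratings = np.array([[0., 3., 1., 0.],
                    [2., 0., 0., 1.],
                    [0., 1., 0., 0.],
                    [0., 0., 2., 0.]])
print(eigentrust(ratings, np.array([1., 0., 0., 0.])))  # global trust scores, summing to ~1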
Databases of research and project ideas towards alignment, with features for fostering collaboration and connecting teams. Open
Could someone make a launch page specifically for these entities, plus any I've missed, complete with small descriptions of their cruxes and aims?
Alignment research employers: , , , , , , , , , , , , , , ; Academic alignment research hubs: , , , ; Independent research funders: . Also would be sweet to have a feed of the and the . I have a very specific vision:
As professional-looking as the and the ; As cruxy and useful as ""; Targeted to people who specifically want to get employed/funded post MATS + Refine + ARENA + REMIX + MLAB + WMLB + PIBBSS (i.e., doesn't include random Discords or Slacks or programs graduates have exceeded); Ideally, this would happen by the first week of Feb (to capture MATS 3.0 alumni). Open
Softr + Airtable
hello@plex.ventures
The open-source codebase that the AI Alignment Forum, Effective Altruism Forum, and LessWrong run on, supporting alignment discussion. Open
Some approaches to solving alignment go through teaching ML systems about alignment and getting research assistance from them. Training ML systems needs data, but we might not have enough alignment research to sufficiently fine-tune our models, and we might miss out on many concepts which have not been written up. Furthermore, training on the final outputs (AF posts, papers, etc.) might be less effective at capturing the thought processes which go into hashing out an idea or poking holes in proposals, which would be the most useful for a research assistant to be skilled at. It might be significantly beneficial to capture many of the conversations between researchers, and use them to expand our dataset of alignment content to train models on. Additionally, some researchers may be fine with having some of their conversations available to the public, in case people want to do a deep dive into their models and research approaches. The two parts of the system which I'm currently imagining addressing this are:
An email address where audio files can be sent, automatically run through , and added to the .
Clear instructions for setting up a tool which captures audio from calls automatically (either a general tool or platform-specific advice), and makes it as easy as possible to send the right calls to the dataset platform. (A minimal pipeline sketch follows below.) Open
plex; Michael Trazzi would be a good person to talk to (he's already hiring people to edit Otter docs), and Daniel Filan
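The entry doesn't specify the transcription tool or the dataset endpoint, so the sketch below is only a rough illustration of the first part of the system: poll a mailbox for audio attachments, transcribe them locally (here assuming the open-source whisper package), and append transcripts to a local JSONL file standing in for the dataset platform. The mailbox host, credentials, and output path are placeholders; the second part (capturing call audio automatically) is platform-specific and not sketched here.

import email
import imaplib
import json
import pathlib
import tempfile

import whisper  # assumption: pip install openai-whisper

AUDIO_EXTENSIONS = {".mp3", ".m4a", ".wav", ".ogg"}
DATASET_PATH = pathlib.Path("alignment_conversations.jsonl")  # hypothetical stand-in for the dataset platform

def fetch_audio_attachments(host, user, password):
    """Yield (filename, bytes) for audio attachments found in unread messages."""
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            for part in msg.walk():
                name = part.get_filename()
                if name and pathlib.Path(name).suffix.lower() in AUDIO_EXTENSIONS:
                    yield name, part.get_payload(decode=True)

def transcribe_and_store(filename, audio_bytes, model):
    """Transcribe one audio file and append the transcript to the dataset file."""
    with tempfile.NamedTemporaryFile(suffix=pathlib.Path(filename).suffix) as tmp:
        tmp.write(audio_bytes)
        tmp.flush()
        result = model.transcribe(tmp.name)
    record = {"source_file": filename, "text": result["text"]}
    with DATASET_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    model = whisper.load_model("base")
    for name, payload in fetch_audio_attachments("imap.example.org", "audio@example.org", "app-password"):
        transcribe_and_store(name, payload, model)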
The idea is to collect and connect everyone interested in AIS who is applying for a PhD each year. E.g.: find all aspiring AIS researchers who want to start a PhD in 2024, put them in the same Slack or Discord, and organize the effort to look into different programs and share information with each other. Those who want to can coordinate where they apply so as to end up at the same university, and thus be sure to have at least one other AI Safety researcher around.
We have and a with logo ready for this. Open
Discord, Carrd
Enhancing AI safety ecosystem via debates
Our epistemics and mutual understanding could be improved with regular debates/adversarial collaborations between alignment researchers who disagree on particular topics.
I'm thinking of something similar to , but in audio (+ video) format, and I'm open to people's suggestions about whom we should “pitch against” whom and what topics we'd like them to discuss.
Spencer Greenberg’s podcast has several episodes that can serve as examples: . Open
bagginsmatthew@gmail.com
AGI Safety Fundamentals Video Playlist
YouTube Playlist
plex, ccstan99
Reading What We Can is a web page which has reading challenges designed to upskill people rapidly on alignment topics. Open
A living document listing AI Safety communities. Open
A living document listing AI Safety training programs, with dates and applications. Open
plex
A homepage for AI safety, linking out to the relevant parts of the ecosystem. Open
Alignment Project Factory