An active grassroots collective of researchers working to open source AI research, mainly on large language models, with a strong focus on alignment. Home of the best alignment memes channel on the internet, and twin to AI Safety startup
The Alignment Forum is a single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.
connects you with interesting collaborators worldwide to discuss and decide on a concrete research proposal, gear up online as a team, and try your hand at AI safety research during intensive coworking sprints.
An AI safety friend group that meets in VR. VR is peerless for distributed real-time audio conversations, making friends online, and drawing 3D diagrams.
I've checked "both" for the "online/in person" question, because VR makes online meetings feel like meeting in person, and that really does matter.
This is a discussion group about advances in artificial intelligence, and how to keep it robust and beneficial to humanity. This group is primarily for people who have experience in AI/ML and/or are familiar with AI safety. We encourage beginners to join the
This is a group for sharing content or asking questions relevant to effective altruism. Because the group is so large and active, we try to select the posts that we expect members will find most useful, interesting, and relevant.
are our attempt to transfer the benefits of startup equity to the nonprofit space – including seed investment, employee participation, price discovery, and more.
Alternatively, you can think of impact markets as a series of nested prize contests.
Our vision is one where there is a separation of concerns between philanthropic funders that research what outcomes the world needs, charities that produce these outcomes as efficiently as possible, and investors who help the charities succeed.
Impact markets are a powerful mechanism to realize this vision. We need to get them just right to harness their power. We invite you to learn more about our plans and weigh in yourself on Discord.
A place for collaboration, education, and conversation! We're very glad you can join us. This is a place to learn from each other and share what you know; to have your plans evaluated and find collaborators; and to contribute to something you find meaningful, or just personally useful. It's a place to work together on art, music, science, engineering, and more. All are welcome, as long as you want to learn from or with others and to meet interesting people. Here, we aim to treat critiques as information offered to make us better. We offer opportunities to others, but we do not press: it is completely fine both to ask someone if they are interested in doing something and for them to say no. You have no obligations here, only opportunities. Feel free to learn or contribute as much or as little as you'd like!
This is a science- and engineering-based forum created to discuss the various aspects of AI and AGI safety. Topics may include the research, design, development, and implementation of AI systems. Regulation, legislation, and ethics, as well as technological unemployment, can all be discussed.
Discussions of AI alignment, with separate channels for different technical topics and a channel for asking math questions. An invite is required to join.
Alignment Ecosystem Slack
Invite-only Slack for people actively working on ecosystem and field-building projects. Contact
The Oxford AI Safety Reading Group runs two events, and was formed by merging the FHI Alignment Newsletter Reading Group and the Oxford Introductory AI Safety Reading Group. The first event is a weekly hour-long session (Thursday 12pm-1pm) in which we read the Alignment Newsletter together and discuss the summaries therein; no preparation is required. The second event is a fortnightly hour-long session (Tuesday 12pm-1pm) in which we select a paper to read in advance and then come together to discuss thoughts, uncertainties, and related ideas. The group is aimed at those in Oxford with a technical background who are conducting (or are interested in conducting) AI safety research, though anyone is welcome to join.
brings together undergraduates, postgraduates, and researchers to explore AI safety research aimed at reducing catastrophic risk from advanced AI systems. We host talks and Q&As from AI safety and governance researchers, run reading groups on the fundamentals of AI safety and governance, and conduct AI safety research with mentorship from researchers.
Join fellow Cambridge students, researchers, and professionals to chat about key concepts and new research in AI safety. We meet weekly during term for either a discussion group, an invited talk, or a social. Find out more (and sign up to our mailing list) via
SAIA’s mission is to accelerate students into highly-impactful careers in AI safety, build the AI alignment community at Stanford, and do excellent research that makes transformative artificial intelligence go well.