An active grassroots collective of researchers working to open source AI research, mainly on large language models, with a strong focus on alignment. Home of the best alignment memes channel on the internet, and twin to AI Safety startup
The Alignment Forum is a single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.
Taskforce creating infrastructure for the alignment ecosystem (like this document!) and connecting people who want to help with volunteering opportunities via monthly project pitch calls.
12. Cyborgism (Discord): Open, Active, Main focus, Online
A place to discuss AI-assisted thought to accelerate alignment progress. Invite only.
Shared AI Safety channel for organizations and individuals around London. Invite only; message Maris (maris@conjecture.dev) describing your involvement in AI Safety to join.
The AI Safety Initiative Groningen provides students and scholars with a platform to learn about and discuss potential risks from advanced artificial intelligence. At AISIG, we host workshops, talks, hackathons, and the AGI Safety Fundamentals course by BlueDot Impact to educate and spark discussions on this important topic.
The scene in London is small but growing, and most people haven't had much chance to meet and socialise with people from different labs. This event will have a relaxed setting where you'll have the chance to propose discussions around your area of interest unconference-style, but the emphasis is on putting names to faces and making friends rather than a formal conference.
Are you one of the 20,000+ EAs who want to contribute but don't know how? Superlinear is for you: 1) Win prizes to make money and do good. 2) Submit your prize idea - we may fund it!
connects you with interesting collaborators worldwide to discuss and decide on a concrete research proposal, gear up online as a team, and try your hand at AI safety research during intensive coworking sprints.
An AI safety friend group that meets in VR. VR is peerless for distributed real-time audio conversations, making friends online, and drawing 3D diagrams.
I've checked "both" for the "online/in person" question, because VR makes online meetings feel like meeting in person, and that is actually quite serious.
This is a discussion group about advances in artificial intelligence, and how to keep it robust and beneficial to humanity. This group is primarily for people who have experience in AI/ML and/or are familiar with AI safety. We encourage beginners to join the
This is a group for sharing content or asking questions relevant to effective altruism. Because the group is so large and active, we try to select the posts that we expect members will find most useful, interesting, and relevant.
Impact markets are our attempt to transfer the benefits of startup equity to the nonprofit space – including seed investment, employee participation, price discovery, and more.
Alternatively, you can think of impact markets as a series of nested prize contests.
Our vision is one where there is a separation of concerns between philanthropic funders that research what outcomes the world needs, charities that produce these outcomes as efficiently as possible, and investors who help the charities succeed.
Impact markets are a powerful mechanism to realize this vision. We need to get them just right to harness their power. We invite you to learn more about our plans and weigh in yourself on Discord.
A place for collaboration, education, and conversation! We're very glad you can join us! This is a place to learn from each other and share what you know. It's a place to have your plans evaluated and to find collaborators, to contribute to something you find meaningful or personally useful, and to work together on art, music, science, engineering, and more. All are welcome, as long as you want to learn from or with others and to meet interesting people. Here, we aim to take critiques as information offered to make us better. We offer opportunities to others, but we do not press: asking someone if they are interested in doing something is fine, and so is saying no. You have no obligations here, only opportunities. Feel free to learn or contribute as much or as little as you'd like!
Usually, we start with small talk and a round of introductions, and then the host gives a roughly 20-minute summary of the paper. The summary of the article is uploaded to our
This is a science- and engineering-based forum created to discuss the various aspects of AI and AGI safety. Topics may include the research, design, development, and implementation of AI systems, as well as regulation, legislation, ethics, and technological unemployment.
50. MIRIx Discord (Discord): Open, Slightly active, Main focus, Online
Discussions of AI alignment with separate channels for different technical topics. There is also a channel for asking math questions. An invite is necessary to join.
51. Alignment Ecosystem Slack (Slack): Open, Active, Main focus, Online
Invite-only Slack for people actively working on ecosystem and field-building projects. Contact
An effort to prepare the alignment community for severe catastrophes other than AI, so that we can preserve and continue research in as many timelines as possible.
The Oxford AI Safety Reading Group runs two events, and was formed by merging the FHI Alignment Newsletter Reading Group and the Oxford Introductory AI Safety Reading Group. The first event is a weekly hour-long session (Thursday 12pm-1pm) in which we read the Alignment Newsletter together and discuss the summaries therein; no preparation is required. The second event is a fortnightly hour-long session (Tuesday 12pm-1pm) in which we select a paper to read in advance and then come together to discuss thoughts, uncertainties, and related ideas. The group is aimed at those in Oxford with a technical background who are conducting (or are interested in conducting) AI safety research, though anyone is welcome to join.
The goal of this group is to provide a space for people to better coordinate on accelerating alignment research, slowing progress in AI capabilities, and finding new tactics to reduce x-risk from AI.
brings together undergraduates, postgraduates, and researchers to explore AI safety research aimed at reducing catastrophic risk from advanced AI systems. We host talks and Q&As from AI safety and governance researchers, run reading groups on the fundamentals of AI safety and governance, and conduct AI safety research with mentorship from researchers.
Join fellow Cambridge students, researchers and professionals to chat about key concepts and new research in AI safety. We meet weekly during term for either a discussion group, invited talk or social. Find out more (and sign up to our mailing list) via
BAISSI is a student organization at UC Berkeley that aims to foster a vibrant, collaborative AI Safety/Alignment community, supporting students in pursuing high-impact AI Safety careers and research.
SAIA’s mission is to accelerate students into highly-impactful careers in AI safety, build the AI alignment community at Stanford, and do excellent research that makes transformative artificial intelligence go well.
CASI supports students at CMU in learning about AI safety and acquiring the skills and experience needed to contribute to this emerging field.