Alignment Ecosystem Development
aisafety.community
Feel free to improve and add to this living document of communities working on AI existential safety!
AI Safety Communities
Community
Platform
Description
Status
AI Safety Focus
Type
Slack
A big community of people interested in AI alignment, with channels ranging from general topics to specific fields to local groups.
Open
Lively
Main focus
Online
Twitter
Twitter list of thinkers in AI with a focus on AI/ML Safety and Alignment.
Open
Lively
Main focus
Online
Discord
An active grassroots collective of researchers working to open source AI research, mainly on large language models, with a strong focus on alignment. Home of the best alignment memes channel on the internet, and twin to AI Safety startup .
Open
Lively
Main focus
Online
Forum Magnum
Online forum focused on rationality, and the main forum for AI safety content.
Open
Lively
Partial focus
Online
Gather Town
Online social coworking space that also has a Discord server with a sub-channel discussing AI.
Open
Active
Partial focus
Online
Forum Magnum
The Alignment Forum is a single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.
Open
Lively
Main focus
Online
Slack
Coordination group for EU AI Safety research, events, and upskilling.
Website, announcement on the EA Forum, and sign-up link available.
Open
Active
Main focus
Online
In person
Slack
A Slack group for people who are interested in working in AI safety, and would like extra accountability and motivation while they pursue their goals.
Open
Active
Main focus
Online
Discord
Discord server for the community participating in the hackathons.
Open
Active
Main focus
Online
Forum Magnum
Online forum that discusses Effective Altruism. Some AI safety posts are posted here, some of which are cross-posted to LessWrong.
Open
Lively
Partial focus
Online
Discord
Taskforce creating infrastructure for the alignment ecosystem (like this document!), and connecting people who want to help with volunteering opportunities in monthly project pitch calls.
Open
Active
Main focus
Online
Cyborgism
Discord
A place to discuss AI-assisted thought to accelerate alignment progress. Invite only.

Open
Active
Main focus
Online
Slack
Shared channel for AI Safety for organizations and individuals around London. Invite only, message Maris (maris@conjecture.dev) with your involvement in AI Safety to join.
Open
Lively
Main focus
In person
Online
Discord
Public AI safety discussion chat server focused on answering questions for .
Open
Active
Main focus
Online
Discord
A group for collaborating on mechanistic interpretability projects.
Open
Active
Main focus
Online
Mailing list
We’re Japan’s #1 AI safety community. We hold a study session every week and a big social every month. Please see our site for details!
Open
Lively
Main focus
In person
Local
A central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity.
Open
Active
Main focus
In person
Local
Newsletter
Other
Mailing list
The AI Safety Initiative Groningen provides students and scholars with a platform to learn about and discuss potential risks from advanced artificial intelligence. At AISIG, we host workshops, talks, hackathons, and the AGI Safety Fundamentals course by BlueDot Impact to educate and spark discussions on this important topic.
Open
Lively
Main focus
In person
Local
A meetup for alignment researchers in London, UK:
The scene in London is small but growing, and most people haven't had much chance to meet and socialise with people from different labs. This event will have a relaxed setting where you'll have the chance to propose discussions around your area of interest unconference-style, but the emphasis is on putting names to faces and making friends rather than a formal conference.
Open
Lively
Main focus
In person
Slack
A Slack for people interested in slowing down or pausing the training of giant language models.
Open
Lively
Main focus
Online
Other
A platform for people to self-study AI safety training courses with peer cohorts. Platform built, but needs a community manager for full launch.
Open
Slightly active
Main focus
Online
Discord
Are you one of the 20,000+ EAs who want to contribute but don't know how? Superlinear is for you: 1) Win prizes to make money and do good. 2) Submit your prize idea - we may fund it!
Open
Active
Main focus
Online
Facebook
Connects you with interesting collaborators worldwide to discuss and decide on a concrete research proposal, gear up online as a team, and try your hand at AI safety research during intensive coworking sprints.
Open
Slightly active
Main focus
In person
Online
Discord
For discussions on .
Open
Active
Main focus
Online
Reddit
For discussions on the alignment problem (not limited to the narrow sense of the control problem)
Open
Active
Main focus
Online
Facebook
This group is for people who are interested in careers in AI Safety
Open
Active
Main focus
Online
AGI Safety Fundamentals
Slack
For participants and facilitators of the AGI Safety Fundamentals program organized by BlueDot Impact. https://www.agisafetyfundamentals.com/
Open
Active
Main focus
Online
Discord
📃 What to do on the server 
1. 🤔 Discuss AI safety and the Apart projects in the Discussion category!
2. 👩‍💻 Perform your own independent research in AI safety with ideas from , and discussions and mentorship with our core team and the members here
3. 👨‍🔬 Join some of our projects under the Research Projects category like safety-timelines and empathetic-ai 
4. 🌍 Add to the open source projects we maintain like
5. 🔗 Visit our research facilitation efforts on , , and
Open
Active
Main focus
Online
Discord
A server for participants to discuss contest problems, form research teams, and chat about AI alignment.
Open
Slightly active
Main focus
Online
Facebook
This group is for people in Australia or New Zealand who are interested in preventing existential risk from AI.
Open
Active
Main focus
In person
Online
Discord
An AI safety friend group that meets in VR. VR is peerless for distributed realtime audio conversations, making friends online, and drawing 3D diagrams. I’ve checked "both" for the "online/in person" question, because VR makes online meetings feel like meeting in person, and that is actually quite serious.
Open
Active
Partial focus
In person
Online
Facebook
Discussion group for the AI Safety community in Denmark.
Open
Active
Main focus
Online
In person
Facebook
This is a discussion group about advances in artificial intelligence, and how to keep it robust and beneficial to humanity. This group is primarily for people who have experience in AI/ML and/or are familiar with AI safety. Beginners are encouraged to join the open discussion group instead.
Open
Active
Main focus
Online
Facebook
This is a discussion group about advances in artificial intelligence, and how to keep it robust and beneficial to humanity.
Open
Slightly active
Main focus
Online
Telegram
Telegram group for discussion on AI Safety research
Open
Lively
Main focus
Online
Slack
The Canada AI Governance and Safety network does field-building and advocacy in Canada. Anyone interested in AI alignment in Canada is welcome to join. More info at https://aigs.ca.
Open
Active
Main focus
Online
In person
Discord
Huge community of EAs on Discord, with no particular channel specifically on AI safety.
Open
Lively
Adjacent
Online
Local
Database of EA local groups (not necessarily focused on AI safety)
Open
Lively
Partial focus
In person
Slack
Slack for EA groups with an active AI Safety fieldbuilding channel (join via add channels).
Open
Active
Partial focus
Online
Facebook
This is a group for sharing content or asking questions relevant to effective altruism. Because the group is so large and active, the admins try to select the posts that we expect members will find most useful, interesting, and relevant.
Open
Lively
Partial focus
Online
Discord
For discussions on the YouTube channel Rational Animations.
Open
Lively
Adjacent
Online
Discord
Impact markets are our attempt to transfer the benefits of startup equity to the nonprofit space – including seed investment, employee participation, price discovery, and more.
Alternatively, you can think of impact markets as a series of nested prize contests.
Our vision is one where there is a separation of concerns between philanthropic funders that research what outcomes the world needs, charities that produce these outcomes as efficiently as possible, and investors who help the charities succeed.
Impact markets are a powerful mechanism to realize this vision. We need to get them just right to harness their power. We invite you to learn more about our plans and weigh in yourself on Discord.
Open
Active
Adjacent
Online
Discord
A place for collaboration, education, and conversation! We're very glad you can join us! This is a place to learn from each other and share what you know. It's a place to have your plans evaluated and to find collaborators, or to contribute to something you find that may be meaningful, or just personally useful. It's a place to work together on art, music, science or engineering and more. All are welcome, as long as you want to learn from or with others, and to meet interesting people. In this place, we aim to accept critiques as information given with the aim to make us better somehow. We offer opportunities to others, but we do not press. Both being asked if interested in doing something and the other person saying no are completely fine. You have no obligations here, only opportunities. Feel free to learn or contribute as much or as little as you'd like!
Open
Active
Adjacent
Online
Skype
Usually, we start with small talk and a round of introductions, then the host gives a summary of the paper for roughly 20 minutes. The summary of the article is uploaded on our . This is followed by discussion (both on the article and in general), and finally we decide on a paper to read the following week.
Join us on Skype by adding ‘soeren.elverlin’. Also check out our AI Safety Danmark group.

Open
Active
Main focus
Online
Slack
Spanish EA and Rationalist Slack
Open
Slightly active
Partial focus
Online
Discord
French AI Safety Discord (Discord français sur la sûreté de l’IA).
Open
Active
Main focus
Online
Slack
Slack for Creatives and Communicators in EA.
Open
Active
Adjacent
Online
Discord
Discord for Software Engineers in EA.
Open
Active
Adjacent
Online
Facebook
This is a science and engineering based forum created to discuss the various aspects of AI and AGI safety. Topics may include research, design, development and implementation of AI systems. Regulation, legislation, ethics as well as technological unemployment can all be discussed.
Open
Slightly active
Main focus
Online
MIRIx Discord
Discord
Discussions of AI alignment with separate channels for different technical topics. There is also a channel for asking math questions. An invite is necessary to join.
Open
Active
Main focus
Online
Alignment Ecosystem Slack
Slack
Invite-only Slack for people actively working on ecosystem and field-building projects. Contact if you think you should be there.
Open
Slightly active
Main focus
Online
SERI Conference 2022
Slack
Slack workspace created for the SERI Conference 2022, with various channels for different x-risk cause areas. Also hosts the London AI Safety hub.
Open
Slightly active
Partial focus
In person
Online
Discord
Discord for MIRI’s $1m .
Open
Slightly active
Main focus
Online
Slack
Slack for EAs in the East Bay Area.
Open
Slightly active
Partial focus
In person
Online
Discord
An effort to prepare the alignment community for severe catastrophes other than AI, so that we can preserve and continue research in as many timelines as possible.
Open
Slightly active
Adjacent
Online
Facebook
The Oxford AI Safety Reading Group runs two events, and was formed by merging the FHI Alignment Newsletter Reading Group and the Oxford Introductory AI Safety Reading Group. The first event is a weekly hour-long session (Thursday 12pm-1pm) in which we read the Alignment Newsletter together and discuss the summaries therein; no preparation is required. The second event is a fortnightly hour-long session (Tuesday 12pm-1pm) in which we select a paper to read in advance and then come together to discuss thoughts, uncertainties, and related ideas. The group is aimed at those in Oxford with a technical background who are conducting (or are interested in conducting) AI safety research, though anyone is welcome to join.
Open
Inactive
Main focus
In person
Online
Telegram
Russian-language AGI safety discussion group on Telegram.
Open
Lively
Main focus
Online
Local
Other
We are a nonprofit California public benefit corporation dedicated to building trust and ethical alignment within AI.
Open
Lively
Main focus
Online
In person
Berkeley AI Safety Initiative for Students (BASIS)
Discord
Mailing list
University group for UC Berkeley.
Open
Active
Main focus
In person
Online
Discord
Works towards a pause on developing AI systems more powerful than GPT-4, through lobbying, organising protests, and drafting policy proposals.
Open
Lively
Main focus
Online
Discord
Other
A volunteer advocacy / outreach group coming up with and pursuing strategies to stimulate the public discourse on AI safety and existential risk.
Open
Active
Main focus
Online
Discord
The goal of this group is to provide a space for people to better coordinate on accelerating the speed of alignment research, slowing progress in AI capabilities, and finding new tactics to reduce X-Risk from AI.
Open
Active
Main focus
Online

See for the most up-to-date collection of student entries.
University groups
Community
Platform
Website
Mailing list
Calendar
Contact
Description
AI Safety Focus
Local
Facebook
This group brings together undergraduates, postgraduates, and researchers to explore AI safety research aimed at reducing catastrophic risk from advanced AI systems. We host talks and Q&As with AI safety and governance researchers, run reading groups on the fundamentals of AI safety and governance, and conduct AI safety research with mentorship from researchers.
Open
Main focus
Facebook
Local
Join fellow Cambridge students, researchers and professionals to chat about key concepts and new research in AI safety. We meet weekly during term for either a discussion group, invited talk or social. Find out more (and sign up to our mailing list) via .
Open
Main focus
Harvard AI Safety Team
Local
A research-focused student group aimed at encouraging Harvard undergrads to pursue careers in technical AI safety.
Open
Main focus
Discord
Mailing list
Local
A community of people interested in ensuring that the development of artificial intelligence benefits humanity’s long-term future.

Open
Main focus
Discord
Slack
Local
An academic community working to ensure artificial intelligence is developed to the benefit of our future. Also has a .

Open
Main focus
Safe AI London (SAIL)
Newsletter
Local
Safe AI London supports individuals in London that are interested in reducing the risks posed from advanced artificial intelligence.

Open
Main focus
Discord
Local
BAISSI is a student organization at UC Berkeley that aims to foster a vibrant, collaborative AI Safety/Alignment community, supporting students in pursuing high-impact AI Safety careers and research.
Open
Main focus
Discord
Local
Warwick AI student group.
Open
Adjacent
Stanford AI Alignment
Mailing list
Local
SAIA’s mission is to accelerate students into highly-impactful careers in AI safety, build the AI alignment community at Stanford, and do excellent research that makes transformative artificial intelligence go well.
Open
Main focus
Discord
Newsletter
Local
A research-focused club which strives to provide students with the tools and guidance to tackle the problems that interest them.
Open
Main focus
Princeton AI Alignment
Local
An academic community working to ensure artificial intelligence is developed to the benefit of humanity's future.
Open
Main focus
MIT AI Alignment
Local
A student group conducting research to ensure AI development happens safely.
Open
Main focus
Discord
Local
A student org focused on AI safety.

Open
Main focus
Discord
Local
CASI supports students at CMU in learning about AI safety and acquiring the right skills and experiences to be able to contribute to this emerging field.
Open
Main focus