Unjournal Budget - Public

Projects for Additional Funding

Evaluating further research relevant to AI safety and AI governance

Result Goals

Offering high-quality, public evaluations of potentially impactful research relevant to AI safety and governance (focusing on economics, quantitative social science, policy, prediction, and impact measurement)
Offering researchers a rigorous, impact-focused alternative to traditional academic peer review: systematic, public, quantitative expert rating and evaluation.
Building The Unjournal’s AI safety, AI governance, and GCR/X-risk teams, increasing our engagement and coverage in these areas and our coverage of impact.

Procedural Goals

Evaluating more papers and research projects in areas relevant to AI safety.
Expanding the pool of field specialists working with The Unjournal who can contribute to this expansion.
Recruiting one or more management board members with a background in AI safety.

Minimum necessary funding - $27,000
Comfortable funding level - $300,000
Maximum funding - $1,000,000 per year for the next 3 years
Approximate basic costs (amounts required):
Per extra field specialist on the team, recruitment and one year
Additional management board member
Per additional paper evaluated

Minimum necessary funding

Evaluating one additional paper or project costs about $2,700 once all staff time is accounted for. This includes time spent prioritizing the work (~$500), engaging with authors (~$100), recruiting and instructing an evaluation manager (~$200), paying the manager for their time (~$300), compensating and incentivizing two evaluators (2 × ~$500), managing the process and pipeline (~$200), curating, posting, and promoting the output (~$250), and average prizes for authors (~$150).
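As a quick arithmetic check, the per-paper cost components listed above sum to the stated ~$2,700 (a minimal sketch; the labels are paraphrased from the breakdown):

```python
# Approximate per-paper evaluation cost components (USD), from the breakdown above
costs = {
    "prioritizing the work": 500,
    "engaging with authors": 100,
    "recruiting/instructing an evaluation manager": 200,
    "evaluation manager time": 300,
    "two evaluators": 2 * 500,
    "process and pipeline management": 200,
    "curating, posting, and promoting output": 250,
    "average author prizes": 150,
}
total = sum(costs.values())
print(total)  # 2700
```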

Comfortable funding level

$300,000 would enable us to grow our AI-focused teams to the size of our other groups, targeting about 40-50 papers/projects in this area over roughly 12 months.
~50 × $2,700 = $135,000 for evaluating these papers
$35,000 for building and strengthening the field specialist and management teams in these areas
$130,000 to cover half of the overhead that keeps us sustainable for about 12 additional months (Director salary, Operations Lead salary, tech support, software and platform costs)
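The three components above sum to the $300,000 comfortable funding level (a minimal arithmetic sketch; the ~50-paper evaluation figure is approximate):

```python
# Rough allocation of the $300,000 "comfortable" funding level (USD),
# using the component figures stated above.
paper_evaluations = 135_000  # ~50 papers/projects at ~$2,700 each
team_building = 35_000       # field specialist and management teams
overhead_share = 130_000     # half of ~12 months of core overhead
total = paper_evaluations + team_building + overhead_share
print(total)  # 300000
```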

Maximum funding

$750,000 per year for the next 3 years, given reasonable limits on our growth. This would include:
An expanded agenda (augmenting the AI-linked work to ~70 projects per year, and maintaining the non-AI target of 70 per year)
Hosting and engaging more actively with in-person (and robust online) conferences, seminar series, retreats, and hackathons, including in academia, EA, tech, etc.
Holding training sessions and building educational workshops and offerings (such as a MOOC)
UX testing, calibrating, and improving our interfaces
Engaging with replication exercises
Integrating and engaging with prediction markets
Funding meta-researchers and PhD student projects
Enabling and assisting spin-off initiatives in areas such as technical AI interpretability and legal scholarship on regulating AI, sharing our tools and protocols.

What would we likely do if we do not get funds for this project?

Currently, we still have approximately $400,000 USD in funding, which is projected to be enough for the next year with some surplus. We will apply to other EA-related and non-EA granting bodies and forge stronger connections with GiveWell.
We are very likely to continue operating as The Unjournal even without this funding, but we are quite unlikely to be able to expand our program into the AI safety field without it.
