Seminar · Fellowship · Research

Study, Explain, and Try to Solve
Superintelligence Alignment

AFFINE is an intensive program designed to give promising newcomers the opportunity to acquire a deep understanding of the core problems of superintelligence alignment, equipping them to work on mitigating AI existential risk.

Applications close 8 March 2026

About the Program

Tackling the neglected core of AI safety

ASI breaking out of human control and pursuing ends misaligned with human flourishing is a central form of catastrophic and existential risk. Despite this, research aimed at solving the ASI alignment problem is systematically neglected within the broader AI safety ecosystem.

Our goal is to close this gap. We provide an intensive learning environment where participants engage with foundational materials, think deeply about hard problems, and solidify their understanding through peer teaching and collaboration with world-class mentors.

1-Month Seminar

Intensive upskilling in superintelligence alignment at Hostačov Château, Czechia. 28 April – 28 May 2026.

Year-Long Fellowship

Extended program for ~10 top performers selected for collaborative excellence, not competition.

Expert Mentorship

Direct access to researchers from MIRI, DeepMind, Astera Institute, and Mila. Office hours, 1-on-1s, public debates.

World-Class Mentors

Learn from leading alignment researchers

Our mentors represent the frontier of AI alignment research—from foundational theory at MIRI to safety work at DeepMind, Anthropic, and Astera Institute.

Abram Demski

Independent · Ex-MIRI

Co-author of Embedded Agency. Decision theory, logical uncertainty, agent foundations.

Steve Byrnes

Astera Institute

Author of Intro to Brain-Like AGI Safety. UC Berkeley PhD, Harvard postdoc.

Ramana Kumar

Google DeepMind

AI Safety researcher. Self-referential reasoning, agent incentives, formal verification.

Kaj Sotala

Ex-MIRI · CLR

Author of Multiagent Models of Mind. AGI risk, consciousness research.

Kaarel Hänni

Safe AI for Humanity · Mila

Mathematical foundations of superposition and interpretability.

Linda Linsefors

AI Safety Camp

Physicist. Superposition, interpretability, neural network theory.

MIRI · DeepMind · Astera · Mila · Cambridge · UC Berkeley · Harvard

Program Details

Building expertise in AI alignment research

Dates: 28 April – 28 May 2026
Location: Hostačov, Czechia 🇨🇿
Positions: 30 participants
Cost: Free (accommodation & catering covered)
Application deadline: 8 March 2026

The AFFINE Superintelligence Alignment Seminar is a one-month intensive designed to give promising newcomers the opportunity to acquire a deep understanding of the core problems of superintelligence alignment.

ASI breaking out of human control and pursuing misaligned ends is a central form of existential risk. Despite this, research aimed at solving the ASI alignment problem is systematically neglected. Our goal is to close this gap and give more people the prerequisites to tackle these core problems.

The problem is very difficult, so we focus on learning, distillation, and debate/epistemics practice for the full month, rather than trying to produce novel research. Participants come to understand topics by reading materials, thinking, and discussing with peers and mentors, and they solidify that understanding through peer teaching: 1-on-1, in lectures, or in written materials.

Confirmed Mentors

Abram Demski · Ramana Kumar · Steve Byrnes · Kaj Sotala · Kaarel Hänni · Jonas Hallgren · Ouro · Cole Wyeth · Aram Ebtekar · Elliot Thornley · Linda Linsefors · Paul 'Lorxus' Rapoport · and more

Deep Focus

The Czech countryside setting removes urban distractions while providing space for focused solo work and spontaneous collaboration.

Sustainable Pace

Program rhythm alternates between intensive technical engagement and explicit recovery time, preventing burnout.

Path to Fellowship

If funding allows, we will extend the program into a full-year fellowship for the ~10 most promising candidates, selected for collaborative excellence.

For exceptional applicants, we are open to participation for part of the duration (2–3 weeks) or fully remote participation. On-site participation requires full-time engagement and cannot be combined with other employment.

The AFFINE Fellowship is a year-long extension of the Seminar, offered to the ~10 most promising participants. Selection happens because of collaborative excellence, not despite it—we're looking for participants who help others learn, integrate across disciplines, and build rather than hoard knowledge.

The goal extends beyond producing ten individual researchers to creating a cohesive network that continues collaborating after the month ends, whether at CEEALAR or elsewhere. To encourage ambitious approaches that are not guaranteed to work, we do not expect novel, promising research outputs within the one-year time frame, though they would be a very welcome surprise.

We actively cultivate a collaborative spirit, even though competition for the extended Fellowship positions could otherwise create adversarial dynamics.

Collaborative Selection

Fellows are selected partly based on how well they collaborated with other participants during the Seminar.

Long-term Support

Full-year structure provides the time needed for deep engagement with difficult problems without pressure for quick outputs.

Network Building

Create lasting connections with fellow researchers and mentors across the AI safety ecosystem.

Fellowship extension is contingent on additional funding. We are applying for funding to cover stipends for those who would benefit from them.

Vision: A community of researchers where fruitful ideas thrive. The role of the mentor is to help fellows learn, think, and research—in particular, to think with them in a way that makes them better learners, thinkers, and researchers in the long run.

Our ideal is that the mentor-mentee relationship be many-to-many, rather than the typical one-to-one. We'd like much of the communication to go toward building connections between mentors, so that the differences, similarities, and bridges between their ways of thinking can be explored and explicated in the open. This broadens mentees' exposure to a range of potentially valuable ideas.

Degrees of Involvement

Minimal

Drop by a dedicated Discord channel periodically to answer fellows' questions.

Recurrent Interlocutor

More extensive back-and-forth with fellows. Interest in working in public, using fellows as thinking partners.

Visiting Speaker

Give talks on focus areas, remotely or on-site. Pre-recorded talks with live Q&A welcome.

Specifically Committed

Closer, more regular mentorship of one or more fellows for an extended period.

Mentorship Activities

1-on-1 tutoring · Explaining confusing topics · Office hours · Research collaboration · Talks & lectures · Workshops · Rationality exercises · Feedback on written work · Public debates · Panels & conversations · Double-crux sessions

Full-time on-site mentors spend most of the Seminar at Hostačov Château and are available to fellows for discussion throughout the program.

Our People

The team behind AFFINE

Core Team

Mateusz Bagiński

Founder

Program director and organizer

Full-Time On-Site Mentors

Ouro

On-Site Mentor

Ex-Orthogonal

Former researcher at Orthogonal, focused on agent foundations and alignment theory.

Jonas Hallgren

On-Site Mentor

Equilibria Network

Researcher working on coordination problems and game-theoretic approaches to AI alignment.

Research Mentors

Our mentors represent leading voices across AI alignment research—from foundational theory at MIRI to frontier safety work at DeepMind, Anthropic, and Astera Institute.

Abram Demski

Mentor

Independent · Ex-MIRI

Former MIRI Agent Foundations researcher. Co-author of Embedded Agency. Works on logical uncertainty, decision theory, and the foundations of agency. One of the leading theorists in deconfusion research.

Ramana Kumar

Mentor

Google DeepMind

AI Safety researcher at DeepMind. PhD in formal verification and theorem proving from Cambridge. Works on self-referential reasoning, side effects, and agent incentives. FLI grant recipient.

Steve Byrnes

Mentor

Astera Institute

Research Fellow at Astera. UC Berkeley PhD (Physics), Harvard postdoc. Author of the influential Intro to Brain-Like AGI Safety sequence. Works on neuroscience-informed alignment for next-gen AI architectures.

Kaj Sotala

Mentor

Ex-MIRI · Center on Long-Term Risk

Former MIRI researcher. Published on AGI risk, AI timelines, and consciousness. Author of Multiagent Models of Mind sequence. Co-author of the comprehensive AGI risk survey with Roman Yampolskiy.

Kaarel Hänni

Mentor

Safe AI for Humanity · Mila

Research Scientist working on mathematical foundations of computation in superposition, interpretability, and neural network capacity. Background in combinatorics and algorithms.

Cole Wyeth

Mentor

Independent Researcher

Independent alignment researcher working on theoretical foundations and deconfusion.

Aram Ebtekar

Mentor

Independent Researcher

Researcher focused on agent foundations and alignment theory.

Elliot Thornley

Mentor

Independent Researcher

Works on philosophical and theoretical aspects of AI alignment and decision theory.

Linda Linsefors

Mentor

AI Safety Camp

Physicist and AI safety researcher. Research Coordinator at AI Safety Camp. Works on superposition, interpretability, and neural network theory. Experienced program organizer.

Paul 'Lorxus' Rapoport

Mentor

Independent Researcher

Works on mathematical foundations of alignment and category-theoretic approaches to AI safety.

Our mentors have worked at

MIRI · DeepMind · Astera Institute · Mila · Cambridge · UC Berkeley · Harvard

Get in Touch

We'd love to hear from you

Whether you're interested in applying to our programs, exploring partnership opportunities, or simply learning more about AI alignment research, we're happy to connect.

For application inquiries, please include relevant background information and your areas of interest in AI safety research.