AI Alignment @ UW

We're a student group at the University of Washington working to reduce risks from advanced AI.

We run a quarterly research program in which members conduct technical AI safety research in areas including mechanistic interpretability and machine honesty. We will be working closely with the EleutherAI interpretability team.

We also run a beginner-friendly reading group covering foundational topics in AI safety.

Express interest here, and join our Discord at discord.gg/m7Egw5uc8.

Feel free to send questions to dury@uw.edu.