The AI Safety Institute Brings People Together — Don’t Let Politics Tear It Apart
In his 2024 platform, sandwiched between sections on immigration and lowering healthcare costs, Donald Trump vows to repeal President Joe Biden's 2023 Executive Order on AI.
Repealing the order would be largely symbolic: the reports, research, and guidance that made up much of it can't be unwritten, and many large AI companies (including OpenAI, Meta, Anthropic, and Google) have voluntarily committed to sharing safety-test results with the government. But it would have one serious consequence: dissolving the US AI Safety Institute (AISI).
Housed within the National Institute of Standards and Technology (NIST), AISI is charged with balancing AI innovation against safety concerns, creating a safe, business-friendly environment that doesn't stifle progress.
It's the hub of the government's cutting-edge AI safety research and a forum where experts, stakeholders, and countries collaborate on safety initiatives. It sets non-binding standards for AI safety, ensuring that everybody in the industry is on the same page. It provides technical guidance, helping AI companies protect their intellectual property and AI model weights from theft.
And perhaps most crucially, the Institute is the only organization that actively evaluates next-generation AI models prior to deployment. This access goes far beyond the shared safety-test results mentioned above: before AISI's agreements with cutting-edge AI companies, there was "only one recent instance of a lab giving access to an [external] evaluator pre-deployment." Now researchers have unprecedented access to AI technology while it is still in development. This new level of access is a game-changer: it's the difference between evaluating a car prototype on its crash-test data and driving it yourself.
Regardless of where you fall on the spectrum between “superintelligent AI will turn us all into paperclips” and “AI is just math,” there are strong reasons to support the AISI. If you care about anti-conservative bias, anti-Black bias, or AI being used to create bioweapons, letting outside experts evaluate something so consequential is just common sense.
Dissolving AISI means losing pre-deployment access. It means blindly releasing potentially harmful AI systems into society. That would undermine public trust in new AI technologies and in the safety efforts around them, to say nothing of more direct harms.
So what can we do about it?
Leaving aside your vote in November, there are two bipartisan bills that need your urgent support: one in the House and one in the Senate. Each would codify AISI into law, protecting it from removal via executive order. If you think the AISI is valuable, write to your representative. Call your senators. Do your part to pass these bills.
The time is now. The AI landscape is evolving rapidly, and without the AISI, the U.S.'s safety efforts will fall behind. That's a setback we, and the rest of the world, can't afford.
If the above bills don’t pass, the future of the AI Safety Institute will be on the presidential ballot.
We can either lead in AI safety or be led by countries that may not share our values or priorities.
Let’s choose leadership. Let’s choose safety. Let’s choose to keep the AI Safety Institute intact and empower it to secure our AI-driven future.