The Trump Administration solicited proposals for its AI Action Plan, and a bunch of organizations submitted responses. I tried to read as many of them as possible; there are also great summaries from Just Security and CSET (which submitted a response of its own). Seriously, if you want more depth, read Just Security’s.
Regardless, my own notes:
BLUF — what (almost) everyone agrees on
- Energy & Infrastructure: Nearly all groups emphasized the urgent need to significantly expand power generation and streamline permitting for AI infrastructure, with Anthropic calling for 50 additional gigawatts by 2027, OpenAI proposing “AI Economic Zones,” and IFP recommending “Special Compute Zones.”
- Strengthening AISI: Multiple organizations (Anthropic, CSET, Google) support maintaining and strengthening the AI Safety Institute for security testing and evaluation, particularly for pre-deployment assessment of frontier models’ capabilities.
- Export Controls: There’s strong consensus around improving export control enforcement, especially preventing chip smuggling, though approaches vary from Anthropic’s government-to-government agreements to IFP’s conditional controls to CNAS’s quarterly reviews.
- Talent Immigration: CNAS and Google strongly emphasized immigration reforms to attract and retain top AI talent, with CNAS specifically recommending adding AI positions to the Schedule A list for expedited visa processing.
- Open-Source Strategy: CSET and CDT advocate for supporting open-source AI development, while CNAS highlights the need to counter China’s open-source strategy by offering democratic alternatives to models like DeepSeek.
Responses
- CSET strongly supports open-source AI models to foster broader innovation > The U.S. government should support the release of open-source AI models, datasets, and tools that can be used to fuel U.S. AI development, innovation, and economic growth. Open-source models and tools enable greater participation in the AI domain.
- They emphasize improving workforce development for AI talent > The U.S. government should increase funding for the federal National Apprenticeship system, with an emphasis on technical occupations and industry intermediaries.
- They recommend building robust AI security testing and evaluation > Empower AISI to develop quantitative benchmarks for AI, including benchmarks that test a model’s resistance to jailbreaks, usefulness for making CBRN weapons, and capacity for deception.
- They call for smarter, more evidence-based export controls > BIS should institute scenario planning assessments before implementing new export controls and rigorously monitor the effectiveness of current export control policies.
- They push for mandatory incident reporting for government AI systems > Implement a mandatory AI incident reporting regime for sensitive applications across federal agencies.
- OpenAI warns that DeepSeek demonstrates America’s narrowing AI lead > DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI.
- They advocate for massive investment in AI infrastructure > Hundreds of billions of dollars in global funds are waiting to be invested in AI infrastructure. If the US doesn’t move fast to channel these resources into projects that support democratic AI ecosystems around the world, the funds will flow to projects backed and shaped by the CCP.
- They propose creating special economic zones for AI development > The U.S. government should also institute ‘AI Economic Zones’ that speed up permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors.
- They emphasize the importance of fair use for AI training data > Applying the fair use doctrine to AI is not only a matter of American competitiveness — it’s a matter of national security.
- They highlight the lack of AI adoption in government > AI adoption in federal departments and agencies remains unacceptably low, with federal employees, and especially national security sector employees, largely unable to harness the benefits of the technology.
- Google calls for a unified national approach to AI regulation > The Administration should ensure that the U.S. avoids a fragmented regulatory environment that would slow the development of AI, including by supporting federal preemption of state-level laws that affect frontier AI models.
- They emphasize role-based responsibility in the AI ecosystem > The actor with the most control over a specific step in the AI lifecycle should bear responsibility (and any associated liability) for that step.
- They stress the need for energy infrastructure for AI > The U.S. government should adopt policies that ensure the availability of energy for data centers and other growing business applications that are powering the growth of the American economy.
- They advocate for aligned international standards > We encourage the Department of Commerce, and the National Institute of Standards and Technology (NIST) in particular, to continue its engagement on standards and critical frontier security work. Aligning policy with existing, globally recognized standards, such as ISO 42001, will help ensure consistency and predictability across industry.
- They support strong but balanced export controls > AI export rules imposed under the previous Administration (including the recent Interim Final Rule on AI Diffusion) may undermine economic competitiveness goals the current Administration has set by imposing disproportionate burdens on U.S. cloud service providers.
- Anthropic proposes an ambitious national energy target for AI > The federal government should consider establishing an ambitious national target: build 50 additional gigawatts of power dedicated to the AI industry by 2027.
- They warn that advancing model capabilities present biosecurity risks > Claude 3.7 Sonnet demonstrates concerning improvements in its capacity to support aspects of biological weapons development—insights we uncovered through our internal testing protocols and validated through voluntary security exercises conducted in partnership with the U.S. and U.K. AI Safety and Security Institutes.
- They advocate for systematic AI adoption throughout government > We propose an ambitious initiative: across the whole of government, the Administration should systematically identify every instance where federal employees process text, images, audio, or video data, and augment these workflows with appropriate AI systems.
- They recommend strengthening the AI Safety Institute for security testing > Preserve the AI Safety Institute in the Department of Commerce and build on the MOUs it has signed with U.S. AI companies—including Anthropic—to advance the state of the art in third-party testing of AI systems for national security risks.
- They support stronger export controls and anti-smuggling measures > The U.S. government should require countries to sign government-to-government agreements outlining measures to prevent smuggling as a prerequisite for hosting data centers with more than 50,000 chips from U.S. companies.
- IFP recommends establishing “Special Compute Zones” for rapid AI infrastructure deployment > We propose that the federal government establish ‘Special Compute Zones’ — regions of the country where AI clusters at least 5 GW in size can be rapidly built through coordinated federal and private action.
- They call for prize competitions to boost American open-source AI > Prize competitions have a long history of spurring major innovations… federal agencies should launch prize competitions to incentivize the development of open-source AI models for a wide range of new scientific applications.
- They advocate for interpretability research through a “grand challenge” approach > A large-scale initiative — comparable in ambition to the Human Genome Project — could be instrumental in systematically mapping how today’s AI models process information to exhibit particular capabilities.
- They recommend overhauling chip export controls, in particular the Low Processing Performance (LPP) license exception, to prevent smuggling > To strengthen LPP while minimizing burdens on the US semiconductor industry, BIS could define ‘Restricted LPP Destinations’ within LPP, consisting of countries suspected of being AI chip smuggling hotspots, and substantially lower the unconditional annual export cap to firms in these countries.
- They propose dramatic increases in BIS funding and enforcement capacity > Given the importance that advanced technology already has to national competitiveness and security, properly funding and modernizing BIS should be a top priority of this administration.
- CNAS urges aggressive immigration reforms to attract AI talent > To leverage America’s talent advantage once more, the U.S. government should add high-demand AI jobs with demonstrated shortages to the Schedule A list… Employers in Schedule A categories can hire foreign talent while bypassing cumbersome recruitment and labor certification requirements, filling critical roles more expeditiously.
- They call for countering China’s open-source model strategy > DeepSeek-R1 demonstrates China’s success in projecting cost-effective, open source AI leadership to the world despite embedding authoritarian values in its AI. The United States can counter this strategy by rapidly releasing modified versions of leading open source Chinese models that strip away hidden censorship mechanisms.
- They recommend reforming the US-China AI Working Group for risk reduction > The Trump administration’s new AI Action Plan should reframe this group as a technical expert body to tackle shared AI risks and reduce tensions without undermining America’s AI lead.
- They emphasize securing AI critical infrastructure > AI datacenters and companies will increasingly become attractive targets for adversarial nations seeking to steal advanced models or sabotage critical systems. The private sector alone is neither equipped nor incentivized to effectively counter sophisticated state actors.
- They propose quarterly export control reviews for better adaptability > The current approach of annual export control updates fails to keep pace with rapid technological change in AI and emerging new evidence. The Bureau of Industry and Security (BIS) should instead adopt a quarterly review process with the authority to make targeted adjustments to controls as new capabilities emerge.
- CDT warns against restricting open-source AI development > The AI Action Plan should set a course that ensures America remains a home for open model development… Restricting open model development now would not improve public safety or further national security — rather, it would sacrifice the considerable benefits associated with open models and cede leadership in the open model ecosystem to foreign adversaries.
- They emphasize the need for independent oversight of AI in national security > The AI Action Plan should recognize that independent external oversight is also critically important to promote safe, trustworthy, and efficient use of AI in the national security/intelligence arena.
- They caution against rushing government AI adoption > Rushing forward on AI adoption, CDT warns, could lead to wasted tax dollars on ineffective, snake-oil AI tools.
- They call for full transparency around government AI use > The AI Action Plan can develop public trust in the federal government’s use of AI by building on agencies’ existing use case inventories – a key channel for the public to learn how agencies are using and governing AI systems.
- They stress the importance of holistic AI risk assessment > The standards-development process should center not only the prospective security risks arising from capabilities related to chemical, biological, and radiological weapons and dual-use foundation models, but also the current, ongoing risks of AI such as privacy harms, ineffectiveness of the system, lack of fitness for purpose, and discrimination.