Thoughts on America’s AI Action Plan for 2025
Some of my thoughts from a careful reading of America’s AI Action Plan for 2025.
“To secure our future, we must harness the full power of American innovation.” Hmmm… maybe “harness” should be “unleash”, as in get out of the way of innovation and refrain from micro-managing and controlling it.
The web page introducing and summarizing the AI action plan:
The PDF for the AI action plan itself:
Three pillars
The AI action plan is basically divided into three main sections, one section for each of three pillars of the plan. The three pillars:
- Pillar I: Accelerate AI Innovation. The technology itself, the chips, the software, the models.
- Pillar II: Build American AI Infrastructure. Data centers and physical systems and communications.
- Pillar III: Lead in International AI Diplomacy and Security. Government stuff and international trade.
Loosely, the three pillars are referred to as innovation, infrastructure, and international diplomacy and security.
We need a fourth pillar: Applications
Most of the focus of the action plan is on the raw AI technology itself, rather than on how it is used, the applications.
I would suggest a fourth pillar, although I would prefer to place it between the second and third pillars:
- Pillar IV (or IIA): Applications for AI. Or applications that exploit AI.
We need standards and best practices for AI applications
We need standards and best practices for AI applications.
Whether that should be driven by NIST or an industry group can be debated.
I would suggest an industry group in which government agencies can participate as well, since the government is generally the largest customer for almost any technology.
Actually, a combination of NIST and industry would be best; we already have the precedent of FIPS standards, developed by NIST when there is no suitable industry standard.
No mention of FIPS standards
Curiously, the action plan makes no mention of FIPS standards, Federal Information Processing Standards, which are developed and maintained by the National Institute of Standards and Technology (NIST).
The action plan does make frequent mention of standards, but too vaguely and loosely for a topic of such great and significant importance.
Unclear target audience
Any document needs a clearly defined audience, especially a document about technical matters. It’s unclear who the target audience is for the AI Action Plan. Is it:
- AI experts.
- Non-AI technical staff.
- Policy wonks.
- Senior technical managers.
- Senior non-technical managers.
- Members of Congress.
- Congressional staff.
- Members of congressional committees with a role in AI policy.
- Staff of congressional committees with a role in AI policy.
Budgeting and resourcing
How much is all of this going to cost? And who’s going to pay for it?
How much of the cost is a one-time start-up cost? Spread over exactly how many years? Five? Fewer, or more?
How much of it is a gradual ramp-up and roll-out cost over a decade or more?
How much is a regular service and maintenance cost on an annual basis, much as the Internet, email, and Web services are today?
In aggregate, how much of the federal budget should be earmarked for AI?
We need an AI Czar to oversee it all
Do we need a single, dedicated AI Czar to oversee all AI efforts of the U.S. Government?
Or can all of this be done via the interagency process?
I do think that an AI Czar with both technical and non-technical staff is needed to drive AI in the USG and the US overall, and to make tough calls and resolve conflicts.
Start using the term now and give the role special visibility.
Need for a GSA AI Deployment Czar
Innovating AI in the USG will take a very special type of person and dedicated staff. But once the technical innovation is complete, GSA will need a team that enables and facilitates the deployment, rollout, training, and support of AI capabilities, plus a senior leader (a Czar) to make tough calls and resolve conflicts. So a dedicated AI Deployment Czar is needed.
Need for a GSA AI Inspector General
Once AI technical innovation is complete and ready for deployment, GSA needs an IG to investigate problems with the use of AI in the USG.
U.S. Government as the largest customer for AI
As with most technologies, the U.S. Government (USG) is generally the largest customer, and that can be expected to hold true for AI technology and AI systems as well.
This gives USG a lot of clout and opportunity to shape how new technologies are designed, implemented, deployed, and governed.
The action plan should make a big deal about this and make clear how that clout and opportunity can be exploited for maximal effect.
Off the shelf vs. custom, with divergent needs
A lot of technology, whether hardware, software, or applications, can be acquired off the shelf and immediately put into use with no changes required, such as IT applications and personal productivity software, including office applications. But it is also not uncommon for a significant degree of customization, or even fully custom, bespoke technology or applications, to be required.
Even when bespoke technology or applications for AI are legitimately required, they tend to come at a steep cost, costing a lot more both to develop and to maintain.
On the flip side, sometimes more generalized or universal technology can cost a lot more to design, build, and maintain than specialized technology for specific, narrow customers, since the latter doesn’t have to meet the needs of a much broader and more diverse audience or customer base.
The action plan needs to call out this distinction and note that additional, specialized research and support services are needed to support these two divergent approaches.
Risks
The action plan needs a deeper assessment of risks, not the technical risks of AI technology per se, but the risks to the USG, the country, and society at large. Such as:
- Moving too soon.
- Moving too late.
- Moving too fast.
- Moving too slowly.
- Not doing enough research before deployment.
- Doing too much research before deployment.
- National security risks for development of the technology.
- National security risks for deployed AI systems.
- Inadequate talent.
- Misaligned talent.
- Excessive talent.
- Inadequate automation.
- Excessive automation.
- Whether artificial general intelligence (AGI) presents additional technical and social risks beyond current AI technology.
- Whether superintelligence presents additional technical and social risks beyond both current AI technology and AGI.
Unclear uncertainty ahead and where the real difficulties and challenges lie
Although AI certainly has matured a lot in recent years, it’s not so clear how much uncertainty lies ahead. Factors such as:
- How much of current AI technology is simply prototype vs. production quality.
- How much big-leap innovation will be needed as opposed to modest incremental evolution once we get to the point where we feel that an initial full-scale deployment is warranted.
- What types of hardware advances might be expected in the coming years.
- What types of non-AI software advances might be expected in the coming years.
- How might energy requirements evolve in the coming years and decades.
- How might the financial costs evolve in the coming years and decades.
- How might our competitive position relative to peers, partners, allies, competitors, and adversaries evolve. How often might we expect to be ahead, and how often might we expect to be behind.
Is Artificial General Intelligence (AGI) covered by this plan or not?
As powerful as current AI technology is, there is a lot of debate about so-called artificial general intelligence (AGI), true human-level intelligence. Some AGI-like capabilities are available today, but nothing on the scale of full AGI.
At a minimum, a lot more research is required to achieve production-quality full-scale AGI.
Is it the intent that this action plan covers that research, all of it, or just some of it? Or is much of AGI beyond the scope of this current AI action plan?
The plan should at least say something about AGI.
Even at this early stage, there should probably be a fairly robust research effort in AGI authorized by this action plan. Academic, public, and private.
Is superintelligence covered by this plan or not?
Even beyond artificial general intelligence (AGI) lies superintelligence, with machine intelligence capabilities well beyond the intelligence of even the brightest humans.
Most of superintelligence is merely speculative at this stage, not even ready for any full-scale research effort, let alone any deployment.
The plan should at least say something about superintelligence.
There should probably be at least some rudimentary, blue-sky research program for superintelligence authorized under this plan.
Is quantum computing covered by this plan or not?
A lot of people are pursuing research focused on harnessing the raw power of quantum computers to implement AI capabilities, although much of this effort is still fairly primitive, with useful results far beyond the horizon at this stage.
The plan should say at least something about such efforts, even if there is no risk of near-term deployment.
There should be explicit research funding programs for quantum computing focused on AI itself, rather than simply quantum computing in general.
An overall AI Czar and their technical team should be charged with ensuring that this area gets an appropriate level of attention and funding so that the U.S. takes the leadership role in it.
Regulation
A significant aspect of the plan is a focus on regulation.
A lot of the intention is to eliminate regulations which impede, slow, or otherwise interfere with the rapid development, deployment, and utilization of AI. In vernacular parlance, eliminating red tape.
There is a special focus on eliminating ideological bias in regulation. There is a determined effort to make sure that AI in America is not “woke.”
A number of provisions in the plan actually increase regulation, such as to monitor and assure that we don’t use foreign AI technology and that foreign countries of concern do not get access to American AI technology. Severe export controls as well, although very open to our allies.
Compatibility with EU AI regulation
Domestic regulation of AI in the U.S. itself will be difficult enough, but achieving some degree of compatibility with the divergent vision and values of EU AI regulation will be a sticky wicket indeed. The action plan doesn’t do justice to the difficulty of this proposition.
Interagency
Quite a few provisions of the plan require significant cooperation between multiple agencies. In Washington-speak this is known as the interagency. In vernacular parlance it means lots of meetings and negotiations between agencies, and… more red tape.
Agencies
The plan speaks a lot about a number of agencies and organizations within the U.S. government, but the focus is on agencies that will be involved in some way in the development and promotion, and regulation and control, of AI, not the mere use of AI.
Use of AI by USG agencies
Eventually, the presumption is that all agencies of the USG will deploy and use AI capabilities. It will become as ubiquitous as email, web browsing, and web-based applications and services.
The action plan doesn’t talk about this enough. What an AI-powered USG would look and feel like.
Key metrics are needed.
Use of AI by the military and intelligence community
Although all government agencies will eventually deploy and use AI capabilities, the Department of Defense and the many agencies of the Intelligence Community warrant special attention, as they will be some of the earliest and most advanced users of AI capabilities.
The action plan doesn’t talk about this enough. What an AI-powered military and intelligence community would look and feel like.
Key metrics are needed, which may be significantly more intensive and specialized than used in the rest of government.
[When do they expect that Skynet will be turned on?!]
No role for the National Institutes of Health (NIH) or the Department of Health and Human Services (HHS)?
Really? The action plan makes no mention of any role for the National Institutes of Health (NIH), the Department of Health and Human Services (HHS), or the Centers for Medicare & Medicaid Services (CMS). Granted, these agencies are users of AI technology rather than developers or innovators of raw AI technology, but since they will be critical users of AI technology and will develop applications that others will use, it seems as if they should have some role in the action plan!
To be clear, software applications that are driven by AI technology should be treated as a critical national resource, worthy of special attention, protection, and exploitation.
U.S. government agencies and groups with a focus on AI
For reference. Otherwise known as the alphabet soup of the U.S. government.
- BEA. Bureau of Economic Analysis. Part of the Department of Commerce.
- BIS. Bureau of Industry and Security. Part of the Department of Commerce.
- BLS. Bureau of Labor Statistics. Part of the Department of Labor.
- CAIOC. Chief Artificial Intelligence Officer Council. Under OMB.
- CAISI. Center for AI Standards and Innovation. Within NIST and Commerce.
- Census Bureau. Bureau of the Census. Part of the Department of Commerce.
- CERCLA. Comprehensive Environmental Response, Compensation, and Liability Act.
- DFC. U.S. International Development Finance Corporation.
- DHS. Department of Homeland Security.
- DOC. Department of Commerce. Includes NIST.
- DOD. Department of Defense. The Pentagon.
- DOE. Department of Energy. Includes the National Labs (Laboratories).
- DOI. Department of the Interior.
- DOL. Department of Labor.
- DOS. Department of State.
- ED. Department of Education. (DOE is the Department of Energy, not Education.)
- EXIM. Export-Import Bank.
- FCC. Federal Communications Commission.
- FDA. Food and Drug Administration.
- FTC. Federal Trade Commission.
- GSA. General Services Administration.
- HHS. Department of Health and Human Services.
- IC. Intelligence Community. All 18 agencies and organizations within agencies.
- IRS. Internal Revenue Service. Part of the Department of the Treasury.
- NEDC. National Energy Dominance Council.
- NEPA. National Environmental Policy Act.
- NIH. National Institutes of Health. Part of the Department of Health and Human Services.
- NIST. National Institute of Standards and Technology. Part of DOC.
- NSC. National Security Council. Part of the White House Executive Office of the President.
- NSF. National Science Foundation.
- NSTC. White House National Science and Technology Council.
- NTIA. National Telecommunications and Information Administration. Bureau of DOC.
- ODNI. Office of the Director of National Intelligence. Oversees the entire IC.
- OMB. White House Office of Management and Budget.
- ONCD. White House Office of the National Cyber Director.
- OSTP. White House Office of Science and Technology Policy.
- SBIR. Small Business Innovation Research program.
- SEC. Securities and Exchange Commission.
- USCB. Census Bureau or Bureau of the Census. Part of the Department of Commerce.
- USDA. Department of Agriculture.
- USDT, UST, Treasury. Department of the Treasury.
- USG. U.S. Government.
- USTDA. U.S. Trade and Development Agency.
Glossary needed
What’s needed is not so much technically accurate definitions for tech experts, but the implications, benefits, challenges, relevance, and role of each concept. Candidate terms:
- Advanced AI compute.
- Advanced HVAC technicians.
- Advanced Technology Transfer and Capability Sharing Program.
- Adversarial example attacks.
- Adversaries. And who might they be? Criteria?
- AI Assurance.
- AI compute. Advanced AI compute.
- AI evaluation initiatives at NIST.
- AI evaluations. Just performance and reliability, not function and capabilities? Only for regulated industries?
- AI global alliance.
- AI Incident Response.
- AI Information Sharing and Analysis Center (AI-ISAC).
- AI infrastructure occupations.
- AI Interpretability. Is this the same as explainability? The plan treats it as distinct, with no mention of the explainability problem. It focuses solely on national security, not on other domains “where lives are at stake”, such as healthcare and transportation systems, or on domains such as financial transactions and data processing where transactional and data integrity are critical.
- AI models.
- AI protection systems.
- AI protection systems and export controls.
- AI-related discretionary funding programs.
- AI Risk Management Framework. NIST AI Risk Management Framework.
- AI roles, critical AI roles.
- AI-security threat information and intelligence.
- AI-specific vulnerabilities and threats.
- AI skill development.
- AI skills.
- AI system.
- AI tech stack.
- AI vulnerabilities.
- AI vulnerability information.
- AI Workforce Research Hub.
- AI workloads.
- Allies. And who might they be? Criteria?
- American AI.
- American values. We should call out all of them that have some relevance here.
- Authoritarian influence.
- Backfill. Of U.S. export controls.
- Bureau of Industry and Security Export Control.
- Career and technical education (CTE).
- Chief Artificial Intelligence Officer Council (CAIOC).
- Chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons.
- CHIPS Act.
- Classified compute environments. Is this a type of data center, or does “environment” have some other meaning?
- Clean Air Act.
- Clean Water Act Section 404 permit for data centers.
- Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA).
- Concerning entities.
- Cooperative Ecosystem Studies Units (CESU).
- Countries of concern.
- Customer verification procedures.
- Critical AI roles.
- Data poisoning.
- ED Office of Career, Technical, and Adult Education.
- Export controls.
- FAST-41 process. Title 41 of the Fixing America’s Surface Transportation Act of 2015.
- Federal data.
- Federally Funded Research and Development Centers (FFRDC).
- Fixing America’s Surface Transportation Act of 2015.
- Fly-away kits.
- Focused-Research Organizations (FRO).
- Foreign adversaries. And who might they be?
- Foreign adversary technology.
- Foreign AI systems.
- The frontier. Where exactly is the frontier and what criteria distinguish it? Or is this not the leading edge technologies that we have, but referring to what’s to come that we are (hopefully) moving towards? How old can a technology be and still be considered to be on the frontier?
- Frontier AI. As opposed to legacy AI (not mentioned) and is there a middle ground between the two? Where does the focus of the plan begin? Or is all AI now considered frontier AI?
- Frontier AI developers.
- Frontier AI systems. All current, leading edge AI technology, or only the most leading edge?
- Frontier language models.
- Frontier models. What criteria distinguish them from other models.
- Full-stack AI export packages.
- Generative AI systems.
- Global AI competition.
- HVAC.
- Information and communications technology and services (ICTS).
- Key allies. And who might they be? What criteria?
- Large language model (LLM).
- Life on Federal lands.
- LLM, LLMs. Stay away from raw jargon; use large language models, or just models, or large-scale models.
- Location verification features.
- Malign foreign influence.
- Measuring and evaluating AI models. Is measurement not inherent in evaluation?
- Metrics. Like… what?
- Models. Yeah, everyone throws the term around, but what are they, really?
- Modern AI data center.
- National AI Research and Development (R&D) Strategic Plan.
- National laboratories. DOE versus non-DOE labs (DOD or other agencies). Usually the term refers to the DOE labs, so be more explicit.
- NSF’s National Secure Data Service (NSDS).
- National security agencies. And who might they be? The IC, or broader?
- National security risks.
- National security risks in frontier models.
- National security-related AI evaluations.
- NIST AI Risk Management Framework.
- Novel national security risks.
- Nucleic acid sequence screening.
- Nucleic acid synthesis providers.
- Nucleic acid synthesis tools.
- Open-source AI models.
- Open-source and open-weight AI models.
- Open-weight AI models.
- Performance. Generally, too vague a term. May or may not include function and capabilities, and resilience, in addition to speed, capacity, volume, and throughput rate.
- Registered Apprenticeships.
- Responsible AI.
- Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits.
- Restricted Federal data.
- Robust nucleic acid sequence screening.
- Scalable and secure AI workloads.
- Secure-by-design.
- Secure-By-Design AI Technologies and Applications.
- Strategic adversaries.
- Technological frontier.
- Technology diplomacy strategic plan.
- Translational manufacturing technologies.
- U.S.-origin AI compute.
- United States Investment Accelerator.
To be continued
This document may be updated as thinking about it continues and evolves, and based on feedback.
For more of my writing: List of My Artificial Intelligence (AI) Papers.
