The AI Executive Order: An Overview
And what it means for education, the workforce, and the edtech startup ecosystem
Last Monday, President Joe Biden signed a sweeping executive order (EO) on the oversight of artificial intelligence, describing it as unprecedented among governmental measures globally in its significance for AI safety, security, and trustworthiness. The announcement came ahead of this week’s global AI Safety Summit in the UK. It seems AI’s watershed moment has arrived in Washington.
Spanning 100+ pages, the order lays the groundwork for how the government will regulate the AI field and encompasses numerous goals, including:
Combating algorithmic bias and discrimination
Developing guidance for content authentication and watermarking to reduce misinformation
Lowering barriers for AI expertise immigration into the country
Testing foundation models against three big threats: weapons of mass destruction, cybersecurity, and “evasion of human control”
Establishing a regulatory structure for infrastructure-as-a-service providers to oversee and report on foreign companies and customers
Recruiting more AI experts into government and creating a raft of new offices and task forces - from health care to education, trade to housing, and more
Shaping AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools
Investing in workforce training and development, strengthening federal support for workers facing labor disruptions, and maximizing the benefits of AI for all workers
These new rules draw on the Defense Production Act and the International Emergency Economic Powers Act, both of which were intended to give the President broad emergency powers during a war or other international crisis. Ultimately, the EO signals that the U.S. government is taking AI safety seriously, and it is an important step in the right direction. However, many of its provisions “lack teeth” - they carry no real enforcement power at this point and remain contingent on voluntary compliance. Although there is no clear “owner” of the document, it will catalyze many government agencies to propose regulations or publish reports in response over the next year. A TL;DR from Bilawal on key actions we’ll see across federal agencies is here.
So what does this all mean for Education, the Future of Work and the Startup Ecosystem?
Let’s unpack a few key provisions that may have an immediate or near-term impact on our ecosystem:
Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools
The Secretary of Education will have to develop resources, policies, and guidance regarding AI within 365 days of the order. This includes the development of an “AI toolkit” for education leaders.
For edtech startups building AI tools, federal grant funding may come to complement VC and other traditional sources of financing.
Beyond the prospects of grant funding, public schools and universities may demonstrate an increased willingness to invest in and integrate AI-enabled resources (which we are already seeing regardless!). This shift could lead to broader adoption and integration of such technologies in educational settings, as institutions seek to enhance learning experiences, streamline administrative tasks, and adopt innovative tools.
Some exciting companies already working on AI-enabled tools that amplify educators: MagicSchool.ai, Khanmigo (by Khan Academy), Kyron Learning, Curipod, SchoolAI, Quizizz, Class Companion and many others.
Strengthen or develop additional Federal support for workers displaced by AI and strengthen and expand education and training opportunities that provide individuals pathways to occupations related to AI.
According to a McKinsey report on Generative AI and the Future of Work, “by 2030, activities that account for up to 30% of hours currently worked across the US economy could be automated” and “an additional 12M occupational transitions may be needed by 2030”.
“The average half-life of skills is now less than five years, and in some tech fields it’s as low as two and a half years. For millions of workers, upskilling alone won’t be enough.” Reskilling will therefore become a core part of the employee value proposition and a strategic means of balancing workforce supply and demand.
Some exciting companies already working on this: Uplimit, Workera, Degreed, Eightfold.ai, Gloat, Multiverse, Springboard, and more.
Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests
(“Red-team testing” = having experts act like attackers, probing a system for weaknesses by simulating realistic attacks or misuse before real adversaries can exploit them)
The definitions of “risk” are vague here - who decides what poses a serious risk? Does this apply to consumer products? How, and with whom, will the results of red-team safety tests be shared?
Some point to the lack of focus on ways to train and develop models to minimize future harms, before an AI system is deployed. “There was a call for an overall focus on applying red-teaming, but not other more critical approaches to evaluation…‘Red-teaming’ is a post-hoc, hindsight approach to evaluation that works a bit like whack-a-mole” - Margaret Mitchell (Researcher and Chief Ethics Scientist, Hugging Face)
Further questions: How will these rules be applied to open-source models? Will every fine-tuned version of any open-source model have to pass these tests? How does this get enforced? The framework assumes that a model will be created by a single company that can oversee training, testing and reporting.
Establish standards and best practices for detecting AI-generated content
There are currently no requirements for companies to disclose whether content on their websites or chatbot conversations is AI-generated. The notion of mandatory labeling presents both opportunities and challenges. On one hand, if labeling AI-generated content becomes common practice, it could meaningfully reduce misinformation and increase transparency for users. On the other hand, implementation could become burdensome as content becomes increasingly hybrid (e.g., AI-generated, then edited by a human, then rephrased by an AI) and therefore harder to categorize. Some have questioned how we even define “AI-generated content” when almost all music, video, and image content undergoes some form of post-production digital rendering (will Pixar movies be watermarked?).
The Commerce Department will likely be in charge of developing standards for digital watermarks and other means of establishing content authenticity (a minimal sketch of the provenance-labeling idea follows below).
Earlier this month, the second-largest teachers’ union in the U.S., the American Federation of Teachers, partnered with the AI identification platform GPTZero. GPTZero makes tools that can identify ChatGPT and other AI-generated content, to help educators rein in, or at least keep tabs on, students’ reliance on the new tech.
Ethan Mollick believes that teachers should be wary of AI detection tools. Others argue that the point of detection is moot: even if it worked reliably today, it would only be a matter of time before AI-generated text becomes indistinguishable from human writing. In fact, OpenAI discontinued its own AI writing detector this past July due to its “low rate of accuracy.”
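To make the idea of “content authentication” concrete, here is a minimal, hypothetical Python sketch of provenance labeling, loosely in the spirit of signed content credentials: the generator attaches a manifest describing how a piece of content was produced, plus a signature, and anyone holding the key can verify that neither the content nor the label has been tampered with. The key handling, manifest fields, and function names are illustrative assumptions, not anything prescribed by the EO or by an existing standard.

```python
# Hypothetical sketch of content provenance labeling. Real schemes (e.g. C2PA-style
# content credentials) use public-key certificates and richer manifests; this only
# shows the core idea.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # placeholder; a real system would use proper key management

def sign_content(content: bytes, generator: str) -> dict:
    """Attach provenance metadata and a signature to a piece of content."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. "human", "ai", or "ai-then-human-edited"
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Check that neither the content nor its provenance label has been altered."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

essay = b"An essay drafted with an AI assistant."
label = sign_content(essay, generator="ai")
print(verify_content(essay, label))          # True: content matches its label
print(verify_content(essay + b"!", label))   # False: content changed after labeling
```

Note the limitation this illustrates: the label only certifies what the signer claimed at signing time, so the “hybrid content” problem above remains - once a human edits and re-saves the file, someone still has to decide what the new label should say.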
New safety requirements based on computing power
The regulations apply to models trained with more than 10^26 floating-point operations (FLOPs - a count of the arithmetic operations performed during training, commonly used as a proxy for training compute). This threshold exceeds the compute needed for current leading-edge models like GPT-4; a rough back-of-envelope estimate follows below.
However, given the exponential growth in computing power, these requirements will likely be relevant to the next generation of AI models from providers such as OpenAI, Google, and Anthropic. Organizations that develop these foundation models will have to conduct red-team safety evaluations, and provide routine updates to the federal government detailing their security measures against both physical and cyber threats.
Some believe this will amount to a regulatory capture exercise and a compliance regime for any company training models above the threshold. It could mean a massive amount of work for lawyers, compliance officers, data scientists, and researchers.
Andrew Ng, co-founder of Google Brain and Coursera, sees the focus on foundation models as problematic in fundamental ways: “burdening basic technology development with reporting and standards places a drag on innovation. It makes more sense to regulate applications that carry known risks, such as underwriting tools, healthcare devices, and autonomous vehicles.”
Others argue that the EO is largely just a set of report requests for government employees and for “foundation models and data centers over the limits, who can very much bear the burden of a new filing”.
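For a sense of scale on the 10^26 threshold, here is a rough back-of-envelope sketch using the common ~6 × parameters × tokens heuristic for estimating total training FLOPs. Both the heuristic and the model sizes below are illustrative assumptions, not figures from the EO or from any provider.

```python
# Back-of-envelope training-compute estimate using the rough 6 * N * D heuristic
# (N = parameter count, D = training tokens). Purely illustrative numbers.

EO_THRESHOLD_FLOPS = 1e26  # reporting threshold named in the executive order

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training FLOPs via the ~6 * parameters * tokens rule of thumb."""
    return 6 * n_parameters * n_tokens

examples = [
    ("hypothetical 70B-parameter model, 2T tokens", 70e9, 2e12),
    ("hypothetical 1T-parameter model, 20T tokens", 1e12, 20e12),
]
for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    side = "above" if flops > EO_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```

On this crude estimate, models in the rough range of today’s frontier systems fall under the threshold, while a further ~10x scale-up would cross it - consistent with the point above that the rules are aimed at the next generation of models.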
Establish a new framework for regulating foreigners who try to train powerful models using US cloud services
This requires infrastructure-as-a-service providers (e.g., cloud providers like Amazon, Google, and Microsoft) to monitor and report on “foreign persons” involved in, and potentially other information about, “any training run of an AI model meeting the [threshold] criteria”. This may mean restrictions on foreign access to compute or to advanced models.
The mandate for reporting doesn't appear to be confined to major cloud service providers, indicating that many U.S. businesses that serve foreign customers may find themselves facing new reporting duties.
Help agencies acquire specified AI products and services and accelerate the hiring of AI professionals
AI.gov, the federal government’s central AI website, is also focused on developing and attracting AI talent to the US, including by streamlining the visa process. You can join the National AI Talent Surge here.
The Biden-Harris Administration has also published resources to help U.S. students and workers prepare for and enter careers in AI and related fields, as well as to support educators and institutions navigating the field.
If you are building solutions in this space or have any thoughts on how this might further shape the education innovation ecosystem, we would love to discuss.
We’ll also be exploring these topics at the AIR (AI Revolution) Show (the world's first EdTech festival for AI Revolutionaries, free for attendees) and the ASU+GSV Summit in San Diego this April.
Read more:
Decoding the White House AI Executive Order’s Achievements (Stanford Institute for Human-Centered AI (HAI))
Regulating AI by Executive Order is the Real AI Risk (Steven Sinofsky, Hardcore Software)
What the executive order means for openness in AI (Arvind Narayanan and Sayash Kapoor, AI Snake Oil)
Biden seeks to rein in AI (Casey Newton, Platformer)