In a significant move, President Biden unveiled a comprehensive executive order on Monday aimed at enhancing the safety of AI systems within the federal government. AI companies have long pledged to develop large language models (LLMs) responsibly and safeguard their products against cyber threats. Biden’s extensive 100-page executive order formalizes the government’s expectations for AI developers and addresses the most immediate security concerns.

This executive order represents a commendable effort to align the federal government with the rapid advancements in AI research. It takes a pragmatic approach to safety, focusing on mitigating security threats: malicious actors could exploit AI to generate disinformation that jeopardizes national security, to accelerate the production of biological weapons, or, more prosaically, to facilitate discrimination against certain groups.

The executive order mandates that tech companies developing large AI models regularly report safety testing details to the government. It also tasks the National Institute of Standards and Technology with crafting the government’s set of safety evaluations. Furthermore, various government agencies, such as the Departments of Homeland Security, Energy, and Commerce, are instructed to address AI risks within their jurisdictions. The Department of Homeland Security, for instance, is directed to assess the potential threats posed by adversarial AI systems. Additionally, the executive order encourages government agencies to recruit new AI talent to assist in these efforts.

The impact of this order on major AI laboratories remains uncertain. Many leading tech companies have already committed to responsible AI development and allocate research and development funds for safety measures. As government involvement increases, the challenges related to oversight and enforcement may become more apparent. It’s questionable whether the government can effectively penetrate the secrecy surrounding the largest AI companies, reveal poor safety practices, and ensure transparency. Furthermore, the executive order’s legal authority may be limited unless Congress passes comprehensive tech regulations, which currently appears unlikely.


A closer look at the AI landscape reveals that a handful of research labs dominate the field. Some are integrated within companies like Meta, while others are independent startups. These specialized labs produce breakthroughs that even wealthy tech giants find difficult to replicate. Microsoft, for example, invested a staggering $10 billion in OpenAI, acknowledging the difficulty of competing with its language models despite its own extensive natural language research efforts.

Microsoft is not alone in recognizing the value of smaller research labs. Google, a company deeply committed to AI, is following suit by investing $2 billion in Anthropic, as reported by The Wall Street Journal. Amazon recently injected $4 billion into an AI startup. Additionally, The Information reported that AI startup Mistral, often referred to as “Europe’s OpenAI,” is actively seeking to raise $300 million, just months after a $113 million seed round. Tech giants are also making smaller investments, such as Microsoft in Inflection AI, Oracle in Cohere, and Google and Samsung in Israeli AI lab AI21. This trend of major deals underscores the intensifying AI arms race in Silicon Valley.


A central question is whether a generative AI system fundamentally transforms business or personal applications in ways that are quantifiable in terms of time, cost, and quality of life. While such transformative tools do exist, they currently fall short on performance and security, which keeps them from having a truly groundbreaking impact.

Mike Volpi, a partner at Index Ventures specializing in AI investments, argues that generative AI holds more promise than crypto and blockchain and will ultimately yield significant benefits. Although LLMs and ChatGPT have demonstrated substantial potential, the tech industry frequently requires years for such promises to materialize. Volpi draws a parallel to the internet's emergence in the mid-'90s, which took years to reshape business and daily life, weathering the dot-com bubble at the decade's end along the way.

The primary challenge, Volpi notes, is the "knowledge gap" between generative AI models, which major enterprises typically access via APIs, and the end-user application. In practice, an LLM is just one component of a working AI system: before the system can deliver value through an app, it must securely process private, proprietary datasets. Many Fortune 500 companies are still navigating this complexity. According to an anonymous source, generative AI's current "killer app" within large enterprises is enterprise search, a critical function for retrieving and recalling corporate knowledge when needed.
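The pattern described above, where the LLM is one component behind a layer that first retrieves from private corporate data, can be illustrated with a minimal sketch. Everything here is hypothetical: the toy keyword-overlap retriever stands in for a real search index, and the assembled prompt is where a hosted model API would be called.

```python
# Illustrative sketch of "LLM as one component": retrieve relevant
# internal documents first, then hand them to a language model.
# All names, data, and the scoring method are hypothetical.

def tokenize(text: str) -> set[str]:
    """Naive tokenizer: lowercase, strip periods, split on whitespace."""
    return set(text.lower().replace(".", "").split())

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query (a stand-in
    for a real enterprise search index)."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(docs[d])), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: dict[str, str], hits: list[str]) -> str:
    """Assemble the context an LLM would receive. The model call itself
    (omitted here) is the point where proprietary data must stay secured."""
    context = "\n".join(f"- {docs[d]}" for d in hits)
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

# Toy stand-in for a private corporate knowledge base.
corpus = {
    "hr-policy": "Employees accrue vacation days monthly per the HR policy.",
    "vpn-guide": "Connect to the corporate VPN before accessing internal tools.",
    "expense":   "Submit expense reports within 30 days of travel.",
}

query = "how do I access internal tools remotely"
hits = retrieve(query, corpus)
prompt = build_prompt(query, corpus, hits)
```

The design point is the one the article makes: the retrieval layer over proprietary data, not the model itself, is where most of the enterprise integration work lives.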