Biden’s Executive Order On AI Could Help $25 Billion Startup, Anthropic


Last month, the White House issued an executive order on AI — aiming to ensure the technology is “Safe, Secure, and Trustworthy.”

My guess is this executive order will create winners and losers among the public companies and startups competing for their slice of the multi-trillion dollar Generative AI pie.

The winners will be companies like Anthropic, the San Francisco-based provider of Claude 2 — a foundation model competing against OpenAI’s ChatGPT — under the banner of making AI that is “helpful, harmless and honest,” according to The New York Times.

Unlike companies hoping to shift Generative AI’s societal costs away from their investors, Anthropic has a mission consistent with the values in the executive order.

Therefore, my guess is Anthropic will welcome the help from the executive order while rivals hoping to avoid government intrusion into their operations will view the order as an unwelcome anchor on their growth.

Executive Order On AI

On October 30, President Biden signed an executive order on artificial intelligence in the East Room of the White House. The Biden Administration diverged from its hands-off policy toward many other technologies and made “the federal government a major player in the development and deployment of AI systems that can mimic or perhaps someday surpass the creative abilities of human beings,” according to the Boston Globe.

The EO does the following:

  • Initiates government safety monitoring. The EO asserts the government’s right to oversee the development of future AI systems to limit their risk to national security and public safety. Developers of such systems must notify the government when they begin building them and share the results of any safety tests they conduct on the AI systems, the Globe noted.
  • Sets new safety standards. The EO tasks government agencies with setting new standards for AI, “aimed at protecting privacy, fending off fraud, and ensuring that AI systems don’t reinforce human prejudices,” the Globe reported. In addition, “The Department of Commerce will set standards for ‘watermarking’ AI-generated images, text, and other content to prevent its use in fraudulent documents or ‘deep fake’ images,” the Globe reported.

This EO raises many questions:

  • What criteria will the government use to decide which future AI systems must comply with the standards?
  • Which government agencies will enforce the standards and monitor compliance with them?
  • Does the government have a sufficient number of trained people who can create the standards and assess whether AI companies are complying?
  • What penalties, if any, will the government impose on companies that do not comply with the EO?

Why The Executive Order Could Benefit Anthropic

Biden’s EO will help companies already taking action to protect society from the risks of Generative AI. If the EO is carried out with sufficient resources, it will help such companies realize their missions.

This comes to mind in considering Anthropic, the San Francisco-based provider of the foundation models used to build Generative AI chatbots. In 2021, Daniela Amodei and Dario Amodei, previously OpenAI executives, started Anthropic out of concern their employer cared more about commercialization than safety, according to Cerebral Valley.

Anthropic is a roaring success. By October 2023, the 192-employee company had raised a total of $7.2 billion, valuing it at $25 billion – five times its value in May.

With clients including Slack, Notion and Quora, PitchBook forecasts Anthropic’s 2023 revenue will double to $200 million, and The Information reported the company expects to reach $500 million in revenue by the end of 2024.

Caring About Customers And Communities

The key to Anthropic’s success is its co-founders’ concern for making Generative AI safe for customers and communities. Dario Amodei, a Princeton-educated physicist who led the OpenAI teams that built GPT-2 and GPT-3, became Anthropic’s CEO. His younger sister, Daniela Amodei, who oversaw OpenAI’s policy and safety teams, became its president. As Daniela said, “We were the safety and policy leadership of OpenAI, and we just saw this vision for how we could train large language models and large generative models with safety at the forefront,” the Times wrote.

Anthropic’s co-founders put their values into their product. The company’s Claude 2 – a rival to ChatGPT – could summarize larger documents and produce safer results. Claude 2 could summarize up to about 75,000 words – the length of a typical book. Users inputted large data sets and requested summaries in the form of a memo, letter or story. ChatGPT could handle a much smaller input of about 3,000 words, the Times reported.

Arthur AI, a machine learning monitoring platform, concluded Claude 2 had the most “self-awareness” – meaning it accurately assessed its knowledge limits and only answered questions for which it had training data to support an answer, CNBC wrote.

Anthropic’s concern about safety caused the company not to release the first version of Claude — which the company developed in 2022 — because employees were afraid people might misuse it. Anthropic delayed the release of Claude 2 because the company’s red-teamers uncovered new ways it could become dangerous, according to the Times.

Using A Self-Correcting Constitution To Build Safer Generative AI

When the Amodeis started the company, they thought Anthropic would do safety research using other companies’ AI models. They soon concluded innovative research was only possible if they built their own models. To do that, they needed to raise hundreds of millions of dollars to afford the expensive computing equipment required to build the models. They decided Claude should be helpful, harmless and honest, the Times wrote.

To that end, Anthropic deployed Constitutional AI – the interaction between two AI models: one operating according to a written list of principles from sources such as the UN’s Universal Declaration of Human Rights and a second AI to evaluate how well the first one followed its principles — correcting it when necessary, the Times noted.
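The critique-and-revise loop described above can be sketched as a toy in Python. This is purely illustrative: the functions `generate`, `critique`, and `revise` are hypothetical stand-ins for the two model calls, and the keyword check is a placeholder for a real model’s judgment against a principle.

```python
# Toy sketch of a Constitutional AI-style loop (not Anthropic's actual code).
# One "model" drafts a response; a second checks it against written
# principles and triggers a revision when a principle is violated.

CONSTITUTION = [
    "Do not provide instructions for causing harm.",
    "Be honest about the limits of your knowledge.",
]

def generate(prompt: str) -> str:
    # Stand-in for the primary model's first draft.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> bool:
    # Stand-in for the evaluator model; here a crude keyword check
    # substitutes for a real judgment against the principle.
    return "harm" in response.lower()

def revise(response: str, principle: str) -> str:
    # Stand-in for the revision step, guided by the violated principle.
    return f"[Revised per: {principle}] {response}"

def constitutional_loop(prompt: str) -> str:
    response = generate(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response, principle)
    return response
```

In the real system, the evaluator’s corrections are folded back into training rather than applied at answer time, but the structure – one model generating, another grading against explicit written principles – is the same.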

In July 2023, Amodei provided examples of Claude 2’s improvements over the prior version. Claude 2 scored 76.5% on the bar exam’s multiple-choice section, up from the earlier version’s 73%. The newest model scored 71% on a Python coding test, up from the prior version’s 56%. Amodei said Claude 2 was “twice as good at giving harmless responses,” CNBC wrote.

Because Anthropic’s product is built to bring helpful, harmless and honest values to Generative AI, society could be better off — perhaps with help from Biden’s EO.
