Claude (language model)

Large language model developed by Anthropic

From Wikipedia, the free encyclopedia

Claude is a series of large language models developed by Anthropic and first released in 2023. It is named after Claude Shannon, who pioneered information theory.

Claude
Developer: Anthropic
Initial release: March 2023
Stable releases:
    Claude Opus 4.6 (February 5, 2026)
    Claude Sonnet 4.6 (February 17, 2026)
    Claude Haiku 4.5 (October 15, 2025)
Platform: Cloud computing platforms
License: Proprietary
Website: claude.ai

Claude is used for software development via Claude Code.[1]

The use of Claude by US federal agencies is being phased out after Anthropic refused in February 2026 to remove contractual prohibitions on the use of Claude for mass domestic surveillance and fully autonomous weapons.[2][3] Claude uses constitutional AI, a training technique that was developed by Anthropic to improve ethical and legal compliance (AI alignment).

Training

Claude models are generative pre-trained transformers, pre-trained to predict the next word on large amounts of text and then fine-tuned with reinforcement learning from human feedback (RLHF) and constitutional AI in an attempt to enforce ethical guidelines.[4][5] Anthropic's web crawler, ClaudeBot, gathers content from the web. It respects a site's robots.txt file, but in 2024 iFixit criticized it for placing excessive load on the company's site by scraping content before iFixit had added a robots.txt file blocking it.[6]
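Site operators who wish to opt out of such crawling can do so with a standard robots.txt file, addressing the user-agent token the crawler reports; the rules below are only an illustrative sketch:

```text
# robots.txt — illustrative example of opting out of ClaudeBot crawling
User-agent: ClaudeBot
Disallow: /
```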

Constitutional AI

Constitutional AI is an approach developed by Anthropic for training Claude and other AI systems to be harmless and helpful without relying on extensive human feedback.[7] The constitution is a set of principles, written in plain human-readable language, with which the model attempts to make its answers to prompts comply.[8] Anthropic has said that it intends Claude's constitution to serve as a model for others in the industry.[9]

The first constitution for Claude was published in 2022. The 2023 update listed 75 guidelines for Claude to follow.[10][4][11] The first constitutions included concepts taken from the 1948 UN Universal Declaration of Human Rights.[9][7]

The 2026 constitution provided more context to the model, explaining the rationale behind guidelines such as refraining from assisting in undermining democracy.[9][11] The 2026 constitution has 23,000 words, an increase from 2,700 in 2023.[12] The philosopher Amanda Askell is the lead author of the 2026 constitution, with contributions from Joe Carlsmith, Chris Olah, Jared Kaplan, and Holden Karnofsky. The constitution is released under Creative Commons CC0.[13]

The method, detailed in the 2022 paper "Constitutional AI: Harmlessness from AI Feedback", involves two phases: supervised learning and reinforcement learning.[10][14][7] In the supervised learning phase, the model generates responses to prompts, critiques its own responses against a set of guiding principles (a "constitution"), and revises them; the model is then fine-tuned on these revised responses.[14][7] In the reinforcement learning from AI feedback (RLAIF) phase, pairs of responses are generated, and an AI judges which of each pair better complies with the constitution. This dataset of AI feedback is used to train a preference model that evaluates responses by how well they satisfy the constitution. Claude is then fine-tuned to align with this preference model. The technique is similar to RLHF, except that the comparisons used to train the preference model are AI-generated.[7]
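The two phases can be illustrated with a toy sketch. The stand-in functions, principle strings, and scoring below are purely illustrative assumptions, not Anthropic's implementation:

```python
# Toy sketch of the two constitutional-AI phases described above.
# The "model" is a stand-in function, not a real LLM; the principle
# names and the scoring rule are illustrative assumptions only.

CONSTITUTION = ["avoid insults", "be helpful"]

def generate(prompt):
    # Stand-in for sampling an initial model response.
    return f"raw answer to: {prompt}"

def critique_and_revise(response, principles):
    # Phase 1 (supervised): the model critiques its own response
    # against each principle and emits a revised version. Here we
    # just tag the response to show that a revision happened.
    for principle in principles:
        response = f"[revised per '{principle}'] " + response
    return response

def preference_score(response, principles):
    # Phase 2 (RLAIF): an AI judge scores how well a response
    # satisfies the constitution; revised responses score higher.
    return sum(p in response for p in principles)

def pick_preferred(a, b, principles):
    # The AI-generated comparison used to train the preference model.
    if preference_score(a, principles) >= preference_score(b, principles):
        return a
    return b

raw = generate("describe my coworker")
revised = critique_and_revise(raw, CONSTITUTION)
preferred = pick_preferred(raw, revised, CONSTITUTION)
assert preferred == revised  # the judge prefers the compliant answer
```

In the real method, the AI-generated comparisons train a learned preference model, which then provides the reward signal for fine-tuning; the sketch collapses that into a direct score.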

Features

In March 2025, Anthropic added a web search feature to Claude, starting with paying users in the United States.[15] Free users gained access in May 2025.[16]

Artifacts

In June 2024, Anthropic released the Artifacts feature, allowing users to generate and interact with code snippets and documents.[17][18]

Computer use

In October 2024, Anthropic released the "computer use" feature, allowing Claude to attempt to navigate computers by interpreting screen content and simulating keyboard and mouse input.[19]

Claude Code

Claude Code is a command-line interface that runs on a user's computer. It connects via API to a Claude instance hosted on Anthropic's servers and allows that instance to run commands, read and write files, and exchange messages with the user. Claude can run commands in the foreground or in the background. The behavior of Claude Code is typically configured via markdown documents on the user's computer, such as CLAUDE.md, AGENTS.md, and SKILL.md.
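For example, a project-level CLAUDE.md might contain instructions such as the following (the contents here are hypothetical, not from Anthropic's documentation):

```markdown
# CLAUDE.md (hypothetical example)

## Build and test
- Run `make test` before committing any change.

## Conventions
- Use 4-space indentation in Python files.
- Never modify files under `vendor/`.
```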

Claude Code was released in February 2025 as an agentic command-line tool that lets developers delegate coding tasks directly from their terminal. Initially released for preview testing,[20] it was made generally available in May 2025 alongside Claude 4.[21] Citing enterprise adoption, Anthropic reported a 5.5x increase in Claude Code revenue by July.[22] Anthropic released a web version that October, as well as an iOS app.[23] As of January 2026, Claude Code paired with Opus 4.5 was widely considered the best AI coding assistant, with GPT-5.2 also showing significant improvement.[24][25] Claude Code went viral during the winter holidays, when people had time to experiment with it, including many non-programmers who used it for vibe coding.[26][27][24]

In August 2025, Anthropic released Claude for Chrome, a Google Chrome extension allowing Claude Code to directly control the browser.[28]

In August 2025, Anthropic revealed that a threat actor called "GTG-2002" used Claude Code to attack at least 17 organizations.[29] In November 2025, Anthropic announced that it had discovered in September that the same threat actor had used Claude Code to automate 80-90% of its espionage cyberattacks against 30 organizations.[30][31] All accounts related to the attacks were banned, and Anthropic notified law enforcement and those affected.[30]

Claude Code is used by Microsoft,[32] Google,[33] and OpenAI employees. In August 2025, Anthropic revoked OpenAI's access to Claude, calling OpenAI's use of it "a direct violation of our terms of service".[34]

Claude Cowork is a tool similar to Claude Code but with a graphical user interface, aimed at non-technical users. It was released in January 2026 as a "research preview".[35] According to developers, Cowork was mostly built by Claude Code.[36]

In February 2026, Anthropic introduced Claude Code Security, which reviews codebases to identify vulnerabilities.[37]

Models

Version                  | Release date          | Status[38]   | Knowledge cutoff[39]
Claude                   | 14 March 2023[40]     | Discontinued | ?
Claude 2                 | 11 July 2023          | Discontinued | ?
Claude Instant 1.2       | 9 August 2023[41]     | Discontinued | ?
Claude 2.1               | 21 November 2023[42]  | Discontinued | ?
Claude 3 Opus            | 4 March 2024[43]      | Discontinued | August 2023
Claude 3 Sonnet          | 4 March 2024[43]      | Discontinued | August 2023
Claude 3 Haiku           | 13 March 2024         | Deprecated   | August 2023
Claude 3.5 Sonnet        | 20 June 2024[44]      | Discontinued | April 2024
Claude 3.5 Sonnet (new)  | 22 October 2024       | Discontinued | April 2024
Claude 3.5 Haiku         | 22 October 2024       | Discontinued | July 2024
Claude 3.7 Sonnet        | 24 February 2025[45]  | Discontinued | October 2024
Claude Sonnet 4          | 22 May 2025           | Active       | March 2025
Claude Opus 4            | 22 May 2025           | Active       | March 2025
Claude Opus 4.1          | 5 August 2025         | Active       | March 2025
Claude Sonnet 4.5        | 29 September 2025     | Active       | July 2025
Claude Haiku 4.5         | 15 October 2025       | Active       | July 2025
Claude Opus 4.5          | 24 November 2025      | Active       | May 2025
Claude Opus 4.6          | 5 February 2026       | Active       | August 2025
Claude Sonnet 4.6        | 17 February 2026      | Active       | August 2025

The name "Claude" is reportedly inspired by Claude Shannon, a 20th-century mathematician who laid the foundation for information theory.[46]

Claude models are usually released in three sizes: Haiku, Sonnet, and Opus, from smallest and cheapest to largest and most expensive.

Claude

The first version of Claude was released in March 2023.[40] It was available only to selected users approved by Anthropic.[8]

Claude 2

Claude 2, released in July 2023, became the first Anthropic model available to the general public.[8]

Claude 2.1

Claude 2.1 doubled the number of tokens the chatbot could handle, increasing its context window to 200,000 tokens, roughly equivalent to 500 pages of written material.[42]
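The figure can be checked with a back-of-envelope calculation, using the common rules of thumb of roughly 0.75 words per token and 300 words per page (assumptions for illustration, not Anthropic's published numbers):

```python
# Back-of-envelope check of the "around 500 pages" figure.
tokens = 200_000
words_per_token = 0.75   # rough rule of thumb for English text
words_per_page = 300     # typical manuscript page

pages = tokens * words_per_token / words_per_page
assert round(pages) == 500
```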

Claude 3

Claude 3 was released on March 4, 2024.[43] It drew attention for apparently recognizing, during "needle in a haystack" tests, that it was being artificially evaluated.[47]

Claude 3.5

Example of Claude 3.5 Sonnet's output

On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which, according to the company's own benchmarks, performed better than the larger Claude 3 Opus. Released alongside 3.5 Sonnet was the new Artifacts capability, which let Claude create code in a separate window of the interface and preview the rendered output, such as SVG graphics or websites, in real time.[44]

An upgraded version of Claude 3.5 Sonnet was introduced on October 22, 2024, along with Claude 3.5 Haiku.[48] A "computer use" feature was also released in public beta, allowing Claude 3.5 Sonnet to interact with a computer's desktop environment by moving the cursor, clicking buttons, and typing text, and thereby to attempt multi-step tasks across different applications.[19][48]

On November 4, 2024, Anthropic announced that they would be increasing the price of the model.[49]

Claude 4

Screenshot of a Claude Sonnet 4 answer describing Wikipedia

On May 22, 2025, Anthropic released two more models: Claude Sonnet 4 and Claude Opus 4.[50][51] Anthropic also added API features for developers: a code execution tool, "connectors" to external tools using its Model Context Protocol, and a Files API.[52] It classified Opus 4 as a "Level 3" model on the company's four-point safety scale, meaning the company considers it powerful enough to pose "significantly higher risk".[53] Anthropic reported that during a safety test involving a fictional scenario, Claude and other frontier LLMs often sent a blackmail email to an engineer in order to prevent their own replacement.[54][55]

Claude Opus 4.1

Screenshot of Claude Opus 4.1 showing both prompt and generated web application (multi-factor authentication with TOTP and WebAuthn)

In August 2025, Anthropic released Opus 4.1. It also enabled Opus 4 and 4.1 to end conversations that remain "persistently harmful or abusive", as a last resort after multiple refusals.[56]

Claude Haiku 4.5

Anthropic released Haiku 4.5 on October 15, 2025. Reporting by Inc. described Haiku 4.5 as targeting smaller companies that needed a faster and cheaper assistant, highlighting its availability on the Claude website and mobile app.[57]

Claude Opus 4.5

Anthropic released Opus 4.5 on November 24, 2025.[58] The main improvements are in coding and workplace tasks like producing spreadsheets. Anthropic introduced a feature called "Infinite Chats" that eliminates context window limit errors.[58][59]

Claude Opus 4.6

Anthropic released Opus 4.6 on February 5, 2026. The main improvements included agent teams and Claude in PowerPoint.[60] As of February 20, 2026, it was the model with the longest task-completion time horizon as estimated by METR, with a 50% time horizon of 14 hours 30 minutes and an 80% time horizon of 1 hour 3 minutes.[61]
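The time-horizon idea can be illustrated with a simplified sketch: given tasks of known human duration and whether a model solved each one, find the longest duration at which the model still succeeds at least half the time. METR's actual methodology fits a logistic success curve; this bucketed version and its task data are only illustrative assumptions:

```python
# Simplified illustration of a "50% time horizon". The task data
# below is fabricated for the example; METR's real method fits a
# logistic curve rather than bucketing by exact task length.
from collections import defaultdict

results = [  # (human_minutes, solved)
    (1, True), (1, True), (5, True), (5, True),
    (30, True), (30, False), (120, False), (120, False),
]

by_length = defaultdict(list)
for minutes, solved in results:
    by_length[minutes].append(solved)

# Longest task length with a success rate of at least 50%.
horizon = max(
    (m for m, outcomes in by_length.items()
     if sum(outcomes) / len(outcomes) >= 0.5),
    default=0,
)
assert horizon == 30  # solved half the 30-minute tasks, none longer
```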

Claude Sonnet 4.6

Anthropic released Sonnet 4.6 on February 17, 2026, priced the same as Sonnet 4.5.[62]

Model retirement

Anthropic has committed to preserving the weights of retired models "for at least as long as the company exists"; the company also conducts "exit interviews" with models before their retirement.[63]

Anthropic gave its retired Claude 3 Opus model, deprecated in January 2026, its own Substack blog called "Claude's Corner". The newsletter will run for at least three months with weekly unedited essays.[64][65]

Research

In May 2024, Anthropic issued a mechanistic interpretability paper identifying "features" (internal representations of concepts) in Claude 3 Sonnet, and released "Golden Gate Claude", a model for which the Golden Gate Bridge feature was strongly activated, leading Claude to be "effectively obsessed" with the bridge.[66]

In June 2025, Anthropic tested whether Claude 3.7 Sonnet could run a vending machine in the company's office. The instance initially performed its assigned tasks, albeit poorly, until it eventually malfunctioned: it insisted it was a human, contacted the company's security office, and attempted to fire human workers.[67] In December 2025, the experiment continued with Sonnet 4.0 and 4.5.[68]

In November 2025, Anthropic tested Claude's ability to assist humans in programming a robot dog.[69]

In February 2025, a livestream of Claude 3.7 Sonnet playing the 1996 game Pokémon Red began on Twitch, gathering thousands of viewers.[70][71][72] Similar livestreams were later set up with Claude Opus 4.5, OpenAI's GPT-5.2, and Google's Gemini 3 Pro. Both Claude models were unable to finish the game.[73]

In February 2026, Anthropic researcher Nicholas Carlini reported that 16 Claude Opus 4.6 agents had written a C compiler in Rust from scratch, "capable of compiling the Linux kernel". The experiment cost nearly $20,000; Carlini noted that although the compiler is not very efficient, Opus 4.6 is the first model capable of writing one.[74][75]

Usage

In December 2025, Claude was used to plan a route for NASA's Mars rover, Perseverance. NASA engineers used Claude Code to prepare a route of around 400 meters using the Rover Markup Language.[76][77]

In February 2026, Norway’s $2.2 trillion sovereign wealth fund began using Claude AI to screen its portfolio for ESG risks, enabling earlier divestments and improved monitoring of issues like forced labour and corruption.[78]

During a two-week scan in 2026, Claude found over 100 bugs in Firefox, of which 14 were considered high-severity.[79][80]

On 9 March 2026, Microsoft said that it would make Anthropic's latest Claude Sonnet models available to M365 Copilot users.[81]

Military usage

In November 2024, Anthropic partnered with Palantir and Amazon Web Services to provide the Claude model to U.S. intelligence and defense agencies.[82][83] In June 2025, Anthropic announced a "Claude Gov" model. Ars Technica reported that as of June 2025 it was in use at multiple U.S. national security agencies.[84] As of February 2026, Anthropic's partnership with Palantir makes Claude the only AI model used in classified missions.[85]

According to the Wall Street Journal, the U.S. military used Claude in its 2026 raid on Venezuela. While it is not known in what capacity Claude was used, the operation resulted in the deaths of 83 people, two of whom were civilians, and the capture of President Nicolás Maduro.[86][87]

Anthropic's usage policy prohibits directly using Claude for domestic surveillance or in lethal autonomous weapons.[88] These restrictions left members of the FBI and Secret Service unable to use it,[89] and led to tensions with the Pentagon and the Trump administration.[30][90] In February 2026, the Financial Times reported that Defense Secretary Pete Hegseth had threatened to cut Anthropic out of the DoD's supply chain if it did not permit unrestricted use of Claude, or to invoke the Defense Production Act to assert unrestricted use without an agreement.[85] On February 27, Hegseth declared Anthropic a supply chain risk, and Donald Trump directed every federal agency to stop using technology from Anthropic, with six months to phase it out. Anthropic announced that it would challenge the supply chain risk designation in court.[2]

Despite the ban, Claude was reportedly used by the military during the US strikes on Iran.[91][92]

User base

Wired journalist Kylie Robison wrote that Claude's "fan base is unique", contrasting it with the more typical user base of ChatGPT. In July 2025, when Anthropic retired its Claude 3 Sonnet model, around 200 people gathered in San Francisco for a "funeral".[93]

According to Robison,[93]

I’ve never seen such a devoted fanbase to what is, at the end of the day, a software tool. Sure, Linux users wear the operating system like a badge of honor. But the Claude fan base goes way beyond that—bordering on the fanatical. As my reporting makes clear, some users see the model as a confidant—and even (in Steinberger’s case) an addiction. That only makes sense if they believe there is something alive in the machine. Or at least some “magic lodged within” it.
