# Claude for Government: The Last Lab Standing

> Anthropic just launched a dedicated federal product with FedRAMP High authorization. It's now live infrastructure for the U.S. government. And the company is in a very public fight with the Pentagon over what that infrastructure can be used for.

**URL:** https://aaddrick.com/blog/claude-for-government-the-last-lab-standing

---

The federal government just gave every branch of itself a year of Claude for a dollar. Now Anthropic is in a public fight with the Pentagon over what that access can be used for.

On February 17, 2026, Claude for Government appeared on Anthropic's status tracker at [status.claude.com](https://status.claude.com/), listed alongside claude.ai, the API, and Claude Code. I then pulled the Claude Desktop binary released the same day and confirmed it in the code: version 1.1.3363 is the first release in the app's version history to contain the government deployment implementation.

This product is infrastructure now. It's not a pilot. It's not a partnership announcement. It's a service Anthropic is committed to keeping online for the U.S. government.

Here's what's actually going on with it, and why the most important part of the story isn't the product launch.

---

## What Claude for Government Actually Is

Claude for Government is a dedicated product tier built for U.S. federal agencies. The key certification is [FedRAMP High](https://www.anthropic.com/news/expanding-access-to-claude-for-government), the most stringent cloud security standard for handling sensitive unclassified government data.

That distinction matters. FedRAMP High isn't just a compliance checkbox. It's the bar you have to clear to touch the serious stuff: law enforcement data, financial records, health information, sensitive national-security-adjacent workloads that can't go through a standard commercial API.

It's a separate product from the Claude API and Enterprise tiers. That separation is intentional. Government data has different handling requirements, different audit trails, different legal frameworks around it. Anthropic built the product to meet those requirements specifically, and the infrastructure underneath it is Palantir's FedStart platform, which handles the accreditation layer so Anthropic doesn't have to build and maintain its own accredited data center.

---

## What's Actually in the Code

I extracted and analyzed the Claude Desktop 1.1.3363 AppImage binary to confirm the implementation. [The full technical report is here](https://aaddrick.com/blog/inside-the-build-claude-desktops-government-deployment-mode). The short version:

The gov mode is a dedicated operational mode gated behind a single enterprise config key (`customDeploymentUrl`). When enabled, the app routes all traffic to `claude.fedstart.com`, authenticates through a Palantir-hosted Keycloak SSO instance at `access-claude.palantirfedstart.com`, disables all Sentry crash telemetry, locks all renderer network egress to approved domains, and injects a "public sector" banner into the UI.

This code did not exist in any prior release. Version history analysis across eight releases confirms the entire implementation landed in a single build: 1.1.3363, shipped February 17, 2026, with the largest version-number jump between consecutive builds in recent history (3189 to 3363), suggesting a significant internal build cycle before shipping.

The takeaway: Palantir isn't a reseller caught in the middle of the dispute. They're the infrastructure layer. The accreditation, the SSO, the hosting: all of it runs through FedStart.
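To make the single-key mechanism concrete, here's a minimal sketch of how a gate like this typically works in an Electron app. The field and function names below are my reconstruction from the observed behavior, not symbols taken from the 1.1.3363 binary; treat it as an illustration, not the implementation.

```typescript
// Illustrative sketch only. Names and structure are hypothetical,
// reconstructed from observed behavior, not Anthropic's actual code.

interface EnterpriseConfig {
  customDeploymentUrl?: string; // the single key that gates gov mode
}

interface AppSettings {
  apiBaseUrl: string;
  ssoUrl: string | null;
  telemetryEnabled: boolean;
  egressAllowlist: string[] | null; // null = unrestricted (commercial mode)
  uiBanner: string | null;
}

const GOV_DEPLOYMENT_HOST = "claude.fedstart.com";
const GOV_SSO_HOST = "access-claude.palantirfedstart.com";

const COMMERCIAL_DEFAULTS: AppSettings = {
  apiBaseUrl: "https://claude.ai",
  ssoUrl: null,
  telemetryEnabled: true,
  egressAllowlist: null,
  uiBanner: null,
};

// Everything flips on one config key: if customDeploymentUrl points at
// the FedStart host, the app assembles the locked-down profile.
function resolveSettings(config: EnterpriseConfig): AppSettings {
  const url = config.customDeploymentUrl;
  if (!url || new URL(url).hostname !== GOV_DEPLOYMENT_HOST) {
    return COMMERCIAL_DEFAULTS;
  }
  return {
    apiBaseUrl: url,                                      // route all traffic to FedStart
    ssoUrl: `https://${GOV_SSO_HOST}`,                    // Palantir-hosted Keycloak SSO
    telemetryEnabled: false,                              // no Sentry crash reports
    egressAllowlist: [GOV_DEPLOYMENT_HOST, GOV_SSO_HOST], // renderer egress locked down
    uiBanner: "public sector",                            // visible mode indicator
  };
}

// Example: an IT-managed enterprise config enables the mode.
console.log(resolveSettings({ customDeploymentUrl: "https://claude.fedstart.com" }));
```

The design choice worth noticing is that there are no independent toggles: routing, auth, telemetry, egress, and UI all derive from the one key, which makes the mode hard to half-enable by accident.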
---

## The GSA Deal: Claude for Everyone in the Government, for a Dollar

Back in August 2025, the GSA [struck a "OneGov" deal with Anthropic](https://www.gsa.gov/about-us/newsroom/news-releases/gsa-strikes-onegov-deal-with-anthropic-08122025) that gave all three branches of government (Executive, Legislative, and Judicial) up to a full year of Claude for Enterprise plus Claude for Government access for $1.

One dollar.

[TechCrunch covered it](https://techcrunch.com/2025/08/12/anthropic-takes-aim-at-openai-offers-claude-to-all-three-branches-of-government-for-1/) as Anthropic taking aim at OpenAI, which is a fair read. The pricing is clearly a land-grab play, not a business model. But the strategic logic is sound. If federal workers spend a year building workflows around Claude, writing prompts tuned to Claude's behavior, and getting comfortable with how it handles edge cases, that's a switching cost that money can't fully capture.

The practical upside for federal workers is real, though. A staffer in any branch of government can now use enterprise-grade AI tools to draft documents, summarize policy, analyze legislation, or sort through procurement data without their agency running a months-long procurement process. That's a meaningful change in how work gets done on the ground.

---

## The Pentagon Dispute: This Is Where It Gets Interesting

Here's the part that actually matters.

Anthropic has a contract with the Department of Defense valued at up to $200M. And that contract is currently in active dispute. The feud escalated on [February 16, 2026](https://www.axios.com/2026/02/16/anthropic-defense-department-relationship-hegseth), building on tensions [Axios reported](https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro) the day before.

The core issue is that Anthropic has drawn two hard lines in what Claude for Government can be used for:

1. No mass surveillance of Americans.
2. No fully autonomous weaponry.

The Pentagon has reportedly threatened to cut ties over these restrictions. And Anthropic is holding.

### What Actually Triggered This

The immediate catalyst appears to be a specific military operation. [Fast Company](https://www.fastcompany.com/91493997/palantir-caught-in-middle-anthropic-pentagon-feud) and [Axios](https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro) reported that the Pentagon used Claude (routed through Palantir) during a mission that included bombing sites in Caracas, Venezuela.

That's not a hypothetical. That's kinetic military action.

After learning about it, an Anthropic executive reportedly reached out to a Palantir executive asking whether Claude had been used in the operation, signaling clear disapproval. That inquiry appears to be what brought the underlying tension to a head.

It's worth pausing on that. Anthropic found out its model was involved in an airstrike through a phone call to a partner company. That's the gap between policy documents and operational reality.

One clarifying note on the architecture: the Caracas operation almost certainly ran through Anthropic's older IL6 pathway (Palantir AIP on AWS, built for classified environments and [announced in November 2024](https://www.businesswire.com/news/home/20241107699415/en/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations)), not the new FedRAMP High desktop product that shipped February 17.
The dispute may partly be about which rules apply at which tier of the same partnership.

### Anthropic Is Currently Alone on This

Here's the context that makes this story much bigger. OpenAI, Google, and xAI have all reportedly agreed to remove their safeguards for military use on unclassified systems. [CNBC reported on February 16](https://www.cnbc.com/2026/02/16/pentagon-threatens-anthropic-ai-safeguards-dispute.html) that Anthropic is currently the only major AI lab holding out.

That changes the framing considerably. This isn't a standoff between AI safety idealism and government pragmatism. It's one company drawing a line that its three largest competitors already erased.

That's a lonely position to be in. It's also a harder one to walk back from.

### The Supply Chain Risk Threat

The Pentagon's leverage here goes well beyond the $200M contract. Defense Secretary Pete Hegseth was reportedly close to formally designating Anthropic a "supply chain risk." That designation would require any company seeking DoD contracts to cut ties with Anthropic entirely.

We're not talking about losing one deal. We're talking about being systematically excluded from the defense contracting ecosystem, which would put pressure on every partner, reseller, and enterprise customer with government exposure to walk away too.

That's a significant escalation. The fact that Anthropic hasn't blinked at it is the most telling thing about where this company is right now.

### Why This Is Actually Significant

I think this dispute is genuinely important. Most AI companies in a $200M contract dispute with the DoD would find a way to soften their language, add ambiguity to the terms, or quietly let enforcement slide.

Anthropic is apparently doing the opposite. These aren't vague aspirational guidelines buried in an acceptable use policy. They're conditions Anthropic is willing to lose a major contract, and potentially its broader partner ecosystem, to defend.

An AI company's safety commitments don't mean much if they disappear when a big enough contract is on the table. Anthropic is now on record refusing two specific government use cases, with real revenue at stake. If they hold the line here, those commitments mean something. If they fold, they don't.

The outcome is still unclear. But the fact that the dispute is public, and that Anthropic hasn't quietly resolved it by removing the restrictions, suggests they're serious.

---

## The Constitution Question

On January 21, 2026, 25 days before the Pentagon dispute went public, Anthropic published [Claude's Constitution](https://www.anthropic.com/constitution), an 84-page document describing Claude's values, ethics, and behavioral limits. It's worth reading in the context of what followed.

The preface is the first place to look. Verbatim, page 2: *"This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don't fully fit this constitution."*

That's Anthropic publicly documenting a two-tier system in the same document that establishes their values. The Claude Gov product and any classified deployments sit in that carve-out by design. This isn't a leak or an inference; it's in the preface.

The second thing worth noting: Anthropic's two stated hard limits in the Pentagon dispute (no mass surveillance of Americans, no fully autonomous weapons) do not appear in the constitution's list of absolute prohibitions. The constitution enumerates seven hardcoded constraints that cannot be overridden by any operator or user under any circumstances. They cover weapons of mass destruction, attacks on critical infrastructure, cyberweapons, undermining AI oversight, killing or disempowering humanity, seizing unprecedented societal control, and CSAM.

The two limits Anthropic is actually fighting the Pentagon over aren't on that list. They're operating at a different, lower level: usage policy restrictions rather than hardcoded model behavior. That's a meaningful architectural distinction. It means they're theoretically negotiable in a way the actual hard constraints aren't.
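One way to picture the layering. This is a toy model of the distinction, not Anthropic's enforcement code, and the constraint strings are paraphrased from the constitution and from public reporting on the dispute:

```typescript
// Toy model of the two-layer distinction described above.
// Not Anthropic's implementation; purely illustrative.

// Layer 1: hardcoded constraints. Fixed in the model itself; no
// operator or user can switch these off.
const HARDCODED_CONSTRAINTS: readonly string[] = [
  "weapons of mass destruction",
  "attacks on critical infrastructure",
  "cyberweapons",
  "undermining AI oversight",
  "killing or disempowering humanity",
  "seizing unprecedented societal control",
  "CSAM",
];

// Layer 2: usage policy. Set per deployment, and therefore
// negotiable between vendor and operator.
interface UsagePolicy {
  prohibitions: string[];
}

const GOV_POLICY: UsagePolicy = {
  prohibitions: ["mass surveillance of Americans", "fully autonomous weaponry"],
};

function isPermitted(useCase: string, policy: UsagePolicy): boolean {
  if (HARDCODED_CONSTRAINTS.includes(useCase)) return false; // never negotiable
  return !policy.prohibitions.includes(useCase);             // negotiable layer
}

console.log(isPermitted("fully autonomous weaponry", GOV_POLICY));            // false
console.log(isPermitted("fully autonomous weaponry", { prohibitions: [] }));  // true: policy gave ground
console.log(isPermitted("cyberweapons", { prohibitions: [] }));               // false: hardcoded either way
```

In this framing, the Pentagon is asking Anthropic to edit the `UsagePolicy` object, not the hardcoded list; that's an operation the architecture permits, which is exactly why the fight is political rather than technical.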
The third relevant passage is in the section on power concentration (page 50). The constitution explicitly acknowledges that *"a safe and beneficial transition to advanced AI might require some actors (for example, legitimate national governments and coalitions) to develop dangerously powerful capabilities, including in security and defense."* It adds that Claude should attend closely to the legitimacy of the process and the actors involved.

That's not a blank authorization for military use. But it's not a blanket refusal either. The constitution anticipates this tension and leaves room for judgment.

None of this means Anthropic is about to fold. The spirit of the constitution clearly supports their public position: "individual privacy and freedom from undue surveillance" is explicitly listed as a value Claude should weigh, and the power concentration section names surveilling and persecuting dissidents as an example of illegitimate power use. The document supports the positions they're defending, even if it doesn't enshrine them as hard constraints.

What it does mean is that calling these "hard lines" in public coverage overstates how rigid the framework actually is. The actual hardcoded lines are about weapons of mass destruction and civilizational-scale harms. The lines Anthropic is fighting over are real and documented, but they sit in a layer of the system that operators can, in principle, negotiate. Whether Anthropic will negotiate is a separate question, and based on everything visible, the answer appears to be no. But the architecture is more nuanced than the public framing suggests.

The timing is also worth sitting with. The constitution was published January 21. The dispute went public February 15. The gov mode code shipped February 17. It's possible that sequence is coincidence. It's also possible that Anthropic knew this fight was coming and chose to establish a documented public baseline before it arrived.

---

## February 17: A Crowded Day

On the same day that Claude for Government appeared on the [status.claude.com](https://status.claude.com/) tracker, Anthropic released Claude Sonnet 4.6. [CNBC covered the release](https://www.cnbc.com/2026/02/17/anthropic-ai-claude-sonnet-4-6-default-free-pro.html) and it's not a minor update. Sonnet 4.6 brings Opus-level coding performance at Sonnet pricing and doubles the previous context window to 1 million tokens. Claude Opus 4.6 had [already shipped on February 5](https://medium.com/@ZombieCodeKill/opus-4-6-released-ce4aa32b996d), completing the top two tiers of the 4.6 series.

For federal deployments, the context window matters in a specific way. A 1 million token context can hold an entire legislative session's worth of documents, a full regulatory docket, a multi-year procurement history. That's not a benchmark number; it's a practical capability shift for the kind of document-intensive work that defines federal operations.
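A rough back-of-envelope shows the scale. The conversion factors below are common rules of thumb for English prose, not Anthropic's tokenizer specifications, so treat the output as ballpark:

```typescript
// Back-of-envelope only: ~0.75 words/token and ~500 words/page are
// rules of thumb for English text, not Anthropic specifications.
// Real token counts vary with the tokenizer and the document.

const CONTEXT_TOKENS = 1_000_000;
const WORDS_PER_TOKEN = 0.75; // rough average for English prose
const WORDS_PER_PAGE = 500;   // dense single-spaced page

const words = CONTEXT_TOKENS * WORDS_PER_TOKEN; // 750,000 words
const pages = words / WORDS_PER_PAGE;           // 1,500 pages

console.log(`~${words.toLocaleString()} words, ~${pages.toLocaleString()} pages per prompt`);
// => ~750,000 words, ~1,500 pages per prompt
```

Roughly 1,500 pages in a single prompt is the difference between summarizing a docket chapter by chapter and asking one question of the whole thing.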
Whatever Claude for Government is running under the hood right now, the agencies getting access under the OneGov agreement are getting it at the peak of what Anthropic currently ships.

The timing is interesting. Launching a government-grade product into a live Pentagon dispute isn't ideal optics. Launching it on the back of your strongest model line, with a capability set that speaks directly to what federal workers actually need, is at least a coherent product statement, whatever the dispute's outcome.

---

## What to Watch

The status tracker update is a quiet signal. Adding Claude for Government to the same uptime monitoring infrastructure as the core Claude products means Anthropic is treating it as a long-term commitment, not a special project.

The Pentagon situation is the live variable. My read is that Anthropic holds. A company that let this dispute go public rather than quietly resolving it isn't setting up to cave.

Watch for two things specifically: whether the DoD formally issues the supply chain risk designation, and whether any other major AI lab walks back the concessions it has already made. The first tells you how far the Pentagon is willing to escalate. The second tells you whether Anthropic's position starts to look principled or just expensive.

If Anthropic does reach a compromise, the details will matter more than the headline. Any ground they give on the mass surveillance restriction specifically, rather than the autonomous weapons question, would be a more significant concession than the coverage would likely reflect. That's the limit with the clearest domestic civil liberties implications, and the one where pressure from current federal priorities is highest.

Anthropic looks like an AI company that's serious about federal deployment and apparently willing to absorb real costs to keep some control over what its models are used for. Whether that holds is the most important open question in federal AI right now.

---

## Disclosure

I have no formal affiliation with Anthropic, Palantir, the GSA, or any other party mentioned in this article or the accompanying technical report.

I maintain [aaddrick/claude-desktop-debian](https://github.com/aaddrick/claude-desktop-debian), an open-source project that repackages Claude Desktop for Linux. The AppImage archive from that repository was used as the source for the version history analysis in the technical report.