Responsible AI

Your company is about to launch an AI feature that pre-screens job applications. The CEO gave the green light; the engineering team is ready. Then someone asks: “Have we actually checked whether this discriminates?”

Silence in the room. Someone points to the “AI Principles” on the company website. But nobody knows whether there’s an actual process that could stop the launch if the feature turns out to be unfair.

This is the moment where it becomes clear whether Responsible AI at your company is real — or just a PDF.

Responsible AI is not a badge or a checklist. It is the operationalized commitment to developing, deploying, and maintaining AI systems that are safe, fair, transparent, and accountable — across the entire product lifecycle.

The difference between real and performative Responsible AI: Has your company ever delayed or stopped a launch because of safety concerns? If not, your AI principles are decoration.

Anthropic uses two pillars: Constitutional AI (the model evaluates its own outputs against an inspectable set of principles) and the Responsible Scaling Policy (RSP 3.0). RSP defines AI Safety Levels (ASL-1 through ASL-4+), where higher capabilities automatically trigger stricter safety requirements. ASL-3 standards were activated for Claude Opus 4 — as a precautionary measure.

OpenAI uses the Preparedness Framework (v2). Two thresholds — High and Critical — determine when safeguards kick in. Controversy: OpenAI stated in 2025 that safety requirements could be “adjusted” if a competitor releases a risky system without similar protections.

Google combines AI Principles (since 2018) with the Secure AI Framework (SAIF) and the Frontier Safety Framework. Critics noted that the “bold and responsible” framing makes it difficult to determine when safety actually constrains product decisions.

The EU AI Act is the world’s first comprehensive AI law, using a risk-based classification system:

| Risk Tier | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, real-time biometric identification | Banned |
| High-Risk | AI in hiring, credit scoring, medical devices | Conformity assessments, CE marking, human oversight |
| Limited Risk | Chatbots, deepfakes, emotion recognition | Transparency obligations |
| Minimal Risk | Spam filters, AI in games | No specific obligations |

Staggered timeline: prohibited practices have applied since February 2025; GPAI rules and transparency obligations since August 2025; comprehensive requirements for high-risk systems take effect on August 2, 2026. Penalties: up to EUR 35M or 7% of global annual turnover, whichever is higher.
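
To make the tier system concrete, here is a minimal Python sketch of a first-pass triage. Everything in it is invented for illustration (the keyword map, the function names); actual classification is a legal determination against Art. 5 and Annex III, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "conformity assessment, CE marking, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Invented keyword map for first-pass triage only; real classification
# requires legal review, not substring matching.
TRIAGE = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(feature_description: str) -> RiskTier:
    """Return the strictest tier matched by any keyword."""
    desc = feature_description.lower()
    matches = [tier for kw, tier in TRIAGE.items() if kw in desc]
    order = list(RiskTier)  # enum is declared strictest-first
    return min(matches, key=order.index, default=RiskTier.MINIMAL)

print(classify("pre-screens job applications for hiring"))
# RiskTier.HIGH
```

Running the hiring pre-screener from the opening scenario through this triage lands it in High-Risk: conformity assessment and human oversight before launch, not after.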

The Responsible AI Reality Check — six steps for every AI feature launch:

| Step | Action | Outcome |
|---|---|---|
| 1. CLASSIFY | Map your feature to EU AI Act risk tiers | Regulatory baseline |
| 2. IDENTIFY | Name concrete harm scenarios (not abstract “bias”) | Specific risks |
| 3. DEFINE | Set safety mechanisms BEFORE launch | Guardrails, kill switches |
| 4. ESTABLISH | Create escalation paths (who decides when things go wrong?) | Clear accountability |
| 5. DOCUMENT | Record decisions and rationale | Audit trail |
| 6. MONITOR | Continuous oversight post-launch | Ongoing safety |
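
The six steps are checkable, which is the point: a launch gate either passes or blocks. A minimal sketch of such a gate, with invented field names, might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical launch-gate record; the fields mirror the six steps above.
@dataclass
class RealityCheck:
    risk_tier: str | None = None                               # 1. CLASSIFY
    harm_scenarios: list[str] = field(default_factory=list)    # 2. IDENTIFY
    guardrails: list[str] = field(default_factory=list)        # 3. DEFINE
    escalation_owner: str | None = None                        # 4. ESTABLISH
    decision_log: list[str] = field(default_factory=list)      # 5. DOCUMENT
    monitoring_plan: str | None = None                         # 6. MONITOR

    def blockers(self) -> list[str]:
        """Return the steps still open; launch only if this is empty."""
        missing = []
        if self.risk_tier is None:
            missing.append("CLASSIFY")
        if not self.harm_scenarios:
            missing.append("IDENTIFY")
        if not self.guardrails:
            missing.append("DEFINE")
        if self.escalation_owner is None:
            missing.append("ESTABLISH")
        if not self.decision_log:
            missing.append("DOCUMENT")
        if self.monitoring_plan is None:
            missing.append("MONITOR")
        return missing

check = RealityCheck(risk_tier="high-risk", harm_scenarios=["biased rejections"])
print(check.blockers())  # ['DEFINE', 'ESTABLISH', 'DOCUMENT', 'MONITOR']
```

The value is not the code but the property it enforces: a missing step blocks the launch instead of becoming a postmortem footnote.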

Scenario: Clearview AI — When Responsible AI Is Ignored

Clearview AI built a facial recognition search engine. The product: upload a photo, instantly find all publicly available images of that person — along with the sources. To do this, Clearview scraped over 20 billion images from social media platforms, news sites, and other public sources. Without the knowledge of those depicted. Without consent. Without any opt-out mechanism.

Primary customers: US law enforcement agencies. Clearview marketed the tool as an investigative aid — a “Google for faces.” There was no internal ethics board, no Responsible AI framework, no self-imposed constraints. The defense: “The data is publicly available.”

The facts:

  • France (CNIL): EUR 20M fine (2022)
  • Italy (Garante): EUR 20M fine (2022)
  • Greece (HDPA): EUR 20M fine (2022)
  • UK (ICO): GBP 7.5M (~EUR 9M) fine (2022)
  • Netherlands (AP): EUR 30.5M fine (2024)
  • Australia (OAIC): operations declared unlawful (2021)
  • ACLU settlement (2022): Clearview prohibited from selling to private companies in the US
  • Total penalties: over EUR 100M

Regulators in every jurisdiction rejected the “publicly available data” argument. Publicly visible does not mean freely exploitable. Consent is still required.

Under the EU AI Act, Clearview's practices fall under Unacceptable Risk twice over: Art. 5 prohibits both untargeted scraping of facial images from the internet to build facial recognition databases and real-time biometric identification in public spaces. The product would simply be banned. Clearview continues operating despite international prohibitions and sells to US agencies.

The question: What would have happened if someone at Clearview had applied the Responsible AI Reality Check?

Clearview AI through the Reality Check

Imagine you were a PM at Clearview in 2019. The product is built, the first law enforcement customers are interested. You apply the Reality Check:

1. CLASSIFY — Map the feature to EU AI Act risk tiers

Biometric identification of individuals in public spaces falls under Unacceptable Risk. Full stop. No compliance pathway, no mitigation catalog — the product is outright banned in the EU. At this very first step, any PM with basic knowledge of AI regulation would have pulled the emergency brake.

What Clearview did: No classification. No regulatory analysis. Just kept going.

2. IDENTIFY — Name concrete harm scenarios

The harm scenarios are obvious: mass surveillance without the knowledge of those affected. Abuse by stalkers if the tool falls into the wrong hands. False positives that drag innocent people into investigations. Chilling effects on free speech when every person in public can be identified. Discrimination through differential error rates across ethnic groups — a well-documented problem in facial recognition.

What Clearview did: No harm analysis. Not a single one of these scenarios addressed.

3. DEFINE — Set safety mechanisms BEFORE launch

What would have been needed: consent mechanisms for data collection. Opt-out for affected individuals. Access controls for the tool. Accuracy thresholds. Usage restrictions.

What Clearview did: None of this. No consent, no opt-out, no guardrails, no kill switch.
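
What those guardrails could have looked like in code, as a rough sketch (the flag name, threshold, and functions are hypothetical):

```python
import os

MIN_MATCH_CONFIDENCE = 0.99  # accuracy threshold agreed before launch

def feature_enabled() -> bool:
    # Stand-in for a feature-flag service: flipping one flag disables the system.
    return os.environ.get("FACE_SEARCH_ENABLED", "false") == "true"

def search_allowed(match_confidence: float, requester_is_vetted: bool) -> bool:
    """Gate every query: kill switch, access control, accuracy threshold."""
    if not feature_enabled():       # kill switch
        return False
    if not requester_is_vetted:     # access control
        return False
    return match_confidence >= MIN_MATCH_CONFIDENCE  # reject low-confidence matches
```

None of this is sophisticated. The point is that each mechanism exists, is testable, and can say no.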

4. ESTABLISH — Create escalation paths

Who decides when something goes wrong? Who shuts down the system in cases of misuse? Who is accountable for harm to affected individuals?

What Clearview did: No ethics board. No escalation paths. No defined accountability. When regulators identified violations, the response was not correction but resistance.

5. DOCUMENT — Record decisions and rationale

An audit trail would have documented: Why was data collected without consent? What legal basis was assumed? What risks were assessed and accepted?

What Clearview did: No documentation. The only publicly known rationale — “the data is public” — was rejected by every single regulator.

6. MONITOR — Continuous oversight post-launch

Monitoring would have meant: accuracy tracking across demographic groups. Misuse detection. Regular compliance reviews against evolving regulation.

What Clearview did: No monitoring. Instead, the database grew to 20+ billion images while bans and fines piled up across the globe.
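
A rough sketch of one monitoring primitive, per-group error tracking with a disparity alert; the data shape and threshold are assumed for illustration:

```python
from collections import defaultdict

def fpr_by_group(results: list[dict]) -> dict[str, float]:
    """False-positive rate per demographic group.

    results: [{"group": str, "predicted_match": bool, "true_match": bool}, ...]
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for r in results:
        if not r["true_match"]:
            neg[r["group"]] += 1
            if r["predicted_match"]:
                fp[r["group"]] += 1
    return {g: fp[g] / n for g, n in neg.items() if n > 0}

def disparity_alert(rates: dict[str, float], max_gap: float = 0.05) -> bool:
    """Flag when the gap between worst and best group exceeds max_gap."""
    return len(rates) > 1 and max(rates.values()) - min(rates.values()) > max_gap
```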

The result: Clearview failed at every single step. Not partially, not almost — completely. The outcome: over EUR 100M in penalties, bans in multiple countries, and the status of a canonical negative case study for AI ethics violations.

Responsible AI is not a gate before launch — it’s a property of the entire process. Principles without enforcement are worthless. The EU AI Act makes this a regulatory obligation.

  • “Publicly available” does not mean “free to exploit.” Clearview’s entire defense rested on a fallacy that every single regulator rejected. As a PM, you must validate legal assumptions — not pick the interpretation that is most convenient.
  • Without internal resistance, there is no course correction. Clearview had no ethics board, no escalation path, nobody who could or would say no. Responsible AI requires institutionalized friction — processes that can stop a launch, not merely accompany one.
  • Regulation is coming — the only question is whether you are prepared. Clearview built a product that falls into the highest prohibition category under the EU AI Act. Anyone who had run the Reality Check in 2019 would have seen this in five minutes. Over EUR 100M in fines later, the lesson is expensive but unambiguous.
  • Negative examples teach more than principle PDFs. Clearview is proof: if you ignore every step of a Responsible AI framework, the world counts the consequences in nine-figure sums.

Sources: CNIL Clearview AI Decision (2022), Garante per la protezione dei dati personali Decision (2022), HDPA Decision (2022), ICO Monetary Penalty Notice (2022), Autoriteit Persoonsgegevens Decision (2024), OAIC Determination (2021), ACLU v. Clearview AI Settlement (2022), EU AI Act Art. 5 — Prohibited Practices, Anthropic RSP 3.0 (2026), OpenAI Preparedness Framework v2 (2025), Google AI Principles
