Distraction Rebellion

OpenAI Ethics & Impact review (v1.2)

Overall: 2.5 / 5 (rounded to the nearest 0.5 after floor/cap rules; a sketch of the arithmetic follows the category scores)
Regulatory Heat: Medium (Italy's 2023 order; FTC inquiry; active copyright cases)
Stance (UK tone): Ropey (strong safety work, shaky governance and sourcing; thin environmental transparency)

Not ad-funded and often quick with abuse mitigations—but a lot of the “good” arrived after regulators knocked or the press found problems. Add in contested data sourcing, governance wobbles, labour issues and scant environmental disclosure, and you end up here: principled in parts, pragmatic when pushed. Our view: use with caution, and keep an eye on the paperwork.

Category scores

1) Data Practices & Protection: 2.6
2) How They Make Money (incl. Monopoly/Competition): 3.1
3) Manipulative Design: 3.6
4) Mental Health & Minority Safety: 3.0
5) Environmental Impact: 2.0
6) Employee & Supply Chain: 2.1
7A) Elections & Civic Discourse: 3.2
7B) Lobbying & Policy Influence: 2.2
7C) Geopolitics & Sanctions: 3.0
8) Child & Youth Impact: 2.8
9) Community & Fair Tax: 2.0
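
A minimal sketch of the headline arithmetic, assuming an unweighted mean of the eleven category scores, clamped between a floor and a cap, then rounded to the nearest 0.5. The review does not publish its exact weighting or the floor/cap thresholds, so the mechanics below are illustrative assumptions, not the published method.

    # Hypothetical reconstruction of the headline score. The unweighted mean,
    # the floor/cap thresholds and the rounding step are assumptions; the
    # review does not publish its exact weighting or floor/cap rules.
    SCORES = {
        "Data Practices & Protection": 2.6,
        "How They Make Money (incl. Monopoly/Competition)": 3.1,
        "Manipulative Design": 3.6,
        "Mental Health & Minority Safety": 3.0,
        "Environmental Impact": 2.0,
        "Employee & Supply Chain": 2.1,
        "Elections & Civic Discourse": 3.2,
        "Lobbying & Policy Influence": 2.2,
        "Geopolitics & Sanctions": 3.0,
        "Child & Youth Impact": 2.8,
        "Community & Fair Tax": 2.0,
    }

    def overall(scores: dict[str, float], floor: float = 1.0, cap: float = 5.0) -> float:
        """Unweighted mean, clamped to [floor, cap], rounded to the nearest 0.5."""
        mean = sum(scores.values()) / len(scores)
        clamped = min(max(mean, floor), cap)
        return round(clamped * 2) / 2

    print(overall(SCORES))  # 2.5, matching the headline figure

Under these assumptions the mean is 29.6 / 11 ≈ 2.69, which rounds down to the published 2.5.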

Regulatory track record (36 months)

The headline items (Italy's 2023 order, the FTC inquiry, and the active copyright cases) are reflected as maluses in the relevant categories (Data; Business model/ethics; Governance/Civic); a sketch of one possible mechanic follows.
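
A minimal sketch of how a malus might be applied, reusing SCORES and overall from the sketch above. The flat-deduction mechanic, the 1.0 floor and the penalty size are assumptions; the review does not publish the exact penalty mechanics.

    # Hypothetical malus mechanic: a flat deduction to the affected category,
    # floored at 1.0. The penalty size is an illustrative assumption.
    def apply_malus(scores: dict[str, float], category: str, penalty: float) -> dict[str, float]:
        """Return a copy of scores with a flat deduction applied to one category."""
        adjusted = dict(scores)
        adjusted[category] = max(adjusted[category] - penalty, 1.0)
        return adjusted

    # e.g. reflecting the Italian order and the FTC inquiry in Data Practices:
    adjusted = apply_malus(SCORES, "Data Practices & Protection", 0.4)
    print(overall(adjusted))  # recompute the headline after the deduction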

Category write-ups

1) Data Practices & Protection — 2.6 / 5

API data isn’t used for training by default, and ChatGPT has a chat history/training toggle. That’s good, though the Italian case shows some of these controls arrived only under regulatory pressure.

  • Pros: API data excluded from training by default; user controls to keep chats out of training; public incident reporting and a bug bounty programme.
  • Cons: Training-data provenance contested (copyright suits); key privacy fixes arrived under GDPR pressure rather than proactively; the March 2023 Redis library bug briefly exposed some users’ chat titles and payment details.

2) How They Make Money (incl. Monopoly/Competition) — 3.1 / 5

Subscriptions, API/enterprise and licensed content deals—not ads. Licensing momentum (AP, FT, News Corp, Axel Springer, Reddit) is positive, but pending copyright cases keep the ethics of earlier web-scale scraping in dispute.

  • Pros: Pay-for-product (Plus/Teams/Enterprise); licensing with major publishers reduces reliance on unlicensed data.
  • Cons: Ongoing lawsuits over past training data; FTC inquiry into practices; heavy competition scrutiny of strategic tie-ups in the EU and UK (notably the Microsoft partnership).

3) Manipulative Design — 3.6 / 5

No infinite feeds or surveillance ads; some nudging to upgrade, but relatively calm defaults compared with social platforms.

  • Pros: Task-first UI; minimal attention traps.
  • Cons: Occasional pushy upsells; hallucinations can still mislead if you’re not careful (separate from “time-waste”, but relevant to outcomes).

4) Mental Health & Minority Safety — 3.0 / 5

Safety policies, system cards and a teen mode help, but LLMs can still produce harmful or biased content when probed. External evidence shows LLMs can be steered towards disinformation, so guardrails matter.

  • Pros: Safety policies; red-team style research access in places; some transparency on risky use.
  • Cons: Inconsistent measurement disclosure (no regular, comparable prevalence metrics of the kind social platforms publish); prompt-based circumvention remains a risk.

5) Environmental Impact — 2.0 / 5

High-compute models imply non-trivial energy/water use, but there’s no dedicated, company-level public emissions report or model-level life-cycle assessment (LCA) from OpenAI we can cite; disclosure is thin compared with best-in-class sustainability reporting.

  • Pros: None clearly evidenced on public emissions transparency to date.
  • Cons: No robust, model-level footprint disclosure we could locate on openai.com; industry research flags AI energy intensity generally. (We’ll revise if/when OpenAI publishes formal data.)

6) Employee & Supply Chain — 2.1 / 5

Investigations (notably TIME’s January 2023 report) found Kenyan contractors paid under $2/hour to filter toxic training data; significant safety-team turnover in 2024 raised culture questions.

  • Pros: Some acknowledgement of the need for better tooling and safety processes over time.
  • Cons: Kenyan moderation outsourcing on very low wages (historic); the 2024 dissolution of the Superalignment team and wider attrition in safety functions.

7A) Elections & Civic Discourse — 3.2 / 5

Election policies, partnerships, and takedowns of covert influence operations (IO) are positives; research still finds LLMs can be bent towards civic harm if guardrails slip.

  • Pros: Bans on impersonation/campaigning; provenance credentials (C2PA) on generated images; IO disruption reports.
  • Cons: Enforcement transparency is still early; external studies show misuse remains feasible.

7B) Lobbying & Policy Influence — 2.2 / 5

Reported hard lobbying on the EU AI Act (including a brief threat to leave, later walked back) suggests policy aims not always aligned with the strongest safeguards.

7C) Geopolitics & Sanctions — 3.0 / 5

Country availability and sanctions compliance appear standard for a US company—service is blocked in a number of high-risk/embargoed locations.

8) Child & Youth Impact — 2.8 / 5

Age limits and a teen mode exist; in some countries OpenAI now uses ID-based age checks (e.g., Yoti). Elsewhere, age gating is still lighter. Advertising to minors isn’t an issue (no ads).

  • Pros: Teen-specific safeguards; ID checks in certain regions post-regulatory pressure.
  • Cons: Not yet a uniform, robust global age-assurance standard across all markets.

Age verification adequacy: Mixed. Stronger in some jurisdictions (ID-based), weaker elsewhere (self-declared age).

9) Community & Fair Tax — 2.0 / 5

No public, GRI-207-style country-by-country tax transparency we could find; overall disclosure on community investment and grants is limited compared with best-in-class.

  • Pros: N/A — insufficient formal transparency to credit.
  • Cons: Sparse tax/community reporting; unclear effective tax rate geographically.

What would change these scores

We’ll revise scores if OpenAI publishes robust sustainability/tax disclosures, expands ID-based age checks globally, or settles the major IP cases with clear terms.

Key sources