Distraction Rebellion

The Big Tech Ethical Rankings: what are they like?

Check out the scoreboard of the best and the worst companies in the world of big tech. We have a robust methodology that rates these companies on a wide range of metrics, and then ranks them in their race to the bottom.

Pick the metrics and ethics that matter most to you, and decide if these are businesses worth your time, data and money.

Full company scores

Company | Overall | Regulatory Heat | 1 Data | 2 Money | 3 Design | 4 MH & Minority | 5 Env | 6 Workers | 7A Elections | 7B Lobbying | 7C Geopolitics | 8 Youth | 9 Tax | Updated
Apple | 4.2 · Decent | Low | 4.5 | 4 | 4 | 4.2 | 4 | 4.1 | 4 | 4.2 | 4.1 | 4 | 4.3 | 2025-09-13
Alphabet (Google) | 2.6 · Ropey | High | 2.2 | 1.8 | 2.5 | 2.2 | 3.3 | 3.2 | 2.4 | 2.3 | 3.4 | 2.2 | 2 | 2025-09-13
Microsoft | 3.8 · Decent | Medium | 3.9 | 3.6 | 3.6 | 3.6 | 3.7 | 3.9 | 3.5 | 3.6 | 3.9 | 3.4 | 3.2 | 2025-09-13
Meta | 2.2 · Absolute wrong’un | High | 1.8 | 1.5 | 2.2 | 1.9 | 3 | 2.9 | 2 | 2.1 | 2.8 | 1.9 | 1.8 | 2025-09-13
Amazon | 2.9 · Ropey | High | 2.6 | 2.2 | 2.7 | 2.6 | 3.2 | 2.6 | 2.7 | 2.4 | 3.2 | 2.4 | 1.9 | 2025-09-13
ByteDance (TikTok) | 1.8 · Absolute wrong’un | High | 1.5 | 1.3 | 2.2 | 1.6 | 2.8 | 2.4 | 1.7 | 1.9 | 1.6 | 1.6 | 1.7 | 2025-09-13
X Corp (Twitter) | 1.5 · Absolute wrong’un | High | 1.2 | 1.1 | 1.6 | 1.2 | 2.6 | 2 | 1.2 | 1.6 | 2.2 | 1.2 | 1.5 | 2025-09-13
Snap (Snapchat) | 2.7 · Ropey | Medium | 2.4 | 2.3 | 2.8 | 2.5 | 3 | 2.8 | 2.5 | 2.7 | 2.9 | 2.3 | 2.2 | 2025-09-13
Reddit | 2.6 · Ropey | Medium | 2.4 | 2.1 | 2.7 | 2.3 | 2.9 | 2.7 | 2.4 | 2.5 | 2.9 | 2.3 | 2.1 | 2025-09-13
Netflix | 3.6 · Decent | Low | 3.6 | 3.4 | 3.5 | 3.5 | 3.4 | 3.6 | 3.4 | 3.4 | 3.8 | 3.4 | 3 | 2025-09-13
Spotify | 3.2 · Ropey | Medium | 3.1 | 3 | 3 | 3 | 3.2 | 3.1 | 3 | 3 | 3.4 | 3 | 2.6 | 2025-09-13
Mozilla | 4.5 · Top notch | Low | 4.7 | 4.5 | 4.4 | 4.5 | 4 | 4.4 | 4.3 | 4.6 | 4.6 | 4.3 | 4.2 | 2025-09-13
Signal | 4.9 · Top notch | Low | 5 | 4.8 | 4.7 | 4.8 | 4.2 | 4.6 | 4.6 | 4.6 | 4.7 | 4.6 | 4.4 | 2025-09-13
Proton | 4.6 · Top notch | Low | 4.9 | 4.5 | 4.4 | 4.5 | 4.1 | 4.3 | 4.4 | 4.5 | 4.6 | 4.4 | 4.2 | 2025-09-13
Fairphone | 4.4 · Decent | Low | 4.2 | 4.1 | 4.2 | 4.3 | 4.9 | 4.8 | 4 | 4.1 | 4.3 | 4.2 | 4 | 2025-09-13
Samsung | 3.2 · Ropey | Medium | 3 | 3 | 3 | 3.1 | 3.6 | 3.3 | 3 | 3 | 3.4 | 3 | 2.6 | 2025-09-13
Huawei | 1.9 · Absolute wrong’un | High | 1.8 | 1.6 | 2.2 | 1.9 | 3.1 | 2.6 | 2 | 1.8 | 1.3 | 1.9 | 1.7 | 2025-09-13
OpenAI | 3 · Ropey | Medium | 2.8 | 3.2 | 3.2 | 2.9 | 3 | 3.1 | 2.8 | 3 | 3.2 | 2.8 | 2.6 | 2025-09-13
Zoom | 3.4 · Ropey | Medium | 3.2 | 3.2 | 3.3 | 3.3 | 3.2 | 3.4 | 3.1 | 3.2 | 3.5 | 3.1 | 2.8 | 2025-09-13
Adobe | 3.3 · Ropey | Medium | 3.2 | 3 | 3.1 | 3.1 | 3 | 3.3 | 3 | 3.1 | 3.4 | 3 | 2.7 | 2025-09-13
Cloudflare | 4 · Decent | Low | 4 | 3.9 | 3.9 | 3.9 | 3.8 | 4 | 3.8 | 4 | 4.2 | 3.8 | 3.2 | 2025-09-13
Oracle | 2.8 · Ropey | Medium | 2.7 | 2.6 | 2.6 | 2.7 | 3 | 3 | 2.6 | 2.6 | 3.2 | 2.6 | 2.4 | 2025-09-13
IBM | 3.4 · Ropey | Low | 3.5 | 3.2 | 3.2 | 3.3 | 3.4 | 3.6 | 3.2 | 3.4 | 3.8 | 3.2 | 3 | 2025-09-13
NVIDIA | 3.1 · Ropey | Medium | 3 | 3 | 3 | 3 | 3.2 | 3.1 | 2.9 | 3 | 3.4 | 2.8 | 2.6 | 2025-09-13
AMD | 3.2 · Ropey | Low | 3.2 | 3 | 3.1 | 3 | 3.3 | 3.2 | 3 | 3 | 3.3 | 2.9 | 2.6 | 2025-09-13
Intel | 3.1 · Ropey | Low | 3 | 3 | 3 | 3 | 3.2 | 3.3 | 3 | 3.1 | 3.4 | 2.9 | 2.6 | 2025-09-13
Telegram | 2.8 · Ropey | Medium | 2.6 | 2.6 | 2.6 | 2.5 | 3 | 2.7 | 2.3 | 2.5 | 2.6 | 2.4 | 2.1 | 2025-09-13
Discord | 2.9 · Ropey | Medium | 2.8 | 2.7 | 2.9 | 2.5 | 3 | 2.9 | 2.6 | 2.7 | 2.9 | 2.6 | 2.2 | 2025-09-13
Pinterest | 3.4 · Ropey | Low | 3.3 | 3.2 | 3.3 | 3.2 | 3.2 | 3.3 | 3.1 | 3.2 | 3.5 | 3 | 2.7 | 2025-09-13
Uber | 2.5 · Ropey | Medium | 2.3 | 2.2 | 2.5 | 2.3 | 2.9 | 2.1 | 2.4 | 2.3 | 3 | 2.2 | 2 | 2025-09-13
Airbnb | 3.1 · Ropey | Medium | 3 | 3 | 3.1 | 3 | 3 | 3.2 | 3 | 3 | 3.3 | 3 | 2.6 | 2025-09-13
Booking.com | 2.8 · Ropey | Medium | 3.2 | 2.6 | 3 | 2.8 | 3 | 2.6 | 2.8 | 2.6 | 1.6 | 3.2 | 2.2 | 2025-09-15
PayPal | 3.3 · Ropey | Medium | 3.2 | 3.2 | 3.2 | 3.1 | 3 | 3.2 | 3 | 3.1 | 3.4 | 3 | 2.6 | 2025-09-13

Columns map to categories; Monopoly is included within “Money”.

What we measure (and why)

Each category includes what we look for, how we check it, and the key sources we rely on.

1) Data Practices & Protection

Can you trust them with your data—and can you leave with it?

We look for: short retention; data minimisation; no selling/sharing; clear purposes/legal bases; easy export & delete; AI training rules and opt‑outs; strong preventive security (audits/certs, bug bounty) and incident response (fast, candid breach handling); open standards/APIs and migration.

How we’ll check: read Privacy/ToS/AI pages (versioned); test export/delete; inspect defaults; verify audits/certs/bug bounty; review breach/incident posts and regulator actions.

Sources: Privacy/ToS; AI policy; help docs; security/trust centers; ICO/EDPB guidance & decisions; Open Terms Archive; NVD/CVE, Have I Been Pwned; BSI/SGS cert directories; Data Transfer Project, W3C/IETF.

2) How They Make Money (includes Monopoly & Competition)

Is the business model built on serving you—or on squeezing your attention/data?

We look for: ads vs subscriptions/hardware; reliance on targeting or data brokerage; “pay to stop tracking” upsells; revenue levers that push over‑use.

  • Monopoly & Competition: self-preferencing, tying/bundling, default distribution, antitrust conduct, restrictive creator/publisher terms, and general business ethics.

How we’ll check: read pricing/ads policy pages; map ad products/targeting; compare with investor filings; review antitrust/competition cases and business practices.

Sources: company pricing/ad docs; IR filings (10‑K/20‑F); reputable market analyses; app‑store listings; EC/DoJ/FTC/CMA cases; DMA gatekeeper pages; credible industry studies.

3) Manipulative Design

Does the design respect your time and choices?

We look for: autoplay/infinite scroll; FOMO prompts; dark patterns in consent/checkout; real time‑limits & calm defaults; teen‑targeted tricks.

How we’ll check: hands‑on UX audit; review consent/payment flows; check help/docs for limits & controls.

Sources: company help/guidelines; CMA/ICO/FTC dark‑pattern work; Deceptive.Design; HCI research.

4) Mental Health & Minority Safety

Do they minimise real harm—especially to mental health and protected groups?

We look for: harmful‑content prevalence/exposure; views before removal; time‑to‑action; repeat‑offender handling; disparate‑impact protection across languages/regions; wellbeing controls (take‑a‑break, quiet notifications, hide likes, keyword filters, recommendation controls); independent assurance.

How we’ll check: match policy text to published enforcement metrics; test wellbeing/safety controls; review language coverage and appeals; read independent indices and regulator risk‑assessments; capture screenshots.

Sources: Community/Safety standards and enforcement/transparency reports; GLAAD Social Media Safety Index; ADL Online Hate & Harassment; Ofcom/Online Safety; EU DSA risk assessments; reputable academic work.

Floor: if this category scores below 2.5/5 for a UGC‑heavy product, the overall stance can’t exceed Usable. Protected‑group safety guardrail: if credible evidence shows sustained under‑enforcement against hate targeting protected groups in major markets, we may cap the stance at Usable pending remediation.

5) Environmental Impact

Emissions, energy, and waste—beyond the hype.

We look for: science‑based targets (incl. Scope 3); credible renewables (ideally hourly match); datacentre efficiency; repairability/right‑to‑repair; LCAs; e‑waste.

How we’ll check: verify SBTi/CDP/RE100 status; read LCAs; check repair policies.

Sources: company sustainability/ESG (TCFD/CSRD annexes); SBTi, CDP, RE100, TPI; iFixit.

6) Employee & Supply Chain Treatment

How people are treated from moderation to manufacturing.

We look for: labour standards; independent audits with remediation; living wage; contractor protections (e.g., moderators); union stance; whistleblower protection.

How we’ll check: review Supplier Code; Modern Slavery statements; audit frameworks (RBA/Sedex/SA8000); incident data and remediation.

Sources: Supplier Code/MSA/ESG; KnowTheChain; WBA/CHRB; Business & Human Rights Resource Centre; US DoL lists.

7) Civic Influence & Geopolitics

How they shape civic life and behave on the world stage.

  • 7A. Elections & Civic Discourse — political‑ads rules/transparency; dis/misinformation response; researcher access; documented misuse + mitigations. A floor can apply: if this sub‑category scores below 2.0/5 in a major market, the overall stance is capped at Usable.
  • 7B. Lobbying & Policy Influence — itemised lobbying/trade bodies; alignment with stated values; takedown/data‑request transparency & pushback.
  • 7C. Geopolitics & Sanctions — sanctions compliance; human‑rights due diligence; exposure to/state collaboration in high‑risk regimes.

How we’ll check: pull registries; compare lobbying to public positions; read takedown/data‑request reports; review HRDD/sanctions statements.

Sources: OpenSecrets; EU Transparency Register; UK Lobbying Register; DSA/OSA materials; UNGP/OECD; OFSI/OFAC/EU sanctions; OHCHR/HRW.

8) Child & Youth Impact

Are under‑18s safe by default?

We look for: private by default; DMs from unknown accounts, location sharing and autoplay off by default; no ads or profiling aimed at minors; sensible age assurance; fast reporting/appeals.

How we’ll check: verify defaults in product; examine youth safety docs; test reporting/appeals; check AADC/COPPA alignment.

Sources: Youth Safety Centers; ICO Children’s Code; Ofcom/eSafety; UNICEF; academic trust‑and‑safety research.

9) Community & Fair Tax

Do they contribute fairly—and transparently?

We look for: GRI‑207/CbCR alignment; effective tax rate vs statutory; disclosed rulings/incentives; secrecy‑jurisdiction exposure; meaningful community investment with independent governance.

How we’ll check: read tax transparency/annuals; compute headline ETR vs statutory; check CbCR (as published); review corporate foundation reports.
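
The “headline ETR vs statutory” comparison is simple arithmetic. Here is a minimal Python sketch; the function names and the example figures are our own illustrations, not drawn from any filing, and a real review would use the numbers as published in the annual report.

```python
# Minimal sketch of the headline ETR check; illustrative only.

def effective_tax_rate(income_tax_expense: float, pre_tax_profit: float) -> float:
    """Headline effective tax rate straight from the income statement."""
    return income_tax_expense / pre_tax_profit

def etr_gap(income_tax_expense: float, pre_tax_profit: float, statutory_rate: float) -> float:
    """Percentage-point gap between the statutory rate and the headline ETR."""
    return statutory_rate - effective_tax_rate(income_tax_expense, pre_tax_profit)

# Hypothetical example: 1.2bn of tax expense on 8bn of pre-tax profit against a
# 25% statutory rate gives a 15% headline ETR and a 10-point gap to explain.
print(round(etr_gap(1.2e9, 8.0e9, 0.25), 2))  # 0.1
```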

Sources: tax transparency & ESG; Fair Tax Mark; Tax Justice Network; GRI‑207; Charity Commission/IRS 990‑PF.

How we score (quick guide)

  • Scale: each category is scored 0–5; overall is the simple average, rounded to 0.5 (see the sketch after this list).
  • Evidence: 3–5 primary sources per category plus up to 2 reputable third‑party sources.
  • Scoring: each category gets a one‑line rationale and links/screenshots.
  • Regulatory Track Record (RTR): recent fines/decisions lower the relevant category (privacy, antitrust, child safety, etc.). Repeat and recent cases bite harder.
  • Floor rules (safety): weak Mental Health & Minority Safety (#4) or Elections & Civic Discourse (#7A) can cap a platform at Usable.
  • Hard‑fail caps: ads to minors, sale of sensitive data, spyware‑style tracking, negligent breach response, or serious human‑rights issues cap a company at Caution/Avoid.
  • N/A handling: if a category genuinely doesn’t apply (e.g., speech for non‑UGC products), we exclude it from the average and say so.
  • Publication: scores are rounded to 0.5, published with sources and a review date, and tracked in a changelog.
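
Here is a minimal Python sketch of how these rules fit together. The stance labels Usable and Caution/Avoid come from the guide above; the stance ordering, the score‑to‑stance cut‑offs and the helper names are illustrative assumptions, not the published implementation.

```python
# Illustrative sketch of the scoring pipeline; thresholds are assumptions.
from typing import Optional

STANCE_ORDER = ["Avoid", "Caution", "Usable", "Recommended"]  # assumed, worst -> best

def round_to_half(x: float) -> float:
    """Scores are published at 0.5 resolution."""
    return round(x * 2) / 2

def overall_score(categories: dict[str, Optional[float]]) -> float:
    """Simple average of the 0-5 category scores; N/A categories (None) are excluded."""
    applicable = [v for v in categories.values() if v is not None]
    return round_to_half(sum(applicable) / len(applicable))

def cap(stance: str, ceiling: str) -> str:
    """A stance can never exceed the given ceiling."""
    return min(stance, ceiling, key=STANCE_ORDER.index)

def stance_for(score: float) -> str:
    """Assumed score-to-stance cut-offs, for illustration only."""
    if score >= 4.0:
        return "Recommended"
    if score >= 3.0:
        return "Usable"
    if score >= 2.0:
        return "Caution"
    return "Avoid"

def final_stance(categories: dict[str, Optional[float]],
                 ugc_heavy: bool, hard_fail: bool) -> str:
    stance = stance_for(overall_score(categories))
    # Floor rules: weak #4 (MH & Minority Safety) or #7A (Elections) cap at Usable.
    mh, elections = categories.get("4"), categories.get("7A")
    if ugc_heavy and mh is not None and mh < 2.5:
        stance = cap(stance, "Usable")
    if elections is not None and elections < 2.0:
        stance = cap(stance, "Usable")
    # Hard-fail caps (ads to minors, sensitive-data sale, ...) cap at Caution/Avoid.
    if hard_fail:
        stance = cap(stance, "Caution")
    return stance
```

In this sketch, a UGC‑heavy product with category #4 at 2.2 can never come out better than Usable, however strong its other scores.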

RTR malus (simple version)

  • What counts: regulator decisions/fines in the last 36 months (privacy/security, speech, elections, antitrust/DMA, child safety, environment, workers, tax).
  • How it bites: we convert each action into points (Minor/Moderate/Major), apply recency/repeat multipliers, and deduct up to −1.0 from the relevant category (sketched below).
  • Badge: we show Regulatory Heat (Low/Medium/High) on the company card.
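
As a rough illustration of the mechanics, here is a Python sketch. Only the shape of the calculation (severity points, recency/repeat multipliers, a deduction capped at 1.0 per category, a Low/Medium/High badge) comes from the description above; every number in it is an assumption.

```python
# Illustrative RTR malus sketch; all point values and multipliers are assumed.
from dataclasses import dataclass

SEVERITY_POINTS = {"Minor": 0.2, "Moderate": 0.5, "Major": 1.0}  # assumed

@dataclass
class RegulatoryAction:
    category: str     # e.g. "1 Data", "2 Money"
    severity: str     # "Minor" | "Moderate" | "Major"
    months_ago: int   # only the last 36 months count
    repeat: bool      # repeat offence in the same area

def rtr_deduction(actions: list[RegulatoryAction], category: str) -> float:
    """Total deduction applied to one category, capped at 1.0."""
    total = 0.0
    for a in actions:
        if a.category != category or a.months_ago > 36:
            continue
        recency = 1.0 if a.months_ago <= 12 else 0.5  # assumed: recent cases bite harder
        repeat = 1.5 if a.repeat else 1.0             # assumed: repeats bite harder
        total += SEVERITY_POINTS[a.severity] * recency * repeat
    return min(total, 1.0)

def regulatory_heat(all_points: float) -> str:
    """Assumed mapping from total points across categories to the card badge."""
    if all_points >= 2.0:
        return "High"
    if all_points >= 0.5:
        return "Medium"
    return "Low"
```

Under these assumed numbers, a single Major decision from the past year against a repeat offender (1.0 × 1.0 × 1.5) already hits the −1.0 cap for that category.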

Hard‑fail caps

  • Ads to minors
  • Sale/brokerage of sensitive data
  • Spyware‑style tracking
  • Major breach + negligent response
  • Sanctions/human‑rights violations

Any of the above can cap stance at Caution/Avoid.

How to read a company page

  • Top line: overall score & stance (with Regulatory Heat badge).
  • Bars: 10 categories (with sub‑scores for #7).
  • Why: 1–2 sentences per category explaining the score in simple terms.
  • Evidence: 3–5 links (policy docs/reports) + up to 2 credible third‑party sources, each with a date/version.
  • What you can do: a short “Keep / Limit / Replace” note and safer alternatives by category.

Independence & updates

  • No ads, no affiliates, no sponsorship.
  • Annual baseline review + updates after material changes (new law, major incident, product overhaul).
  • Versioned pages (v1.2, v1.3…) with a short changelog.

BEING THE PRODUCT ISN’T THE ISSUE; WHO PROFITS FROM YOU IS.