I love AI. Why doesn’t everyone? – Review

Publisher: Noahpinion (Substack)

This reads like techno‑romance for gullible elites. He comes off like a lovesick grad student too emotionally invested to tolerate counterevidence. Performatively rational, actually triggered.

That title is a faceplant. “I love AI. Why doesn’t everyone?” isn’t analysis. It’s a whine dressed up as a question, and it presumes the conclusion. If you’re going to sneer at public fear, bring evidence about attitudes, incentives, and power. He didn’t.

What we get is a therapy session masquerading as an op‑ed. The guy writes like he swapped critical faculties for beta access. Every caveat melts into a love letter to his chatbot. Dissenters get branded as “motivated” or “triggered.” The publisher clearly wanted prestige clicks and let it run as-is, complete with sloppy curation that screams bias: buried and inconsistent footnotes, a chart captioned in garish Comic Sans, and a sidebar randomly set in Times New Roman. If the design can’t keep its story straight, don’t expect the argument to.

His “debunkings” are an exercise in cherry‑picking. He leans on one convenient takedown and acts like it settles the environmental debate. It doesn’t. National water percentages tell you almost nothing about localized stress where hyperscalers actually sit. Timing matters. Training and inference load the grid in different ways. Siting near drought‑prone counties concentrates harm. Lifecycle carbon, mining, e‑waste, thermal cooling, transmission build‑out, grid congestion pricing. All waved away because a couple viral claims were sloppy. One clean counterexample doesn’t erase the system‑level costs.
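The aggregation point is worth making concrete. A toy calculation (every number below is hypothetical, invented purely for illustration) shows how a data‑center water footprint that rounds to nothing nationally can still dominate withdrawals in the one drought‑prone county where the facilities actually sit:

```python
# Hypothetical sketch: national averages can hide severe local water stress.
# ALL figures below are made up for demonstration -- not real measurements.

national_withdrawals = 1_000_000   # hypothetical total national water withdrawals
datacenter_use_national = 2_000    # hypothetical national data-center water use

# county: (total local withdrawals, data-center withdrawals) -- all hypothetical
counties = {
    "drought_county_a": (5_000, 1_500),    # where the hyperscalers cluster
    "wet_county_b": (500_000, 300),
    "everywhere_else": (495_000, 200),
}

# Nationally, data centers look negligible.
national_share = datacenter_use_national / national_withdrawals
print(f"national share: {national_share:.1%}")

# Locally, the picture inverts: the drought-prone county bears the load.
for name, (local_total, dc_use) in counties.items():
    print(f"{name}: {dc_use / local_total:.1%} of local withdrawals")
```

Under these made‑up numbers the national share is 0.2% while the drought‑prone county sits at 30% of local withdrawals, which is exactly why “X% of national water use” tells you almost nothing about siting harm.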

Labor gets the same shallow treatment. “No mass job losses yet” is not a serious argument. Automation hits tasks first, then roles, then regions. It shifts bargaining power, pushes whole cohorts onto worse rungs of the ladder, and concentrates rents with capital and whoever owns the models, data, and distribution. You don’t get to hand‑wave all that because a short‑run study didn’t show a collapse last quarter.

The structural risks he labels “motivated reasoning” are exactly the ones that require a spine. Compute and data monopolies. Surveillance and surveillance‑adjacent deployment. Opaque failure modes and liability dodges. Copyright and consent messes. Model access as a chokepoint for entire industries. You can love the tech and still confront governance; he refuses and calls it culture war.

And yes, the anecdote parade is embarrassing. “My little robot friend” is cute, but your personal productivity bump is not public policy.

There’s also nothing concrete on what to do. No serious plan for worker transition, audits, model access rules, compute taxation, or liability for hallucination‑driven harm and deepfakes. Just vibes and PR advice to make AI look friendlier. Empty calories.

If he wanted to persuade skeptics, he’d bring representative surveys with causal work on attitude gaps by country and class, localized grid and water analysis instead of national averages, real distributional economics, and hard policy. Instead we get beta‑tester diary entries and a few smug debunks. Fun read, bad argument. The headline promises a why. The article delivers a crush note.
