Jul 2025
This influence was applied...to punish whistleblowers, chill dissent, and incentivize complacency in the face of overblown claims masked in scientific authority. It is here, in these darker histories, that we confront the steep cost of capture — whether military or industrial — and its perilous implications for academic freedom and knowledge production capable of holding power to account.
In The Steep Cost of Capture, Meredith Whittaker — President of the Signal Foundation, former NYU professor, and former head of Google’s Open Research Group — argues that big tech money has systematically biased research into the societal costs and benefits of LLMs. She encourages academics and tech workers to call out this systemic bias, openly criticize LLMs' shortcomings, and organize politically.
I read this article as part of an effort to re-acquaint myself with the state of LLMs and the debates surrounding them. Meredith Whittaker’s background made me especially interested in her views.
Meredith makes a five-part argument:
As you’d expect from a short article, this is a simple argument for more skepticism during an obvious boom, more academic freedom, and more focus on regulating and pricing the externalities of new technology.
At that altitude, I’m on board. But I think Meredith’s argument would have been more compelling if she had directly addressed two inconvenient facts.
Against a backdrop of pessimism about technology, it’s easy to nod along as Meredith paints LLMs as a cynical ploy by wealthy, entrenched incumbents. But the brand most associated with LLMs isn’t Google’s Gemini, Amazon’s Q, Apple’s Siri, or Microsoft’s Copilot — it’s OpenAI’s ChatGPT.
Sure, tech giants did much of the early research into LLMs. But the confluence of compute, data, and talent required to develop LLMs isn’t only available at tech giants: OpenAI has it, Anthropic has it, Mistral has it, and a crop of newer startups seems poised to develop it too. The barriers to entry are high in absolute terms, but modest relative to the perceived size of the market and the scale of today’s venture funds. And critically, frontier models clearly don’t require decades of proprietary data.
This doesn’t reduce researchers’ dependence on industry goodwill. But to me, it does change the moral valence of the story: rather than a group of monopolists bamboozling the rest of us, the current boom starts to look more like both incumbents and new players racing to commercialize a capital-intensive new technology. That’s messy, but it’s how every capitalist society funds disruptive technology.
Meredith also suggests that funneling research money into LLMs, which rest on decades-old algorithms, is sucking attention away from more promising new research. Obviously, she knows much, much more than I do about LLMs and machine learning. But my impression is that LLMs are useful precisely because they can take advantage of more compute and more data without theoretical breakthroughs.
Rich Sutton, winner of the 2024 Turing Award, famously called this the bitter lesson:
The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
This is good! We can spend money to improve models when we think the results might justify the cost.
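The scaling-law literature makes that tradeoff concrete. As a rough sketch (every constant below is invented for illustration, not fitted to any real model), a Chinchilla-style loss curve predicts how much improvement extra parameters and extra data buy:

```python
# Illustrative Chinchilla-style scaling law: predicted loss as a function
# of parameter count N and training tokens D. The constants here are
# hypothetical placeholders; real values come from fitting experiments.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.7, 400.0, 4000.0   # hypothetical irreducible loss and scale terms
    alpha, beta = 0.34, 0.28       # hypothetical scaling exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling both parameters and data lowers the predicted loss smoothly --
# no theoretical breakthrough required, just more compute and more data.
smaller = predicted_loss(1e9, 2e10)
larger = predicted_loss(2e9, 4e10)
print(larger < smaller)  # prints True: the bigger run has lower predicted loss
```

The point of the sketch is the shape, not the numbers: loss falls predictably as spending rises, which is exactly what lets a lab decide whether a bigger training run might justify its cost.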
Again, this changes the moral valence of the story. Frontier labs haven’t just rebranded dusty old algorithms; they’ve shown the world how much we can do with more computing horsepower — and they’ve sparked investment and innovation in hardware, data centers, and software. Nineteenth-century railroad tycoons would recognize the shape of this boom (and the possibility of a corrective financial crisis).
So by all means, let’s be appropriately skeptical. Let’s support unbiased research (go METR!). Let’s engage with the politics of disruptive technology. But let’s also acknowledge that there’s something incredible taking shape right before our eyes.