Every new technology that mediates information has triggered dire predictions about the decline of human knowledge. The printing press, the telegraph, radio, television, the internet, and search engines were all accused, in their day, of trivializing or homogenizing human thought. Large language models (LLMs) such as ChatGPT, Claude, and Gemini are the latest entrants into this lineage.
A recent Wall Street Journal article, "Will AI Choke Off the Supply of Knowledge?", warned that if LLMs dominate the business of answering questions, humans will lose incentives to create new knowledge, much as index funds allegedly eroded incentives for active stock investing. The article points to examples such as Stack Overflow’s declining engagement and Wikipedia’s drop in page views as evidence that AI tools “free ride” on original content without replenishing the supply.
But this conclusion is premature, and the analogy to index funds is flawed. A closer look shows that LLMs, far from reducing incentives for originality, may become one of the most powerful engines for new knowledge creation.
1. The Index Fund Analogy Misses the Point
The comparison between passive investing and LLMs sounds neat but fails under scrutiny.
Index funds do not set stock prices themselves; they track the market. Yet their rise has not eliminated price discovery. Research from the Bank for International Settlements and others finds that even as passive investing has grown, active investors still set prices, because market inefficiencies remain profitable to exploit. In other words, passive investing has not destroyed the incentive to discover value; it has arguably sharpened it.
Likewise, LLMs rely on the knowledge created by humans, but they cannot erase the incentives for original creation. Just as arbitrage ensures active investors will never disappear, the economic and social rewards for novelty, accuracy, and insight ensure that humans will continue to generate new knowledge. The rise of a tool that organizes and synthesizes does not remove the demand for discovery.
2. LLMs Are an Evolution of Search, Not a Replacement for Human Inquiry
Another core claim of the alarmist case is that LLMs are fundamentally different from earlier technologies like search engines. But the evidence suggests otherwise: LLMs are a natural extension of search technology.
Search engines already “free ride” on publisher content, aggregating and presenting it without compensating the original authors. Yet instead of killing off original content, search created massive new industries: digital publishing, blogging, online journalism, and SEO-driven marketing. As Hal Varian, Google’s chief economist, put it: “Search engines increase the value of content by reducing the cost of finding it.”
LLMs operate on the same principle. Instead of returning a ranked list of links, they provide synthesized, conversational answers. They are not replacing the fundamental function of search; they are extending it into a new interface. Google itself describes its Search Generative Experience (SGE) as “an evolution of search” that builds on decades of indexing and ranking technology.
Indeed, researchers at Stanford and Berkeley argue that LLM-based systems “should be viewed as part of the continuing trajectory of information retrieval technology,” not a discontinuity.
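The continuity is easy to see in a sketch. The snippet below is a minimal toy illustration, not any vendor’s actual pipeline: the tiny corpus, the term-overlap scoring, and the `llm_generate` stub are all hypothetical stand-ins. What it shows is that a conversational answer reuses the same retrieval step as a ranked-links result; only the final presentation changes.

```python
# Toy sketch: an LLM answer extends, rather than replaces, classic search.
# The corpus, scoring, and llm_generate stub are hypothetical assumptions.

corpus = {
    "doc1": "Index funds track a market index at low cost.",
    "doc2": "Active investors research firms to find mispriced stocks.",
    "doc3": "Price discovery depends on active trading.",
}

def llm_generate(prompt: str) -> str:
    # Stub for a real text-generation API; echoes its sources so the
    # example runs without any model.
    return "Synthesized answer based on:\n" + prompt

def rank(query: str) -> list[str]:
    """Classic search: score documents by term overlap, return ranked ids."""
    terms = set(query.lower().split())
    scores = {doc_id: len(terms & set(text.lower().split()))
              for doc_id, text in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

def search(query: str) -> list[str]:
    """The old interface: a ranked list of links."""
    return rank(query)

def answer(query: str) -> str:
    """The new interface: same retrieval step, then synthesis over the
    top documents instead of a list of links."""
    top_docs = [corpus[doc_id] for doc_id in rank(query)[:2]]
    return llm_generate(f"Question: {query}\n" + "\n".join(top_docs))
```

Both interfaces sit on the same retrieval layer and the same underlying documents, which is why the demand for indexable, discoverable content does not disappear when only the last step changes.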
The pattern is familiar: each time discovery tools improve, critics predict that creation will suffer. But the opposite has historically been true; better discovery tools create more demand for content to be discovered.
3. LLMs Incentivize, Rather Than Discourage, Original Content
Far from diminishing incentives to create, LLMs may strengthen them in three ways.
a. Dependence on Fresh Knowledge: LLMs degrade quickly without access to updated data. The phenomenon of “model collapse” shows that models trained repeatedly on their own outputs lose diversity and accuracy with each generation (see the toy simulation after this list). The practical fix is to keep fresh human-generated content in the training mix. This ensures that content creation remains not only valuable but indispensable to the AI ecosystem.
b. Pressure for Differentiation: If AI systems make general knowledge more accessible, the premium on specialized, original, or proprietary knowledge only rises. Businesses, academics, and creators seeking visibility will double down on producing work that stands apart. Examples include research not yet summarized by LLMs, proprietary data unavailable to general crawlers, or deeply contextual insights that a model cannot fabricate.
c. New Monetization Channels: Far from free-riding, LLM developers are already paying for content. OpenAI, Anthropic, and Google have struck licensing deals with publishers such as the Associated Press, Axel Springer, and Reddit. These partnerships create direct economic incentives for content production, much like Google AdSense did in the search era. Over time, publishers who offer unique, high-value knowledge are positioned to benefit from new revenue streams.
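The collapse dynamic in point (a) can be made concrete with a toy simulation. This is a stylized sketch, not actual LLM training: the “model” is just a Gaussian fit, and the sample size of 20 and the 50 generations are arbitrary choices for illustration.

```python
import numpy as np

# Toy "model collapse" simulation: each generation fits a Gaussian to the
# previous generation's outputs, then emits only synthetic samples.
rng = np.random.default_rng(0)

# Generation 0 trains on "human" data: a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(1, 51):
    # "Train": fit the toy model to whatever data is currently available.
    mu, sigma = data.mean(), data.std()
    # "Publish": the next generation sees only this model's outputs.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# sigma drifts toward zero: each fit-and-resample cycle sheds a little of
# the tails, so diversity shrinks until the distribution degenerates.
# Mixing fresh human-generated data into each generation arrests this.
```

Running it shows the spread of the distribution shrinking generation after generation, which is the collapse in miniature: the fix, here as in practice, is to keep injecting fresh human data rather than training purely on synthetic output.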
4. Evidence from History: Knowledge Creation Flourishes with New Platforms
History suggests that improved aggregation or distribution tools do not kill creativity. Overwhelmingly, they expand it.
- Search Engines (2000s): Critics feared Google would steal traffic. Instead, SEO and digital publishing became billion-dollar industries.
- Wikipedia (2000s–2010s): Despite fears it would destroy scholarly research, it became a complementary reference source, while academic publishing output reached record highs.
- Stack Overflow (2010s): While AI tools may reduce simple Q&A traffic, software development knowledge-sharing has migrated to GitHub Discussions, Discord, and other forums. Content creation doesn’t disappear; it adapts to new channels.
In each case, the pattern is the same: when access improves, content creation diversifies and expands. LLMs fit this trajectory.
5. The Real Risk Is Not Knowledge Decline
The real risk is not that humans will stop creating knowledge, but that institutions may be slow to adapt. Universities, publishers, and platforms must rethink incentive structures, just as they did during the rise of the internet. Academic tenure systems, advertising models, and open-source communities will need to evolve alongside LLMs.
But evolution is not collapse. Just as the printing press ultimately expanded human knowledge despite fears of “too many books,” LLMs will expand the reach and value of human insights.
Conclusion: LLMs Will Make Knowledge Markets More Dynamic
The alarmist view that LLMs will “dumb down” the web misunderstands both the economics of incentives and the history of information technology. Index funds didn’t kill stock markets. Search engines didn’t kill publishing. And LLMs won’t kill knowledge creation.
Instead, they will shift incentives toward novelty, differentiation, and quality, while opening up new monetization pathways. Far from a blender that purées knowledge into mush, LLMs may become the very engine that pushes humans to create new, original, and valuable ideas.
The challenge ahead is not to prevent LLMs from existing, but to ensure that we evolve quickly enough to harness their potential for knowledge growth.