One of the central paradoxes of our time is that people feel the world is both accelerating and standing still. To navigate this contradiction, newsrooms must help their audiences make sense of what artificial intelligence means not just for technology, but for their daily lives. The news organizations that can explain the profound changes in our workplaces, communities, and relationships will find themselves more essential than ever.
But to explain AI, journalism must first understand it. The intellectual foundations of artificial intelligence—from its collaborative origins to its current limitations—reveal everything we need to know about the role humans must play in a world reshaped by machines.
The Man-Computer Symbiosis
In 1960, J.C.R. Licklider wrote a paper that would fundamentally shape the next six decades of computing. Licklider wasn't just a theorist; he was the architect of ARPANET (the internet's predecessor) and the visionary who funded the research that created personal computers, graphical interfaces, and the foundations of modern AI. His influence on technology rivals that of any single individual. I got to know him through Mitchell Waldrop's very thorough biography, The Dream Machine.
Licklider envisioned a future built on "Man-Computer Symbiosis," a partnership that would amplify human thinking, not replace it. He saw computers as tireless assistants, handling the routine cognitive work of processing and pattern-matching, freeing humans to do what they do best: set goals, exercise judgment, and navigate ambiguity. The computer would be an extension of human intelligence, not its successor.
His core insight was that processing information and creating understanding are different things. A machine can analyze a library of data; only a human can decide what it means. That distinction is where journalism finds its modern purpose.
The Activity vs. The Function
Too often, we confuse the activity of journalism with its function. The activity involves the craft: reporting, writing, editing, publishing. The function is the purpose: to help society make sense of itself.
AI is already proving it can optimize the activity. It can transcribe interviews, analyze documents, and draft summaries with incredible efficiency. But the function remains an exclusively human domain: weighing competing values, navigating ethical complexity, and translating abstract data into lived meaning. An investigative report that changes your mind doesn't just present information; it builds a framework for understanding its significance. Processing information is fundamentally different from interpreting it.
The Stagnation Paradox and the Architect's Warning
This technological moment is defined by a strange paradox. Venture capitalist and early Trump backer Peter Thiel argues that since the 1970s, our society has been in a "great stagnation," with meaningful progress largely confined to the "world of bits." Yet within that digital world, the last few years have felt like a breathtaking explosion. As Eugene Charniak documents in "AI & I: An Intellectual History of Artificial Intelligence," beginning in 2017, the AI field ignited:
Scale brought models with hundreds of billions of parameters
Imagery saw diffusion models turn text into art
Biology was cracked when AlphaFold solved the protein folding problem
Conversation became uncannily human with ChatGPT
Just as the explosion reached its peak, a crucial warning came from within. Charniak notes that AI's greatest advances often come from recognizing the limitations of previous approaches. The transformer revolution that began in 2017 seemed to solve fundamental problems in language understanding, but Yann LeCun, a pioneer of deep learning whose work helped lay the groundwork for this era, now argues it represents a conceptual dead end. His prediction that current models will be "largely obsolete within five years" reflects the kind of internal paradigm critique that Charniak's historical analysis suggests precedes major shifts in the field. The statistical approach LeCun helped establish may be encountering the same fundamental limits that earlier symbolic approaches faced.
What Statistics Can and Cannot Capture
LeCun's warning points to the central limitation of modern AI: it excels at statistics, not truth. These systems identify correlations in language brilliantly, but they cannot grasp causation.
Peter Thiel offers a more provocative diagnosis. He challenges the Silicon Valley ideology that more intelligence is always the answer, arguing that our real problem is a societal inability to use the intelligence we already have.
On Ross Douthat's Interesting Times podcast, Thiel made the economic argument that smarter people do not necessarily perform better, not because intelligence fails, but because our society struggles to apply it effectively. If a society cannot harness the wisdom of its most brilliant humans, why would more powerful, non-human intelligence solve its problems? The bottleneck lies not in processing power but in institutional courage and social wisdom. This is where journalism operates. An AI can process data on corruption, but only human reporting can build a societal framework for why it matters and what must be done.
The Symbiosis Alternative
This landscape—explosive progress hitting fundamental walls—makes Licklider's symbiotic vision more relevant than ever. The very technique that made ChatGPT successful, Reinforcement Learning from Human Feedback, exemplifies this partnership: human reviewers provided judgment to guide machine processing.
This is the model for journalism: use AI to analyze datasets, but rely on reporters to understand their meaning. Use algorithms to find trends, but depend on editors to decide why they matter.
The Strategic Choice
The choice for media leaders goes beyond technical decisions—it's fundamentally about purpose. Does journalism use AI's efficiency to become another cog in a stagnant information culture? Or does it use these tools to reclaim its role as society's sense-making engine?
Licklider's dream was to augment human intelligence, not replace it. The question now is whether we will use these machines to deepen our understanding of the world, or to retreat from it.
The machines that can process information are already here. The humans who can create understanding from that information have never been more essential. Journalism's greatest opportunity in the AI era lies not in competing with algorithms, but in doing what only humans can do: help people navigate the paradoxes of our time.
What matters most now is that journalists and newsroom leaders become AI-fluent, both for reporting on AI's impact and for working effectively with these tools. For those ready to begin this journey, I've compiled a comprehensive AI Learning Guide for Journalists with courses, resources, and practical implementation strategies. Feel free to reach out with questions or suggestions.
I would also like to highlight a few fellowship and training opportunities with upcoming deadlines. These are suitable for newsroom colleagues who have already dedicated some time to this field.
AI Learning & Fellowship Opportunities
An Editor's Guide to AI Course
For managing and executive editors covering AI story assessment, framing, and avoiding common coverage pitfalls.
Deadline: July 31
AI Accountability Fellowships - Pulitzer Center
Supports journalists investigating how governments and corporations use AI in policing, medicine, hiring, and social services.
Deadline: August 11
AlgorithmWatch Reporting Fellowship
Research fellowship examining the relationship between artificial intelligence and power structures.
Deadline: September 20
Reply with your thoughts or email me directly at aliasad.mahmood@gmail.com—I'm still finding my voice and genuinely want your feedback.
Image generated using AI to illustrate concepts discussed in this article.