(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
Date: 2025-06-14
Abstraction, AI, and the Leaky Future of Learning
A Critical Summary and Evaluation of Brad DeLong’s Substack Essay
In his April 29, 2025, Substack post, economist J. Bradford DeLong reflects on the challenges and opportunities artificial intelligence (AI) poses to higher education and cognitive labor.¹ Drawing on essays by Johan Fourie and Steven Sinofsky, DeLong positions large language models (LLMs) not as brains, but as tools—layers of abstraction that enhance and obscure cognition in equal measure. This essay summarizes DeLong’s argument and evaluates its strengths and weaknesses, particularly in the context of pedagogy, epistemology, and technological ethics.
AI, Education, and the End of the Essay
Johan Fourie, writing in Our Long Walk, documents a shift in student behavior. Class attendance plummeted to 20% as students increasingly relied on AI to generate essays in under ten minutes.² He proposes abandoning traditional assessments in favor of training students in the scientific method—how to inquire, reason, and persuade. Rather than panic over declining outputs, Fourie urges educators to rethink their goals: not to certify knowledge but to cultivate intellectual frameworks.
DeLong echoes this pedagogical shift. Reflecting on his American Economic History syllabus, he outlines a set of guiding questions: Why did the U.S. economy develop as it did? What structural forces enabled its rise, and what institutional failures produced inequality and crisis?³ The deeper lesson, DeLong argues, lies not in memorizing answers but in learning to construct them. A future-proof curriculum may need to focus on how to frame questions, test hypotheses, and argue persuasively—skills less vulnerable to AI automation.
Steven Sinofsky offers historical perspective. In Hardcore Software, he compares AI to previous cognitive tools—from typewriters to word processors to personal computers.⁴ Each abstraction generated resistance. When word processors emerged, faculty feared students would lose writing ability. They were wrong. Word processors eliminated drudgery and freed up intellectual space. AI, Sinofsky contends, is the next abstraction. It removes yet another layer of friction, allowing people to operate at a higher cognitive tier.
DeLong, however, introduces a critical caveat: all abstraction layers leak. Borrowed from software engineering, the Law of Leaky Abstractions (coined by Joel Spolsky) reminds us that while abstractions simplify complexity, they inevitably fail to contain it.⁵ When breakdowns occur—as they always do—users must understand the underlying mechanics to respond intelligently. In education, this means students must still engage with fundamentals. Those who outsource cognition entirely to AI may develop brittle intellectual frameworks—easy to use but difficult to adapt.
DeLong links this to philosopher Alfred North Whitehead’s “fallacy of misplaced concreteness,” where abstractions are mistaken for reality.⁶ Without the ability to interrogate what lies beneath, students risk taking surface representations as truth. Thus, the future of education must not only embrace AI tools but also teach their limits.
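The leaky-abstraction principle is easy to state and easy to see in practice. A standard illustration (not one DeLong uses, but a textbook case): floating-point numbers abstract over binary representation, and the abstraction leaks because decimal fractions like 0.1 have no exact binary encoding. A user who has only ever seen the abstraction is baffled; one who knows the underlying mechanics knows what to do.

```python
import math

# Floating-point arithmetic abstracts away binary representation,
# but the abstraction leaks: 0.1 and 0.2 cannot be stored exactly
# in binary, so their sum is not exactly 0.3.
total = 0.1 + 0.2
print(total)             # 0.30000000000000004
print(total == 0.3)      # False -- the abstraction has leaked

# Understanding the mechanics underneath suggests the right response:
# compare within a tolerance rather than demanding exact equality.
print(math.isclose(total, 0.3))  # True
```

This is precisely the pattern DeLong worries about in education: the tool works until it doesn't, and recovering intelligently requires exactly the fundamentals the tool let you skip.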
Evaluation: What DeLong and His Sources Get Right—and What They Miss
DeLong, Fourie, and Sinofsky articulate a compelling case for rethinking education in an AI-saturated world. They are correct that:
⁃ AI is Cognitive Infrastructure: Like calculators and compilers, AI tools extend cognitive reach. They do not eliminate thinking but redirect it. As Sinofsky observes, abstraction is not loss—it is evolution.⁴
⁃ The Essay is Dying, but Thinking Endures: Fourie rightly sees the traditional essay as obsolete, yet he reframes this loss as an opportunity to emphasize process over product.²
⁃ Teaching Method over Memorization: DeLong’s emphasis on inquiry and persuasion—how to ask, explore, and convince—is timely and well-founded.³
⁃ Abstractions Must Be Respected and Resisted: DeLong’s invocation of the leaky abstraction principle is crucial. Simplifications are necessary, but without fluency in their underlying systems, students (and societies) become vulnerable.⁵
However, there are significant limitations to their arguments:
⁃ Self-Directed Learning is Uneven: The optimistic vision presumes a high degree of intrinsic motivation and capability. Many students require structure, scaffolding, and external motivation. Replacing deliverables with open-ended inquiry may leave the least prepared further behind.
⁃ Assessment Still Matters: While Fourie critiques traditional grading, he offers no clear alternative. In large, diverse institutions, assessment provides a shared metric of accountability. Its wholesale removal risks incoherence.²
⁃ AI is Not Neutral: None of the authors sufficiently address the biases embedded in LLMs. These models reflect the structures, omissions, and assumptions of their training data. Treating them as universal tutors ignores the epistemic and cultural risks they carry.
⁃ Equity and Access are Ignored: The political economy of AI is largely absent from the discussion. Will elite students benefit from custom AI tutors while others are left with generic outputs? Will AI exacerbate educational stratification rather than democratize knowledge?
Conclusion
Brad DeLong’s essay is a timely and thoughtful reflection on the pedagogical implications of AI. His framework—supported by Fourie and Sinofsky—offers a productive reimagining of education centered on inquiry, abstraction, and intellectual resilience. Yet this framework must be extended. Without attention to equity, institutional feasibility, and the inherent politics of abstraction, the transformation he anticipates may reproduce the very problems it aims to solve. The task ahead is to embrace AI not as an oracle, but as a mirror—one that reveals both our cognitive capacities and their all-too-human constraints.