The core argument: Cory Doctorow's influential critique, "AI companies will fail. We can salvage something from the wreckage," conflates two separate questions: (1) Is AI investment a bubble? (2) Do AI systems work? These require different analyses and lead to different policy responses.
The evidence: Productivity studies show 14–55% gains on specific tasks. Scientific benchmarks show breakthrough results in protein folding, materials discovery, and code generation. Labor markets show continued growth in technical occupations. But the evidence has significant limitations: effects may not generalize, and uncertainty remains high.
The bottom line: AI likely exhibits both bubble characteristics and measurable capabilities on specific tasks. History shows this combination is possible—railways and the internet followed similar patterns. The appropriate response is adaptive governance, not dismissal.
For decision-makers: Don't treat AI as pure hype or accept industry narratives uncritically. Invest in empirical assessment. Prepare for multiple scenarios. Focus governance on deployment practices, not capability denial.
I. Introduction
Cory Doctorow argues that AI is primarily a financial bubble—the latest scheme by tech monopolies to maintain growth-stock valuations. His essay "AI companies will fail. We can salvage something from the wreckage" makes a compelling case that corporate incentives drive AI hype, creating "reverse centaurs" where humans take the blame for automation failures.
Doctorow's analysis of bubble dynamics is persuasive. His warnings about "accountability sinks"—where human supervisors shoulder responsibility for AI errors—deserve serious attention.
But the essay conflates two separate questions:
- Is AI investment a bubble? Are valuations detached from fundamentals? Is speculation driving capital allocation?
- Do AI systems work? Can they perform defined tasks better than prior methods or human baselines?
These questions require different evidence and lead to different conclusions. Proving one does not prove the other.
What I Mean by "Capabilities"
Throughout this essay, "AI capabilities" means: measurable performance on defined tasks, benchmarked against prior methods or human performance, with independently verified results.
This definition is narrow by design. It does not imply consciousness, understanding, or guaranteed generalization. A system demonstrates capability if it performs Task X better than Baseline Y under Conditions Z, as measured by Metric M. Nothing more.
This framing is falsifiable. If productivity studies showed no gains, if benchmarks showed no improvement, if adoption data showed no uptake—that would be evidence against capability claims.
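That falsifiability can be made concrete. A minimal sketch, with entirely illustrative scores (none of the numbers come from the studies cited later):

```python
# Hedged sketch: the claim "the system beats Baseline Y on Task X under
# Conditions Z, by Metric M" reduces to a comparison that data can falsify.
# All scores below are illustrative, not drawn from any cited study.

def capability_demonstrated(system_scores, baseline_scores, min_lift=0.0):
    """True if the system's mean metric beats the baseline's by more than
    min_lift; a zero or negative lift falsifies the capability claim."""
    sys_mean = sum(system_scores) / len(system_scores)
    base_mean = sum(baseline_scores) / len(baseline_scores)
    return (sys_mean - base_mean) > min_lift

# Illustrative blind-graded quality scores on the same set of tasks:
system = [0.74, 0.81, 0.69, 0.77]
baseline = [0.62, 0.70, 0.65, 0.66]
print(capability_demonstrated(system, baseline))  # True for these toy numbers
```

A real evaluation would also need a significance test and held-out tasks; the point here is only that the claim has a testable shape.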
Why This Matters
Different diagnoses lead to different responses:
- If AI is pure hype: Wait for the bubble to pop. Minimal governance needed.
- If AI has real but uneven capabilities: Adaptive governance. Empirical monitoring. Prepare for multiple scenarios.
- If AI works but incentives are misaligned: Active regulation. Labor protections. Accountability frameworks.
Getting the diagnosis wrong has costs. Treating real capabilities as hype leaves us unprepared. Treating hype as real wastes resources on unnecessary governance. The evidence, examined below, most supports the middle scenario: heterogeneous capabilities with high uncertainty.
II. Doctorow's Argument—and Where It Breaks Down
To critique fairly, we must first understand clearly. Here is Doctorow's argument in structured form:
The Premises
1. The Growth-Stock Trap. Tech monopolies derive value from growth expectations. When growth stalls (Google owns 90% of search—where's the growth?), valuations collapse. Facebook lost $240 billion in early 2022 when investors saw slowing growth.
2. The Bubble Imperative. To maintain growth narratives, tech companies promote speculative bubbles: crypto, NFTs, the metaverse, now AI.
3. AI as the Current Bubble. AI is the latest vehicle for this strategy—Morgan Stanley's "$13 trillion growth story."
4. The Reverse Centaur Prediction. AI won't replace workers; it will create "reverse centaurs"—humans babysitting unreliable machines, catching errors, taking blame. The radiologist checking AI scans doesn't have an easier job; she has a harder one with more liability.
5. AI Art as Marketing. Generative AI serves to demonstrate capabilities for investors, not to build real business models.
The Conclusion
Therefore, AI capabilities are largely illusory. It's "just a word-guessing program." Some useful tools will survive, but grand claims are fraudulent.
Where the Logic Fails
Premises 1–3 concern market dynamics. They may be accurate. But Premises 4–5 make empirical claims about AI capabilities—and the connection between "AI investment is a bubble" and "AI capabilities are illusory" is never established.
The missing link: Why would financial bubble dynamics tell us anything about whether the technology works? Doctorow doesn't say. The argument moves from "corporate incentives drive hype" to "the technology doesn't work" without showing why one implies the other.
A thought experiment: Imagine AI developed with modest investment and no hype—steady progress, realistic claims, boring press coverage. Would we assess its capabilities differently? We should. The evidence for what AI can and cannot do is independent of investor sentiment.
III. Historical Precedent: Bubbles Can Surround Real Technologies
If bubbles only formed around useless technologies, Doctorow's argument would hold. History tells a different story.
Railways: Massive Bubble, Transformative Technology
Britain's 1840s Railway Mania was a textbook bubble. Parliament approved 9,000+ miles of track. Investment reached £200 million—half of GDP. When the bubble burst in 1847, hundreds of companies failed.
But railways worked. By 1860, Britain had 10,000+ miles of operational track. Railways became the industrial backbone, enabling national markets, urban growth, and economic integration. The U.S. saw the same pattern: the Panic of 1873 followed railroad overinvestment, yet by 1900 America had the world's largest rail network.
Key point: The bubble revealed poor capital allocation, not that trains couldn't move freight.
The Dotcom Bubble: The Closest Parallel
The NASDAQ rose from 1,000 to 5,000, then crashed 78% by October 2002. Pets.com and Webvan became punchlines. By 2003, consensus held that the internet was overhyped and e-commerce a niche.
The data told a different story:
| Year | E-commerce Sales | % of Total Retail |
|---|---|---|
| 2000 | $27.6 billion | 0.9% |
| 2005 | $86.3 billion | 2.4% |
| 2010 | $165.4 billion | 4.2% |
| 2015 | $341.7 billion | 7.3% |
| 2020 | $791.7 billion | 14.0% |
| 2023 | $1,119.0 billion | 15.4% |
The bubble corrected market excesses. The technology kept growing.
Amazon dropped 95%—from $113 to $5.51. Today it's worth $2 trillion. The bubble showed 1999 valuations were disconnected from cash flows. It didn't show e-commerce couldn't work.
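The table's endpoints imply steady compound growth straight through the crash. A quick check, using only the Census figures above (endpoints only; the actual path was anything but smooth):

```python
# Compound annual growth of U.S. e-commerce sales between the table's
# endpoints: $27.6B (2000) to $1,119.0B (2023). The intervening path
# included the dotcom crash itself.
start, end = 27.6, 1119.0      # billions of dollars, from the Census table
years = 2023 - 2000
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 17.5%
```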
Fiber Optics: Doctorow's Own Example
Doctorow himself notes this pattern:
"WorldCom stole billions from everyday people by defrauding them about orders for fiber optic cables. The CEO went to prison and died there. But the fiber is out."
Exactly right. The fraud was real. The bubble was real. And the technology was also real. These are separate facts.
The Historical Pattern
Economic historians identify a recurring pattern for "general purpose technologies" (Jovanovic & Rousseau, 2005; Perez, 2003):
- Capability demonstrated in limited domains
- Capital floods in, anticipating transformation
- Valuations detach from cash flows; bubble forms
- Bubble bursts; weak firms fail
- Technology continues developing; productivity gains materialize over decades
This pattern appeared with electricity, IT, railways, autos, and aviation. The question: Does AI fit this pattern, or the cold-fusion pattern?
But: Survivorship Bias Is Real
We remember bubbles around technologies that succeeded. Intellectual honesty requires acknowledging failures:
- Nuclear aviation (1950s–60s): Heavy investment. Never flew commercially.
- Supersonic transport: The Concorde worked but proved economically unviable.
- Cold fusion (1989): Intense excitement. Results didn't replicate. Field collapsed.
- Segway (2001): "Will transform cities." Modest niche adoption.
- Google Glass (2013): "Next computing platform." Discontinued in two years.
The lesson: "Some bubbles surrounded real technologies" doesn't prove "this bubble surrounds real technology." History provides a framework for assessment, not a verdict. We need to examine AI evidence on its own terms.
How to Tell the Difference
What separates railways-and-internet from nuclear-aviation-and-cold-fusion?
- Demonstrated capability: Railways moved goods before the mania. The internet transmitted data before the bubble. Nuclear aircraft never flew; cold fusion never replicated.
- Scalability evidence: Early wins suggested broader application.
- Independent verification: Results replicated by parties without financial interest.
The question for AI is not whether investment shows bubble characteristics—it probably does. The question is whether AI shows the capability markers that distinguished successful technologies from failures. That requires examining evidence, not financial dynamics.
IV. The Evidence: What AI Can and Cannot Do
A. Productivity Studies
Four rigorous studies measure AI's impact on real work:
Writing Tasks (MIT, 2023)
- 37% faster task completion with AI
- 18% higher quality (blind-graded)
- Largest gains for initially lower performers
Source: Noy & Zhang (2023), Science
Customer Service (Stanford/MIT, 2023)
- 14% more productive on average (issues resolved/hour)
- 35% improvement for novice workers
- Minimal change for experienced workers
Source: Brynjolfsson et al. (2023), NBER
Software Development (GitHub, 2022)
- 55% faster task completion with Copilot
- 88% reported reduced frustration
- Study: 95 developers implementing an HTTP server
Source: Peng et al. (2023), arXiv
Consulting (BCG, 2023)
758 BCG consultants in a randomized controlled trial:
- Within AI's capability frontier: 25% faster, 40% higher quality
- Beyond the frontier: Performance degraded
This is critical. AI helps within certain task boundaries but hurts outside them—and users can't easily tell where the boundary is.
Source: Dell'Acqua et al. (2023), Harvard Business School
Limitations (Be Honest About What We Don't Know)
- Narrow tasks: Press releases, customer scripts, HTTP servers, consulting deliverables. Will gains generalize?
- Short horizons: Immediate productivity, not long-term skill effects
- Selection effects: Elite consultants, volunteer developers—not representative samples
- Novelty effects: Early gains may fade as easy wins are exhausted
- Publication bias: Null results less likely to be published
- Industry funding: GitHub, Microsoft have commercial interest in positive findings
- The "jagged frontier": The BCG degradation finding is as important as the gains. Users can't tell where the frontier is.
Bottom line: AI shows measurable gains on specific tasks under controlled conditions. We don't know if gains generalize, persist, or exceed deployment error costs.
B. Labor Market Data
Doctorow cites 500,000 tech layoffs over three years as evidence of AI-driven displacement (Schouten, 2025). Bureau of Labor Statistics data provide additional context, though their interpretation requires care.
Software Developers (SOC 15-1252)
| Year | Employment | Median Annual Wage |
|---|---|---|
| 2020 | 1,517,400 | $110,140 |
| 2021 | 1,622,600 | $120,730 |
| 2022 | 1,717,100 | $124,200 |
| 2023 | 1,795,300 | $127,260 |
Change 2020–2023: +18.3% employment, +15.5% wages
Data Scientists (SOC 15-2051)
| Year | Employment |
|---|---|
| 2021 | 113,300 |
| 2022 | 140,900 |
| 2023 | 169,500 |
Change 2021–2023: +49.6%
Overall Computer and Mathematical Occupations
| Year | Employment |
|---|---|
| 2019 | 4,552,550 |
| 2023 | 4,998,340 |
Change: +9.8%
Source: Bureau of Labor Statistics, Occupational Employment and Wage Statistics (OEWS)
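The percent changes quoted above follow directly from the table values; a quick recomputation using the BLS figures above:

```python
# Recompute the stated percent changes from the BLS table values above.
def pct_change(old, new):
    return (new - old) / old

dev_emp  = pct_change(1_517_400, 1_795_300)  # software developers, 2020-2023
dev_wage = pct_change(110_140, 127_260)      # median wage, 2020-2023
ds_emp   = pct_change(113_300, 169_500)      # data scientists, 2021-2023
all_cm   = pct_change(4_552_550, 4_998_340)  # computer & math occ., 2019-2023
print(f"{dev_emp:.1%} {dev_wage:.1%} {ds_emp:.1%} {all_cm:.1%}")
# 18.3% 15.5% 49.6% 9.8%
```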
Limitations (What This Data Can't Tell Us)
- Lagging indicator: Doctorow's claim is forward-looking. Current data don't address future displacement.
- Composition effects: Data scientist growth may reflect AI development demand, not AI's broader labor impact.
- Complementarity now ≠ substitution later: Current augmentation doesn't preclude future displacement.
- The counterfactual: Did tech employment grow less than it would have without AI? We can't observe that.
Bottom line: As of 2023, widespread tech displacement hadn't materialized. This is consistent with either (a) AI can't displace workers, or (b) displacement effects haven't manifested yet. The data don't tell us which.
C. Scientific Benchmarks
Important caveat: Most of these are specialized systems, not the LLMs ("spicy autocomplete") that dominate public debate, so they carry different evidentiary weight.
Protein Structure (AlphaFold)
- 2018: Best systems ~60 GDT accuracy
- 2020: AlphaFold2 achieved ~92 GDT—near experimental accuracy
- 1.8M+ researchers accessed the database; 25,000+ citations
- Drug discovery programs now use AlphaFold predictions
Materials Discovery (GNoME)
- Predicted 2.2 million new stable crystal structures
- Equivalent, per DeepMind's estimate, to nearly 800 years of discovery at prior rates
- 736 structures independently synthesized and validated
Mathematical Reasoning (IMO 2024)
- Solved 4 of 6 International Math Olympiad problems
- 28/42 points (one point below the gold-medal threshold of 29, meeting the silver standard)
- Proofs verified by IMO judges—not hallucination or memorization
Code Generation (SWE-bench)
Real GitHub issues, not toy problems:
| Date | Best Model | Success Rate |
|---|---|---|
| Oct 2023 | GPT-4 | 1.7% |
| Mar 2024 | Claude 3 Opus | 4.8% |
| Jul 2024 | Various | 18–25% |
| Dec 2024 | Frontier models | 40–49% |
A roughly 25× improvement in 14 months. Still below expert humans on complex issues, but the trajectory is notable.
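The improvement factor can be checked from the table (range endpoints only; actual progress was stepwise, not smooth):

```python
# Improvement factor from the SWE-bench table: Oct 2023 baseline (1.7%)
# versus the Dec 2024 range (40-49%).
baseline = 0.017
dec_lo, dec_hi = 0.40, 0.49
print(round(dec_lo / baseline, 1), round(dec_hi / baseline, 1))  # 23.5 28.8
# Implied compounded monthly improvement if growth had been smooth (it wasn't):
months = 14
monthly = (dec_lo / baseline) ** (1 / months) - 1
print(f"{monthly:.0%} per month")  # 25% per month
```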
Critical Distinction: Different AI Types
| System | Type | Relevance to "AI Hype" Debate |
|---|---|---|
| AlphaFold, GNoME | Specialized architectures | Indirect—shows deep learning works, not that LLMs work |
| AlphaProof | Hybrid (LLM + verifier) | Moderate |
| SWE-bench models | General-purpose LLMs | Direct—tests what Doctorow calls "spicy autocomplete" |
AlphaFold solving protein folding doesn't prove ChatGPT can replace knowledge workers. SWE-bench is the relevant test—and results show real gains (25× improvement) alongside real limits (still below expert humans, fails on novel reasoning).
Bottom line: AI systems (broadly) achieve measurable results on specific problems. Evidence for general-purpose LLM capabilities—the focus of deployment discourse—is more limited.
D. Economic Indicators
Productivity (Bureau of Labor Statistics)
U.S. nonfarm business sector labor productivity:
| Period | Annual Growth Rate |
|---|---|
| 2010–2019 average | 1.2% |
| 2020 | 2.5% |
| 2021 | 2.0% |
| 2022 | -1.7% |
| 2023 | 2.6% |
| 2024 Q1–Q3 | 2.3% |
The 2023–2024 productivity acceleration is notable: it occurred despite post-pandemic normalization and interest-rate increases that might have been expected to depress it. Federal Reserve economists have begun investigating AI as a potential contributor.
Business Investment (Bureau of Economic Analysis)
Private fixed investment in information processing equipment and software (real, 2017 dollars):
| Year | Investment (billions) |
|---|---|
| 2019 | $701.2 |
| 2020 | $714.3 |
| 2021 | $802.1 |
| 2022 | $879.4 |
| 2023 | $941.7 |
These data reflect real, inflation-adjusted growth in investment rather than speculative valuation increases. Businesses are allocating capital in the expectation of productive returns. (McKinsey Global Institute, 2021)
AI Adoption (Census Bureau)
The Census Bureau's Business Trends and Outlook Survey tracks AI adoption:
| Period | % of Businesses Using AI |
|---|---|
| Q1 2024 | 5.4% |
| Q4 2024 | 7.8% |
Among large firms (250+ employees): 18.2%
This represents an early stage of AI adoption. For comparison, business internet adoption reached 50% around the year 2000, and cloud computing adoption reached 50% around 2018. If AI adoption follows similar trajectories, the current period marks the early acceleration phase.
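The two BTOS readings imply a quarterly growth rate. A simple compounding sketch, not a forecast, since adoption curves are typically S-shaped and flatten well before saturation:

```python
# Implied quarterly growth in the BTOS adoption share: 5.4% (Q1 2024) to
# 7.8% (Q4 2024), i.e. three quarters of compounding. Illustrative only;
# extrapolating this rate forward would overstate adoption.
q1, q4, quarters = 0.054, 0.078, 3
rate = (q4 / q1) ** (1 / quarters) - 1
print(f"{rate:.1%} per quarter")  # 13.0% per quarter
```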
Sources: BLS Productivity and Costs; BEA National Income and Product Accounts; Census Bureau BTOS.
V. Evaluating Doctorow's Specific Claims
A. Is "Spicy Autocomplete" Fair?
Doctorow calls LLMs "just a word-guessing program." Is this reductive, or accurate?
One response: reductive descriptions apply to anything. "Computers are just electrons." "Brains are just neurons firing." True at one level, misleading at another.
But this elides a real debate. Bender and Koller (2020) argue that models trained on text alone cannot learn meaning—"autocomplete" is accurate, not dismissive. Mitchell and Krakauer (2023) counter that the "understanding" debate conflates distinct concepts; empirical evaluation beats definitional disputes.
What matters for practical purposes:
- If LLMs are "just" pattern-matching, expect failures on novel tasks outside training distribution
- If they've learned something more robust, expect better generalization
The empirical record is mixed. LLMs succeed on tasks that seem to require reasoning, fail on others that seem straightforward. The BCG "jagged frontier"—helps within boundaries, hurts outside them—fits the pattern-matching model.
Verdict: "Spicy autocomplete" may be accurate about mechanism while underestimating the utility of sophisticated pattern-matching for practical tasks. The framing is not obviously wrong, but neither is it obviously complete.
B. The Library Hallucination Example
Doctorow cites AI generating references to nonexistent software libraries—a real security vulnerability ("package hallucination").
This is a legitimate concern. But it shows AI-assisted development needs safeguards, not that AI coding assistance is net negative. Context-specific risk ≠ general failure.
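One safeguard of the kind this implies: screen AI-suggested dependencies against a vetted allowlist before installing anything. A minimal sketch; the allowlist contents and package names are hypothetical:

```python
# Hedged sketch of one safeguard against "package hallucination": check every
# AI-suggested dependency against a vetted allowlist before installation.
# The allowlist and the suggested names below are hypothetical examples.
VETTED = {"requests", "numpy", "flask"}

def screen_dependencies(suggested):
    """Split AI-suggested package names into (approved, flagged for review)."""
    approved = [p for p in suggested if p in VETTED]
    flagged = [p for p in suggested if p not in VETTED]
    return approved, flagged

# "fastjson-utils" is a made-up name of the kind a model might hallucinate:
ok, review = screen_dependencies(["requests", "fastjson-utils"])
print(ok, review)  # ['requests'] ['fastjson-utils']
```

In practice the check would also consult the package index and pin known-good versions; the allowlist here only illustrates the shape of the control.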
C. The Radiology Deployment Model
Doctorow describes AI radiology as pure cost-cutting: fire 9 of 10 radiologists, make the survivor an "accountability sink."
Actual clinical evidence:
- Mammography (McKinney et al., Nature 2020): Reduced false positives by 5.7% and false negatives by 9.4%
- Diabetic retinopathy (FDA-cleared IDx-DR): 87% sensitivity, 91% specificity—enables screening in primary care settings that lack specialists
The distinction that matters: Doctorow conflates deployment problems with capability questions. Bad deployment models call for better design, not denial of what AI demonstrably accomplishes.
D. "AI Can't Do Your Job"
This is an empirical claim. The evidence:
- Some tasks (routine writing, simple coding, customer scripts): AI performs at or above human level
- Some tasks (complex reasoning, novel situations, ethical judgment): AI performs below human level
"AI can't do your job" is as wrong as "AI can do your job." Both ignore task heterogeneity. The jagged frontier is real.
VI. What Should Decision-Makers Do?
Different diagnoses require different responses. Three scenarios:
Scenario A: AI Is Mostly Hype
If Doctorow is right:
- Demand rigorous evidence before public investment
- Redirect R&D funding elsewhere
- Wait for bubble to deflate
- Minimal governance—don't build infrastructure for capabilities that won't materialize
Scenario B: Heterogeneous Capabilities, High Uncertainty
Most supported by evidence:
- Adaptive governance: Frameworks that adjust as capabilities clarify
- Maintain public R&D capacity: Don't cede the field to private actors
- Hedge: Prepare for multiple trajectories
- Monitor empirically: Assess actual capabilities, not industry claims
- Precautionary deployment: Require demonstrated safety before high-stakes uses
This scenario fits the "jagged frontier" finding: useful within boundaries, harmful outside, boundaries hard to identify in advance.
Scenario C: Real Capabilities, Misaligned Incentives
If AI works but deployment incentives are broken:
- Binding governance with enforcement mechanisms
- Labor power: Collective bargaining over deployment
- Liability: Clear accountability that prevents "accountability sink" dynamics
- Transition support: Retraining, social insurance
- Antitrust: Address concentrated control over AI development
The Cost of Getting It Wrong
| Error | Consequence |
|---|---|
| Treating B or C as A | Unprepared for real capabilities; governance ceded to industry; reactive crisis policy |
| Treating A or B as C | Overbuilt governance for capabilities that don't materialize; regulatory burden advantages incumbents |
| Treating A or C as B | Either insufficient response to harms (if C) or excessive hedging costs (if A) |
The evidence most supports Scenario B: heterogeneous capabilities with high uncertainty. Adaptive governance beats dismissal or comprehensive intervention.
VII. Conclusion
Doctorow's critique contains valuable insights. His analysis of bubble dynamics is persuasive. His warnings about "reverse centaurs" and accountability sinks are real risks.
But the essay makes a category error: conflating bubble dynamics with capability assessment. These are separate questions. That AI investment is a bubble doesn't prove AI doesn't work. That AI works on some tasks doesn't justify current valuations.
What the evidence shows:
- Measurable productivity gains on specific tasks (14–55%)
- Scientific breakthroughs in narrow domains (protein folding, materials, code generation)
- No mass displacement yet—but this is a lagging indicator
- Significant limitations: the "jagged frontier," degradation outside capability boundaries
What we don't know: Whether gains generalize, persist, or justify valuations. Uncertainty is high.
The recommended response: Adaptive governance. Empirical monitoring. Preparation for multiple scenarios. Focus on deployment practices, not capability denial.
Doctorow's concerns about corporate power, labor displacement, and accountability remain valid—regardless of capability assessment. These are deployment and governance questions, not whether AI "works."
The challenge isn't to debunk AI or promote it. It's to understand it clearly enough to govern wisely. That requires separating the questions this essay has distinguished.
Appendix: Data Tables
Table 1: Software Developer Employment (BLS OEWS)
| Year | Employment | Median Wage | YoY Employment Change |
|---|---|---|---|
| 2019 | 1,469,200 | $107,510 | — |
| 2020 | 1,517,400 | $110,140 | +3.3% |
| 2021 | 1,622,600 | $120,730 | +6.9% |
| 2022 | 1,717,100 | $124,200 | +5.8% |
| 2023 | 1,795,300 | $127,260 | +4.6% |
Source: Bureau of Labor Statistics, Occupational Employment and Wage Statistics
Table 2: U.S. E-commerce Sales (Census Bureau)
| Year | E-commerce Sales | % of Total Retail |
|---|---|---|
| 2000 | $27.6B | 0.9% |
| 2005 | $86.3B | 2.4% |
| 2010 | $165.4B | 4.2% |
| 2015 | $341.7B | 7.3% |
| 2020 | $791.7B | 14.0% |
| 2023 | $1,119.0B | 15.4% |
Source: U.S. Census Bureau, Quarterly Retail E-Commerce Sales
Table 3: Labor Productivity Growth (BLS)
| Period | Annual % Change |
|---|---|
| 2010–2019 avg | 1.2% |
| 2020 | 2.5% |
| 2021 | 2.0% |
| 2022 | -1.7% |
| 2023 | 2.6% |
| 2024 Q1–Q3 | 2.3% |
Source: Bureau of Labor Statistics, Productivity and Costs
Table 4: AI Productivity Studies Summary
| Study | Domain | Sample | Key Finding | Effect |
|---|---|---|---|---|
| Noy & Zhang (2023) | Writing | 453 professionals | Time reduction | -37% |
| Noy & Zhang (2023) | Writing | 453 professionals | Quality improvement | +18% |
| Brynjolfsson et al. (2023) | Customer Service | 5,179 agents | Productivity (avg) | +14% |
| Brynjolfsson et al. (2023) | Customer Service | 5,179 agents | Productivity (novice) | +35% |
| Peng et al. (2023) | Software Dev | 95 developers | Task completion time | -55% |
| Dell'Acqua et al. (2023) | Consulting | 758 consultants | Quality (within frontier) | +40% |
| Dell'Acqua et al. (2023) | Consulting | 758 consultants | Speed (within frontier) | +25% |
Table 5: Scientific AI Benchmarks
| System | Domain | Result | Validation |
|---|---|---|---|
| AlphaFold2 | Protein Structure | ~92 GDT median accuracy | CASP14; experimental validation |
| GNoME | Materials Discovery | 2.2M new structures | 736 independently synthesized |
| AlphaProof/Geometry | Mathematics | 4/6 IMO problems (28/42 pts) | IMO judges verified proofs |
| SWE-bench leaders | Code Generation | 40–49% issue resolution | Automated test verification |
References
Adinarayan, T., & Barnert, J. (2022, February 3). Facebook owner Meta's stock plunges, wiping out $240 billion in value. Reuters. https://www.reuters.com/technology/facebook-owner-metas-stock-plunges-wiping-out-240-billion-value-2022-02-03/
Alam, S. (2025). The dot-com bubble. Investopedia. https://www.investopedia.com/terms/d/dotcom-bubble.asp
Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185–5198). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.463
Benzinga. (2025, January 21). Here's how much investing $1,000 in Amazon at dot-com bubble peak would be worth today. https://www.benzinga.com/general/education/25/01/43106558/heres-how-much-investing-1-000-in-amazon-at-dot-com-bubble-peak-would-be-worth-today
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research. https://www.nber.org/papers/w31161
Caballero, R. J. (2025). Speculative growth and the AI 'bubble' (MIT Economics Working Paper No. 2025-12). https://doi.org/10.2139/ssrn.3745678
Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality (Harvard Business School Working Paper No. 24-013). https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7c7e.pdf
Doctorow, C. (2026, January 18). AI companies will fail. We can salvage something from the wreckage. The Guardian. https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur
Fang, X., Tao, L., & Li, Z. (2025). Anchoring AI capabilities in market valuations: The capability realization rate model and valuation misalignment risk. arXiv. https://doi.org/10.48550/arXiv.2505.10590
Goetze, T. S. (2024). AI art is theft: Labour, extraction, and exploitation, or, on the dangers of stochastic Pollocks. arXiv. https://doi.org/10.48550/arXiv.2401.06178
Google DeepMind. (2024, July 25). AI achieves silver-medal standard solving International Mathematical Olympiad problems. https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/
Jimenez, C. E., Yang, J., Wettig, A., Yao, S., Pei, K., Press, O., & Narasimhan, K. (2024). SWE-bench: Can language models resolve real-world GitHub issues? arXiv. https://arxiv.org/abs/2310.06770
Jovanovic, B., & Rousseau, P. L. (2005). General purpose technologies. In P. Aghion & S. N. Durlauf (Eds.), Handbook of economic growth (Vol. 1B, pp. 1181–1224). Elsevier. https://doi.org/10.1016/S1574-0684(05)01018-X
Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., … Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596, 583–589. https://doi.org/10.1038/s41586-021-03819-2
McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., … Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577, 89–94. https://doi.org/10.1038/s41586-019-1799-6
McKinsey Global Institute. (2021). The rise and rise of the global balance sheet: How productively are we using our wealth? McKinsey & Company. https://www.mckinsey.com/industries/financial-services/our-insights/the-rise-and-rise-of-the-global-balance-sheet
Merchant, A., Batzner, S., Schoenholz, S. S., Aykol, M., Cheon, G., & Cubuk, E. D. (2023). Scaling deep learning for materials discovery. Nature, 624, 80–85. https://doi.org/10.1038/s41586-023-06735-9
Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI's large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120. https://doi.org/10.1073/pnas.2215907120
Morgan Stanley Research. (2025). The $13 trillion AI growth story. Morgan Stanley.
Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586
Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv. https://arxiv.org/abs/2302.06590
Perez, C. (2003). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Edward Elgar Publishing.
Railway mania. (2023, December 15). In Wikipedia. https://en.wikipedia.org/wiki/Railway_Mania
Railways. (1911). In Encyclopædia Britannica (11th ed.). https://en.wikisource.org/wiki/1911_Encyclopædia_Britannica/Railways
Schouten, F. (2025, February 20). Tech companies axed 5,000 Mass. workers in 2024. Axios Boston. https://www.axios.com/local/boston/2025/02/21/tech-companies-layoffs-massachusetts-2024
Timeline of United States railway history. (2024, January 10). In Wikipedia. https://en.wikipedia.org/wiki/Timeline_of_United_States_railway_history
U.S. Bureau of Economic Analysis. (2024). National income and product accounts (Table 5.3.6). https://www.bea.gov/data/gdp/gross-domestic-product
U.S. Bureau of Labor Statistics. (2024). Occupational employment and wage statistics. https://www.bls.gov/oes/
U.S. Bureau of Labor Statistics. (2024). Productivity and costs. https://www.bls.gov/lpc/
U.S. Census Bureau. (2024). Quarterly retail e-commerce sales. https://www.census.gov/retail/ecommerce.html
U.S. Census Bureau. (2024). Business trends and outlook survey. https://www.census.gov/data/experimental-data-products/business-trends-and-outlook-survey.html
Zippia. (2026). Data scientist job outlook and growth in the US. https://www.zippia.com/data-scientist-jobs/trends/