The Usefulness of “Useless” Knowledge (and Why AI Makes Flexner Even More Right)

I just finished reading The Usefulness of Useless Knowledge again, this time with the perspective of living through a period of rapid technological acceleration driven by AI. On an earlier reading, Flexner’s defense of curiosity-driven inquiry felt aspirational and almost moral in tone, a principled argument for intellectual freedom. On rereading, it feels more diagnostic. Many of the tensions he identified (between short-term utility and long-term understanding, between institutional incentives and genuine discovery) now play out daily in how we fund, evaluate, and deploy AI research. What has changed is not the structure of his argument, but its urgency: in a world increasingly optimized for immediate outputs, Flexner’s insistence that transformative advances often arise from questions with no obvious application reads less like an idealistic manifesto and more like a practical warning.

In 1939, on the eve of a world war, Abraham Flexner published a slim, stubbornly optimistic essay with a mischievous title: The Usefulness of Useless Knowledge. His claim is not that practical work is bad. It’s that the deep engine of civilization is often curiosity that doesn’t start with an application in mind, and that trying to force every idea to justify itself immediately is a reliable way to stop the next revolution before it begins.

Robbert Dijkgraaf’s companion essay (and related pieces written from his vantage point at the Institute for Advanced Study) updates Flexner’s argument for a world now built out of microelectronics, networks, and software, exactly the substrate on which modern AI sits. Reading them together today feels like watching two people describe the same phenomenon across two eras: breakthroughs are usually the delayed interest on “useless” questions.

Below is a guided tour of their core ideas, with a detour through the current AI moment, where “useless” knowledge is quietly doing most of the work.


Flexner’s central paradox: curiosity first, usefulness later

Flexner’s essay is a defense of a particular kind of intellectual freedom: the right to pursue questions without writing an ROI memo first.

Dijkgraaf highlights one of Flexner’s most quoted lines (and the one that best captures the whole stance): “Curiosity… is probably the outstanding characteristic of modern thinking… and it must be absolutely unhampered.”

That “must” is doing a lot of work. Flexner isn’t saying that applications are optional. He’s saying the route to them is often non-linear and hard to predict. He even makes the institutional point: a research institute shouldn’t justify itself by promising inventions on a timeline. Instead: “We make ourselves no promises… [but] cherish the hope that the unobstructed pursuit of useless knowledge” will matter later.

Notice the subtlety: he hopes it will matter, but he refuses to make that the official rationale. Why? Because if you only fund what looks useful today, you’ll underproduce the ideas that define tomorrow.


The “Mississippi” model of discovery (and why it matters for AI)

Flexner is unusually modern in how he describes the innovation pipeline: not as single geniuses striking gold, but as a long chain of partial insights that only later “click.”

He writes: “Almost every discovery has a long and precarious history… Science… begins in a tiny rivulet… [and] is formed from countless sources.”

This is basically an antidote to the myth that research can be managed like a factory. You can optimize a pipeline once you know what the pipeline is. But when you’re still discovering what questions are even coherent, “efficiency” often means “premature narrowing.”

AI is a perfect example of the Mississippi model. Modern machine learning is not one idea; it’s a confluence:

  • mathematical statistics + linear algebra,
  • optimization + numerical computing,
  • information theory + coding,
  • neuroscience metaphors + cognitive science,
  • hardware advances + systems engineering,
  • and now massive-scale data and infrastructure.

Much of that was, at some point, “not obviously useful” until it suddenly was.


Flexner’s warning: the real enemy is forced conformity

Flexner’s defense of “useless knowledge” is not only about technology; it’s about human freedom. He’s writing at a time when universities were being pushed into ideological service, and he argues that the gravest threat is not wrong ideas, but the attempt to prevent minds from ranging freely.

One of his sharpest lines: “The real enemy… is the man who tries to mold the human spirit so that it will not dare to spread its wings.”

If you read that in 2025, it lands uncomfortably close to modern pressures on research:

  • “Only fund what’s immediately commercial.”
  • “Only publish what’s trendy.”
  • “Only study what aligns with the current institutional incentive gradient.”
  • “Only build what can be shipped next quarter.”

And in AI specifically:

  • “Only do work that scales.”
  • “Only do benchmarks.”
  • “Only do applied product wins.”

Flexner isn’t anti-application; he’s anti-premature closure.


Dijkgraaf’s update: society runs on knowledge it can’t fully see anymore

Dijkgraaf’s companion essay takes Flexner’s stance and says, essentially: look around, Flexner won. The modern world is built out of the long tail of basic research.

He gives a crisp late-20th-century example: the World Wide Web began as a collaboration tool for particle physicists at CERN (proposed in 1989, released to the public in 1993). He ties that to the grid and cloud computing developed to handle scientific data, technology that now undergirds everyday internet services. Then he makes a claim that matters a lot for AI policy debates: fundamental advances are public goods (i.e., they diffuse beyond any single lab or nation). That’s an especially relevant lens for AI, where:

  • open ideas (architectures, optimization tricks, safety methods) propagate fast,
  • but compute, data, and deployment concentrate power.

If knowledge is a public good, then a society that starves basic research is quietly selling off its future, even if it still “uses” plenty of science in the present.


AI as a case study in “useful uselessness”

Here’s a helpful way to read Flexner in the age of AI:

A) “Useless” questions that became AI infrastructure

Many of the questions that shaped AI looked abstract or niche before they became inevitable:

  • How do high-dimensional models generalize?
  • When does overparameterization help rather than hurt?
  • What is the geometry of optimization landscapes?
  • How can representation learning capture structure without labels?
  • What are the limits of compression, prediction, and inference?

These don’t sound like product requirements. They sound like “useless” theory, until you realize they govern whether your model trains at all, whether it’s robust, whether it leaks private data, whether it can be aligned, and whether it fails safely.

Flexner’s point isn’t that every abstract question pays off. It’s that you can’t pre-identify the ones that will, and trying to do so narrows the search too early.
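
To make one of those questions concrete, here’s a toy sketch in Python (my illustration, not anything from Flexner or Dijkgraaf) of the overparameterization puzzle. It fits noisy samples of sin(x) with random Fourier features and a minimum-norm least-squares solution; the feature counts, data sizes, and seeds are all arbitrary choices for the demo.

    # Toy sketch: when does overparameterization help rather than hurt?
    # Fit noisy samples of sin(x) with random Fourier features, solve with the
    # minimum-norm least-squares solution, and compare held-out error as the
    # number of features grows past the number of training points.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train = 20
    x_train = rng.uniform(-3, 3, n_train)
    y_train = np.sin(x_train) + 0.1 * rng.normal(size=n_train)
    x_test = np.linspace(-3, 3, 200)
    y_test = np.sin(x_test)

    for n_feat in [5, 20, 500]:  # under-, critically-, and over-parameterized
        feat_rng = np.random.default_rng(1)    # same features for train and test
        w = feat_rng.normal(0.0, 2.0, n_feat)  # random frequencies
        b = feat_rng.uniform(0.0, 2 * np.pi, n_feat)  # random phases
        phi_train = np.cos(np.outer(x_train, w) + b)
        phi_test = np.cos(np.outer(x_test, w) + b)
        theta = np.linalg.pinv(phi_train) @ y_train  # minimum-norm fit
        mse = np.mean((phi_test @ theta - y_test) ** 2)
        print(f"{n_feat:4d} features: test MSE = {mse:.4f}")

On typical seeds, the held-out error is worst near the interpolation threshold (features ≈ training points) and improves again in the heavily overparameterized regime: the “double descent” picture that looked like a statistical curiosity before it started describing large neural networks.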

B) “Tool-making” is often the hidden payoff

Dijkgraaf emphasizes that pathbreaking research yields tools and techniques in indirect ways. AI progress has been exactly this: tool-making (optimizers, architectures, pretraining recipes, eval frameworks, interpretability methods, privacy-preserving techniques) that later becomes the platform everyone builds on.

C) The scary twist: usefulness for good and bad

Flexner also notes that discoveries can become instruments of destruction when repurposed. He uses chemical and aviation examples to make the point.

AI has the same dual-use character:

  • The same generative model family can draft medical summaries or automate phishing.
  • The same computer vision advances can improve accessibility or expand surveillance.
  • The same inference tools can find scientific patterns or extract sensitive attributes.

Flexner’s framework doesn’t solve dual-use, but it forces honesty: the ethical challenge isn’t a reason to stop curiosity; it’s a reason to pair curiosity with governance, norms, and safeguards.


A Flexnerian reading of the current AI funding wave

We’re currently living through a paradox that Flexner would recognize instantly:

  1. AI is showered with investment because it’s visibly useful now.
  2. That investment creates pressure to define “research” as whatever improves next quarter’s metrics.
  3. But the next conceptual leap in AI may come from areas that look “useless” relative to today’s dominant paradigm.

If you want better long-horizon AI outcomes (robustness, interpretability, privacy, security, alignment, and scientific discovery), Flexner would argue you need institutions that protect inquiry that isn’t instantly legible as profitable.

Or in his words, you need “spiritual and intellectual freedom.”


What to do with this (three practical takeaways)

1) Keep a portfolio: fast product work + slow foundational work

Treat research like an ecosystem. If everything must justify itself immediately, you get brittle progress. Flexner’s “no promises” stance is a feature, not a bug.

2) Reward questions, not only answers

Benchmarks matter, but they can also overfit the field’s imagination. Some of the most important AI work right now is about re-framing the question (e.g., what counts as “understanding,” what counts as “alignment,” what counts as “privacy,” what counts as “truthfulness”).

3) Build institutions that protect intellectual risk

Flexner designed the Institute for Advanced Study around the idea that scholars “accomplish most when enabled” to pursue deep work with minimal distraction. AI needs its own versions of that: spaces where the incentive is insight, not velocity.


AI is not an argument against Flexner (it’s his exhibit A)

If you hold a smartphone, use a search engine, or interact with modern AI systems, you’re touching the compounded returns of yesterday’s “useless” knowledge.

Flexner’s defense isn’t sentimental; it’s strategic. A society that wants transformative technology must also want the conditions that produce it: freedom, patience, and room for ideas that don’t yet know what they’re for. Or, as Dijkgraaf puts it in summarizing Flexner’s view: fundamental inquiry goes to the “headwaters,” and applications follow, slowly, steadily, and often surprisingly.


Main Source: https://www.ias.edu/ideas/2017/dijkgraaf-usefulness

NaijaCoder at the University of Lagos (UNILAG)

Last year, NaijaCoder started hosting its Lagos camp at the University of Lagos (UNILAG). The Abuja camp started in 2022.

The University of Lagos (UNILAG) is a leading public research university in Lagos, Nigeria. It is often celebrated as “the University of First Choice and the Nation’s Pride.” Founded in 1962, shortly after Nigeria’s independence, UNILAG was one of the country’s first-generation universities. Over the past six decades, it has grown into one of Nigeria’s most prestigious institutions. I’ll briefly discuss UNILAG’s rich history and highlight recent NaijaCoder camps at UNILAG’s Artificial Intelligence and Robotics Lab (AIRLab). The goal of this post is not to provide a comprehensive overview of UNILAG but to highlight NaijaCoder’s connections to the university.

A Brief History of UNILAG

UNILAG was established by an Act of Parliament in 1962 as an immediate response to the national need for a competent professional workforce to drive Nigeria’s social, economic, and political development. (At the time, Lagos was still Nigeria’s federal capital.) UNILAG opened its doors on October 22, 1962, with just 131 students, but rapidly expanded in scope and enrollment. By 1964, additional faculties such as Arts, Education, Engineering, and Science had been added to the original three faculties (Business & Social Studies, Law, and Medicine). This early growth set the stage for UNILAG’s transformation into a comprehensive university. Today, the university enrolls tens of thousands of students and operates across three campuses in Lagos: the main campus at Akoka in Yaba, the College of Medicine at Idi-Araba, and a smaller campus at Yaba for radiography. (I’m currently writing this post at the Akoka campus.)

From its inception, UNILAG played a critical role in Nigeria’s development. During the decades when Lagos was the nation’s capital, the university became a key intellectual hub influencing politics and public policy. Its student body was notably diverse and cosmopolitan, attracting talent from different regions and economic backgrounds, which helped cultivate a generation of educated Nigerians poised to lead in various sectors. Over the years, UNILAG has weathered challenges, such as economic downturns in the 1980s that strained facilities and led to some brain drain, but it rebounded by expanding revenue streams, improving its academic reputation, and drawing in more students. By 2011, enrollment had grown to over 39,000, a far cry from the 131 pioneer students in 1962. In recent times, student population figures have exceeded 57,000 annually, reflecting UNILAG’s status as one of Nigeria’s largest and most in-demand universities. In fact, it is one of the country’s most competitive schools for admissions.

University leadership has made it clear that research and innovation are at the heart of UNILAG’s future trajectory. Professor Folasade Ogunsola, who became UNILAG’s first female Vice-Chancellor in 2022, has articulated a vision to make UNILAG a “future-ready, research-oriented and enterprise-driven hub.” She introduced a strategic framework with the acronym “FIRM”: Financial re-engineering, Infrastructural development, Reputation building through teaching, research, and innovation, and Manpower development. A major part of this vision, it seems, is strengthening research output and global partnerships (through initiatives like the NaijaCoder partnership).

NaijaCoder Camps at UNILAG AIRLab (2024 & 2025) – Empowering the Next Generation

NaijaCoder is a non-profit organization dedicated to teaching algorithms to young Nigerians. In the summers of 2024 and 2025, UNILAG’s AIRLab (AI and Robotics Lab) partnered with NaijaCoder to run intensive camps in Lagos. Prof. Chika Yinka-Banjo, the director of the AIRLab, has been instrumental in bringing the program to Lagos, from initial recruiting to daily logistics to TA recruitment.

In Summer 2024, the Lagos NaijaCoder camp was held right on UNILAG’s campus in collaboration with the AIRLab. For 14 days, about 50 students (mostly teens) immersed themselves in learning the basics of algorithms at the UNILAG AI & Robotics Lab. The curriculum introduced the participants to core concepts in an accessible way: the instructors covered everything from basic Python syntax and data types to loops and recursion, searching and sorting algorithms, basic data structures, and the use of Python libraries. By the final days, students were applying their knowledge to solve problems and took an exam/competition to cap off their learning. The hands-on sessions were facilitated by instructors from NaijaCoder alongside UNILAG volunteers. Following the success of the 2024 program, we just finished Week 1 of NaijaCoder at the UNILAG AIRLab this summer (Summer 2025).
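
For a sense of the level, here’s an illustrative exercise in the spirit of that curriculum (my sketch, not actual NaijaCoder material), combining two of the topics above: recursion and searching.

    # Illustrative camp-style exercise (not official NaijaCoder material):
    # recursive binary search over a sorted list.
    def binary_search(items, target, lo=0, hi=None):
        """Return the index of target in the sorted list items, or -1 if absent."""
        if hi is None:
            hi = len(items) - 1
        if lo > hi:          # empty range: target is not present
            return -1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            return binary_search(items, target, mid + 1, hi)  # right half
        return binary_search(items, target, lo, mid - 1)      # left half

    scores = [3, 7, 11, 15, 22, 28, 31]
    print(binary_search(scores, 22))  # -> 4
    print(binary_search(scores, 5))   # -> -1

Each recursive call halves the search range, which is how students first meet the idea of logarithmic running time.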

At NaijaCoder, we look forward to continued collaboration with UNILAG to bring computing-related curricula to classrooms across Nigeria.

Academia is Not Perfect But It Can Be Transformative


That’s my office at the University of Illinois, Urbana-Champaign. Come say hi!

Academia is not perfect. But for many of us, it remains the most accessible and reliable system we have for helping people, especially students from disadvantaged backgrounds, achieve economic, social, and intellectual mobility.

This truth came into sharp focus again as I listened to the powerful 2025 Harvard commencement address by a fellow immigrant, born in Ethiopia. You can watch it here. Like the speaker, I came to the United States as an “alien,” driven by hope, hard work, and a belief that education could change lives. And it has. I’m now a Harvard-trained professor, and I can honestly say that academia has transformed not only my life but the lives of my family and friends. Thank you, academia!

But I didn’t always feel this way.

There have been moments when I was ready to walk away from it all, when the “ivory tower” felt more like a fortress of burnout, ego, and misplaced priorities than a place for learning and growth. Let me share one example. It’s not the only one, but it’s seared into my memory.

Fall 2017: My Worst Semester in Graduate School

It was my hardest semester. Many friends (mostly at Harvard and MIT) were struggling: with research, with personal issues, with mental health. The pressure was suffocating. Then in October, a devastating event happened: an amazing Harvard undergraduate student I knew passed away. It shook the community. You can read one account of that time in this Crimson article. But the article only scratches the surface. Behind closed doors, people were hurting.

The very next day, I remember writing to a colleague: “I’m done with academia.” I was angry, heartbroken, and disillusioned. How could we call this a place of learning when the well-being of students seemed like an afterthought?

At one point, I told a friend: “Even if I finish my PhD, I can’t become a professor. There’s too much blood on the hands of professors. I don’t want to be part of a system that prioritizes awards, grants, and prestige over the health and wellness of students.”

That was my truth then.

The System Is Flawed—But We’re Also Part of It

The reality is that academia is both a system and a community. And like all systems, it reflects the values of those who participate in it. Professors, students, administrators: we all carry some responsibility. Systemic change requires constant reform. It also requires courage.

I often wonder why so many brilliant people end up in industry (e.g., Google Brain), seemingly leaving academia behind. For some, it’s about opportunity. For others, it’s about survival.

But over time, my perspective has shifted. I’ve encountered professors who are deeply committed to their students’ growth and well-being. I’ve seen programs and projects that genuinely change lives. And I’ve come to realize that as long as I can stay true to my principles, and use my position to help others, I’m okay with failing by some external metric. In fact, I welcome it.

Because anyone who isn’t failing at something is probably not trying hard enough.

Reaching Out and Reaching Forward

If you’re struggling in school, you’re not alone. Maybe you’re in school to support your family. That’s valid. Maybe you’re there because you love to learn. That’s valid too. There’s no “wrong” reason to be in school, only your own journey to make sense of.

As a professor, my job isn’t just to teach. It’s to help you grow and to build a support network around you. There will be hard times. But we can face them together.

Over the past few years, I’ve found the deepest meaning in my work with NaijaCoder, an initiative aimed at empowering young people in Nigeria through technical education. Watching our alumni grow and excel has been one of the greatest joys of my life. It reminds me that education is not about titles or tenure. It’s about transformation.

A Hopeful Commitment

Yes, the system needs fixing. But I haven’t given up on academia. Not because it’s perfect, but because it’s possible to make it better. Because there are still people who care more about students than status. Because in every classroom, in every lab, in every student from Lagos to Urbana-Champaign, I see potential.

To every student reading this: Relax, and reach out. You don’t have to do it alone. And to every professor: Let’s do better. Our legacy is not in our publications, but in the people we uplift.


P.S. If you’re in a tough place right now, please know that it’s OK to ask for help. Failing isn’t the end. It’s often the beginning of something more honest, more human, and more lasting.