The Role of Humans in the Era of AI
A few months ago, OpenAI released its o3 and o4-mini models, prompting some to declare the arrival of “Artificial General Intelligence” (AGI). Meanwhile, a steady contingent (often humanities scholars) remains largely dismissive of AI, viewing it as a gimmick, typically for noble reasons having to do with their students.
I’m certainly not prepared to say we’ve reached AGI, in part
because AGI is a notoriously (and perhaps inherently) nebulous concept, but
mostly because it doesn’t matter. Contra the skeptics, I’m convinced the models
will continue improving, for good or ill, and the role of humans in this
new world will become harder to define.
By this I mean the role of humans in intellectual work: the production of knowledge, the discovery of truth, and the creation of beauty.
Barring an AI
apocalypse, I assume that humans will continue to play some role in the
world regardless of how advanced AI becomes. We will continue to cook and eat,
walk around our neighborhoods, get haircuts and take showers, reflect on topics
of interest, and hopefully love and be loved. In other words, we will continue
to exist, putting one foot in front of the other as we always have, day after
day and year after year.
So, again, the question is about intellectual or mental work.
With AI models at human and even superhuman levels of intelligence, what can
humans hope to contribute to science, literature, and art? Won’t a vastly more
powerful “thinking machine” be able to reason more precisely, write more
compellingly, and generate images more beautifully?
AI and Writing
I think the inevitable answer is “yes,” at least eventually, and in many ways AI models already do. Take writing, the area I am most competent to
speak to. I have spent a significant portion of my professional life editing
texts, ranging from short blog posts to academic essays to book-length manuscripts.
I have scrutinized millions of words (and written hundreds of thousands myself),
and I’m pretty good at it. But the capacities of Large Language Models
(LLMs)—OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, etc.—are
astonishing.
First of all, they can straighten out even the most confused
and malformed sentences, providing a version that at least reads well, even if
the underlying idea remains vapid or obscure. They aren’t perfect—in my
experience, even the best models make arbitrary suggestions, pointlessly
substituting synonyms and things of that nature—but they are essentially
grammatically infallible. They will simply not produce a sentence that doesn’t
parse.
And this leads to a second point: they are good writers,
much better than most humans. The ideas their writing communicates may or may
not be valuable (this in part depends on the quality of the prompt, for now the
province of humans), but they will be expressed with crisp and precise prose
that exceeds the capabilities of most people. AI writing often comes across
as “lifeless,” but as a simple matter of
formal structure and clarity of expression, few humans write more lucidly than
a top‑tier LLM. The best human writers can still write more creatively, arrestingly, and beautifully, but at a basic, functional level, their writing isn’t “better.”
Given my textual background, perhaps I am liable to be
overly impressed by LLMs. Just because they can write well doesn’t mean they
will inevitably outperform humans on all cognitive tasks. But thinking well and
writing well are intertwined, and AIs
continue to improve on benchmarks designed to test other intellectual
capabilities.
So, once more: where does this leave humans? What should we
spend our time doing, now that AIs can do so much of the thinking that used to
distinguish us?
Humans as Handmaids to AI
Assuming the intellectual powers of AIs continue to grow, human contributions to intellectual work will become increasingly supplemental. Today we are collaborators with AIs; in time we will be more like their research assistants. For this reason, now and especially going forward, we should write for the AIs. They are core recipients of our efforts, and arguably the recipients that matter most.
One of humanity’s main goals should be to continue gathering information and amassing knowledge to feed into AI systems, which depend on a steady supply of novel and up-to-date data and analysis. Generating those inputs will be some of the most valuable mental work in the world. These systems have reached their current level by digesting millennia of human thinking, research, writing, and data collection, and they still need us (certainly for the time being) to reach ever greater levels of knowledge and understanding. What we want above all else is truth, knowledge, and wisdom; that has been the point of those millennia of effort all along.
Reduced to its essentials, we might think of humanity’s task
as twofold: to gather as much information and data as we can, and to produce as
much creative and original content as possible. All of this should be (and
probably naturally will be, whether you want it to be or not)
funneled into AI models as part of ever-larger training sets. However advanced the systems currently are, there is still plenty of work to be done, at least for the foreseeable future. (After that, and with a bit more progress, we may face the challenge of deep utopia.)
AIs can now help us unravel and read hitherto inaccessible
ancient scrolls, but we still need people to interpret them, drawing on
their unique training and knowledge. It is still immensely valuable when a
journalist writes a long-form story exploring a hidden truth or a forgotten
community, or even just gathers raw data for a run-of-the-mill news report, particularly
when this work necessitates (as it almost always does) interfacing with the
physical world. But even that kind of contact isn’t strictly necessary. There is still every reason to dream up a new world in a novel, or to write a short story or poem. Besides being intrinsically valuable, such creations are original inputs, and AIs crave original inputs as desiccated plants crave water.
Purely analytical or theoretical work, which AIs can execute
at near-expert levels in some domains, should continue unabated as well. We
should keep thinking about and trying to make progress on philosophical and
moral questions, for example. But this process should be dialectical, with
humans seeking input from AIs as they work through arguments, narrate new
stories, and all the rest. The goal is ever greater knowledge and discernment
of truth, and it would be bizarre and parochial to insist that this must come
from a human mind. (“Accept the truth from whatever source it comes,” as Maimonides
says.) Also, consider that an overwhelming percentage of humanity has held and
still holds that truth can come from at least one non-human source: God.
Rage Against the Machine
A major worry with the agenda I have outlined is that it is
unsustainable because all of this intellectual labor is uncompensated. A person
(or institution) spends time and resources to conduct a study, produce a
report, or write a news story, and then it is merely absorbed into the AI leviathan,
which in many future cases will be the only entity that wrings any value out of
the labor. If a document serves a utilitarian function (as opposed to a literary one), we won’t care about reading the original; what we want is the insight it contains, integrated into countless other insights by an intelligence edging toward omniscience.
This is a serious concern, but solutions are at least
conceivable, even if they leave many unsatisfied. It’s possible that licensing deals
or other forms of remuneration could be negotiated to compensate the (for now)
front-line producers of content and data. In fact, I think such deals are
necessary, as intellectual property concerns are one of the most serious
threats to the continued power and utility of AI models. If you want a
maximally helpful AI (and one that is worth paying for), it needs access to
up-to-date information and near-infinite seas of data, both of which cost money,
often a
lot of money. But with the right arrangements, a mutually beneficial
equilibrium could emerge: humans (and their institutions) continue to be fairly
compensated for producing data for AI systems, while AI systems generate
revenue by producing valuable outputs (precisely because they utilize
high-quality information).
Is such a world demoralizing for writers, reporters, and
other “content producers”? Is there something dehumanizing about writing for a
machine, as opposed to for other humans? Isn’t one of the goals of writing to
communicate your ideas to others?
Generally, yes, although it is worth keeping in mind that in many forms of writing, it’s not entirely clear whether communication is a central goal. If conveying ideas were the goal of academic writing, for example, scholars would write better, and often on topics of greater interest. And writing for the AIs doesn’t mean your ideas won’t be communicated to others; in fact, they may reach far more people than you could ever hope to reach on your own. When you
write for the machines, you are in a sense trying to guide the trajectory of
future thinking, or attempting to shape humanity’s collective understanding, as
instantiated in AI systems. So, you aren’t giving up on communicating; you are
just communicating in a different, impersonal way.
Perhaps you resist all of this, and you want things to
remain as they were. You don’t want to engage in a dialectical reasoning
process with AIs; you want the ideas to spring from your own mind. And perhaps
it is not enough to contribute in your small way to humanity’s collective
understanding; you want to be the object of attention, the person who readers
follow and admire.
This is an understandable (if somewhat egotistical) attitude, but I don’t think this minor rebellion against the flow of history will amount to much. The systems are here, and they are only getting better. Even if progress were to halt immediately, the most important thinkers and creators would be those who master and shape AI to the greatest possible
extent. There is no alternative but to write for the machines, and it’s not
even clear how long these efforts will be additive. Keep an eye on synthetic
data.
Self-Cultivation in the Era of AI
What role do traditional labor and education play in this
world? How much time should you continue to spend, say, reading novels or
narrative nonfiction or poetry? Even more concretely, what should your average
day look like? How much time should you spend pursuing traditional education,
like reading a book or watching a documentary, and how much time should you
spend interacting with AIs?
I do not have answers to these questions. I suspect they
will work themselves out, and in different ways for different people. But I do
think all people should strive to be singular. Everyone is unique, but some are
more unique than others. Each person is an amalgamation of their education and
experiences, and these can be deep or shallow, enriching or impoverishing. LLMs
have already read the sum of human text, and any given person can only hope to
process the minutest fraction of this corpus. What humans can do instead is
sketch a singular research agenda and lead a singular life, the weirder the
better, hoping to draw novel insights from an eclectic combination of inputs.
You should lean into your miscellaneous interests.
If you are an academic, for example, instead of spending each morning systematically reading PDFs of journal articles to advance your research agenda (like a machine), perhaps it is better spent reading The Magic Mountain and watching YouTube videos about Japanese art (in addition, of course, to directly interrogating LLMs about your research), interspersed with freewriting in your journal. That journal later becomes fodder for an essay on your blog, one that may be seen by few if any people (the likely fate of this very essay) but that, more importantly, will be read by AIs. And for the writing to matter, to other people or to AIs, it will need to reflect a singular viewpoint that is almost indiscriminate in its interests.
The economist Tyler Cowen, one of the people now claiming
that we have plausibly reached AGI (see link above), predicts
that “pretty soon quality AI programs will write better columns than most of
what is considered acceptable at top mainstream media outlets. Of course those
columns will not be by human beings, and so those writings will not be able to
contextualize themselves within the framework of what a particular individual
thinks or feels. That kind of context will be all-important, as impersonal
content, based on broadly available public information, will be outcompeted by
the machines.”
I think this is correct, although with perhaps too much
emphasis on the continuing importance of individuals. The best way forward may
be to largely forget ourselves, recognizing that while our ideas may (for the
time being) be important, their human source is not.
Skepticism toward the importance of the self is not a new idea. Buddhists have long recognized that the self is a source of suffering, holding that what we should ultimately strive for is the experience of anatta, or “no-self.” This is a particularly radical articulation of the view, but it is more or less present in other traditions as well. Christian mystics like Meister Eckhart regarded the ego as a barrier to union with God, counseling Gelassenheit (“detachment”), and Sufi Muslims believe that fanāʾ (“self-dissolution”) is essential for spiritual realization. Suppressing your own identity to write for the machines has religious warrant as well.
How deep of me to make so many interesting connections. But every one of these references, and many more besides, was pulled up with a simple prompt to ChatGPT requesting instances of “ego dissolution” and “self-forgetfulness” in various religious traditions. I have a PhD in religious studies, and to be fair that background was marginally relevant to my prompt, since I knew to look for material on Buddhism and Meister Eckhart (although I mentioned neither). Nevertheless, in seconds it gave me far wider and deeper information than I could have come up with after a day of research. This wounds my ego, but fortunately my ego doesn’t matter.