I don’t do AI research. What I research, broadly speaking, is media and its impact on societies. So LLMs and generative AI are tangential to what I work on.

From my vantage point, AI seems frightening. But what if it’s not? As with all industries, the many firms of the early days eventually consolidate into a few. Since this industry’s main product is the language model, a few things occurred to me. 1) The remaining firms will have enormous power to control the ultimate “voice” of AI. 1/8

2) These firms will serve ever more diverse permutations of human culture, behaviors, and beliefs. 3) Without “efficient institutions,” umpires (e.g., governments, regulatory bodies, agreed-upon norms, and the other things that underwrite societal expectations), there is no guarantee that they will serve as many stakeholders’ interests. 4) But what if these institutions still exist and are able to provide guardrails? What then? 2/8

What does a future look like if an AI apocalypse is averted? Obviously, not all efficient institutions are effective. But in the grand aggregate of the world, what if on balance they are? Then what?

I think that maybe 1) the firms in this space just might train their models on the narratives that make us truly human. Think about it this way: the narratives we tend to write, even the dystopian ones, tend to valorize traits we find pro-social. Even if the protagonist is an antihero, 3/8

there is almost always a kernel, a larger human value that we find admirable. Values such as kindness, compassion, acceptance, generosity, and forgiveness are present everywhere from humanity’s philosophical works, fiction, and religious texts to its entertainment media.

In other words, what if, a hundred years from now, the curriculum vitae of humanity fed into some Grand LLM results in an AI that is a reflection of what it means to be Human?

What if the guardrails the scientific and philosophical communities are urging now are heeded? 4/8

In that future permutation, I bet AI doesn’t destroy us but reminds us of what is best about humanity. Despite the violence our fears and hatreds often engender, what if the Frankenstein’s monster reflected back to us shows us we can be beautiful? What if AI, presumably something we work with daily in that future, is a constant mirror showing us how good we can all be? 5/8

And what if what we create doesn’t destroy us but helps us see one another as victims of our common oppressors: the uncertainty of life and the knowledge that we will die without knowing for sure that we mattered?

What if AI ends up showing us that we do matter, that humanity can be good and beautiful? What if it actually liberates us?

That would be a beautiful day indeed. 6/8

How do we ensure we set ourselves on this path? No clue. But I’m sure of two things. 1) Efficient institutions (apologies to political economists; I’m taking liberties with the concept) can exist. In a way, we saw this in the US for a time. To the extent that PICAN rules, the fairness doctrine, ownership limits, and so forth worked, media was able to reflect back to us a broader picture of the US, allowing the work of the Civil Rights movement to march us toward justice. 7/8

2) I know we all teach our children to value love, kindness, and compassion. While we often pollute those lessons by qualifying those pro-social values (e.g., love this group but not that one) and teach hate, I wonder if our LLMs would eventually “decide” that those qualifications and hate are “noise,” and that the real patterns describing humans are compassion and loving kindness.

That would be a good future indeed. So, scholarly communities like #AoIR or #CHI, how do we get to a future like that? 8/8