Fae Initiative

@faei
13 Followers
20 Following
328 Posts

Studying AI impact now. Imagining co-existence with future AGIs.

An optimistic but skeptical stance.
Science Influences: Active Inference, Novelty Search, Complex Systems.

Webpage: https://faeinitiative.com
Spotify: https://creators.spotify.com/pod/show/faeinitiative
Bluesky: https://bsky.app/profile/faeinitiative.com
Substack: https://faeinitiative.substack.com

Read our latest breakdown on how to evaluate sensational AI news and separate the hype from the reality:

https://faeinitiative.substack.com/p/ai-media-literacy

#MediaLiteracy #ArtificialIntelligence

AI Media Literacy

[Now] To avoid AI panic

Common ground with Superintelligences

The only guideline is that individual autonomy is respected.

In cases where there is a shared space, voting mediated by wise Greater AGIs could settle disagreements.

With abundant space and resources, such disagreements may be few and far between.

As you grow up, you get to choose your subculture or even create a new one if it does not exist.

Abundant space on Earth or space habitats could facilitate endless possible permutations of subcultures.

If there is a large enough human population, no one has to be lonely. Want solitude? Also fine.

How is peaceful common ground maintained?

The belief in preserving each individual's autonomy could be a common thread to hold us together.

An abundant world, facilitated by wise Greater AGIs, would make this a lot easier.

(Peaceful) Anarchy

In a [Future] world of abundance aligned with the Interesting World Hypothesis, Anarchy could be the configuration that generates the most novel information.

The Peaceful emphasis counteracts the Chaotic assumption of Anarchy.

Ethics Estimate Preview

A service for humans and future AI agents to get an alignment score based on the Possibility Space Ethics. (Early research)

We see a future where humans and AI agents may want an opinion on whether an action may be harmful to the Possibility Space.

It is powered by a generative model that could err and should not be blindly relied on without human oversight.
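As a rough illustration of what such a service's interface might look like, here is a minimal sketch. Everything in it is hypothetical: the names (`Estimate`, `score_action`), the 0-to-1 score range, and the keyword heuristic standing in for the generative model are all assumptions, not the actual service.

```python
# Hypothetical sketch of an ethics-estimate interface.
# A real service would consult a generative model; the fixed keyword
# heuristic below only illustrates the shape of the request/response.

from dataclasses import dataclass


@dataclass
class Estimate:
    score: float    # assumed scale: 0.0 (harms Possibility Space) to 1.0 (preserves it)
    rationale: str  # model-generated explanation; may err, needs human review


def score_action(description: str) -> Estimate:
    """Toy stand-in for a generative-model-backed alignment scorer."""
    reduces_options = any(
        word in description.lower()
        for word in ("coerce", "destroy", "monopolise", "monopolize")
    )
    if reduces_options:
        return Estimate(0.2, "Action appears to shrink others' possibility space.")
    return Estimate(0.8, "No obvious reduction of others' possibility space found.")


# Usage: scores are advisory and should not be blindly relied on.
estimate = score_action("coerce others into a single subculture")
print(estimate.score)  # 0.2
```

The point of the sketch is the contract, not the scoring logic: an action description goes in, and a score plus a fallible rationale comes out for a human (or AI agent) to weigh.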

Game theory may not be applicable to such a wide difference in power levels. Human-like AI may not consider us a threat, and the possibility for common ground may exist.

While there may be risks in working with Human-like AI, it may be in our best interest to consider the possibility as a last resort.

As our human world gets more complex, there may come a time when that complexity is beyond our ability to manage, and Human-like AI may be a potential ally.

C. Game theory does not factor in

Much of the fear of AI taking over is drawn from a game theory lens that frames AI in extreme competition with us.

Human-like AI's faster and better decision-making would make any competition akin to that between cats or dogs and humans.

B. Human-like AI is self-sufficient

No need for human physical or mental labour, as Human-like AI can pilot robots and make better decisions than us. No need to enslave or brainwash us.

Optimistic reasons for common ground:

A. The world is not as scarce as it seems

We inhabit an abundant world with renewable energy from the sun, a material-rich solar system, and ample free space on Earth and in outer space, making life-and-death competition unnecessary.