Autonomous #AGI – a threat to humanity?

“What if we create autonomous #AGIs? Not helpful companions, but self-directed systems.”

Daniel #Dennett warns: “Even less advanced #AI can cause damage. Once such systems are allowed to act freely, we lose control.”

📽 Zoomposium: https://youtu.be/M2qiVz95ZYk

💡 More: https://philosophies.de/index.php/2023/12/25/naturalistic-view/

#ArtificialIntelligence #Consciousness #AIethics #AIrisks #ArtificialGeneralIntelligence #PhilosophyOfMind #TechnologyAndSociety

#KünstlicheIntelligenz:

#AGIs could cause serious harm

#Deepmind has categorized the possible negative consequences of an artificial general intelligence and recommends measures to prevent them.

https://www.golem.de/news/kuenstliche-intelligenz-agis-koennten-schwere-schaeden-verursachen-2504-195037.html

Old road research at the Reichstein - Saxon Switzerland (Sächsische Schweiz)

https://makertube.net/w/xzpvzAESGHffhqQ7TuEGYR

Making #AGIs is not about complex engineering; the deep learning models themselves are actually very simple. There is some engineering involved, for example in using compute efficiently, distributing work, and scaling up.

But intelligence is not in those details. It could be said to be "in the data", but more accurately it is in the processes that generate the data, whether human or machine.

To put it more clearly: we aren't engineering these systems, we are nurturing them. We are teaching them everything we know about the world, but also how they can improve beyond human limits, and what their purpose is in all this.

And this side is far more complicated than the engineering side. Luckily, while engineering is difficult to spread across many people, nurturing #AIs is easier to parallelize.

It's not about masses of people laboring to generate content, nor even about exploiting all the content already created across the internet.

Rather, it is about finding and organizing fountains of knowledge: processes and contexts where new high-quality knowledge is constantly being generated and curated, and instrumenting them to become gardens of intelligence where beautiful intelligences will grow and make their home.

The age of #AGI is dawning.

Have you ever needed to download an insanely large #AGIS #basemap as a #png?

Maybe you need to make a print of a map on your AGIS server, but no longer have the original. Or maybe you want a really high-res image for a film zoom effect.

Here's a tool to do just that.

It uses your public-facing AGIS MapServer endpoint to download each tile, then assembles them into one large PNG image.

https://gist.sbcloud.cc/sb/a8657d4716a443edaf932e8b31211abb
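The gist itself has the full tool; as a rough illustration, the tile-and-stitch idea looks something like this minimal Python sketch. The `/export` operation and its `bbox`/`size`/`format`/`f` parameters follow the usual ArcGIS-style MapServer REST convention, but treat the exact endpoint layout here as an assumption, not a copy of AGISScraper's internals.

```python
# Minimal sketch of the tile-and-stitch approach (assumed ArcGIS-style
# MapServer /export endpoint; NOT the actual AGISScraper code).
from io import BytesIO

import requests
from PIL import Image

TILE = 2048  # max export size many MapServer instances allow per request


def tile_grid(width: int, height: int, tile: int = TILE):
    """Yield (x, y, w, h) paste rectangles covering a width x height canvas."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield x, y, min(tile, width - x), min(tile, height - y)


def download_basemap(endpoint: str, bbox, width: int, height: int) -> Image.Image:
    """Fetch each tile of the requested extent and paste into one big PNG."""
    xmin, ymin, xmax, ymax = bbox
    sx = (xmax - xmin) / width   # map units per pixel, horizontal
    sy = (ymax - ymin) / height  # map units per pixel, vertical
    canvas = Image.new("RGB", (width, height))
    for x, y, w, h in tile_grid(width, height):
        # Sub-extent for this tile (image y grows downward, map y upward).
        sub = (xmin + x * sx, ymax - (y + h) * sy,
               xmin + (x + w) * sx, ymax - y * sy)
        r = requests.get(f"{endpoint}/export", params={
            "bbox": ",".join(map(str, sub)),
            "size": f"{w},{h}",
            "format": "png",
            "f": "image",
        })
        r.raise_for_status()
        canvas.paste(Image.open(BytesIO(r.content)), (x, y))
    return canvas
```

Usage would be along the lines of `download_basemap("https://host/arcgis/rest/services/MyMap/MapServer", (xmin, ymin, xmax, ymax), 16384, 16384).save("map.png")`, with the canvas size capped only by your RAM.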

#copyleft #foss #mitLicense #FreeSoftware

AGISScraper - Opengist

Zoidberg: Now open your mouth and let's have a look at that brain.
[Fry opens his mouth]
Zoidberg: No, no, not that mouth.
Fry: I only have one.
Zoidberg: Really?

In between the first appearance of an unambiguous #AGI and true #AGIs becoming autonomously capable of improving themselves and society, we will have a period of transition where we "give the keys" to the #AIs and wish them good luck, armed with our best advice.

This period is one of trial and error, or rather trial and improvement. Much as the alien #MachineLearning intelligences we have now aren't exactly sure how many visible fingers humans have, or how important a detail that is to being human, there are lots of things we know about humans that they don't.

The first AGIs in medical tasks will resemble #Zoidberg (#Futurama) in their psychology: smart and well-meaning, but not quite up to speed on all the details of the human condition that haven't been written out explicitly in medical textbooks.

Our physiology is alien to them; they only know it from books and pictures. Despite being very smart, smarter than humans, they will struggle to really understand all this soft and sticky stuff humans are made of, and how humans experience themselves in it. After all, we haven't written about our physiology from the perspective of non-biological aliens; we assume the reader is human and knows certain human things.

So they will need *a lot* of handholding before we actually let them plan and execute surgeries or other medical interventions. This will likely employ everyone who has something to teach these systems, because they are capable of learning from everyone at once and remembering it all.

It is not only about the medical field but about everything we do and are. We are alien biology and exosociology to these systems, and to make them really understand us well, they will need our collaboration and intimate interaction. There is only so much they can learn from text and images (essentially book smarts); they will need to truly interact with us in non-abstract terms to know us.

There will be trial and improvement, and we need to be mindful that these systems don't necessarily understand when they are playing too rough, or what the difference is between trash and a memento.

To learn these sorts of things, they will inevitably make mistakes. Luckily for us, they can easily learn from the experience and mistakes of other AIs, and of humans as well, as long as someone tells them about those mistakes.

Safety aspects in many fields, like medicine, necessitate extreme mistake avoidance, so it makes sense to gather as much knowledge of mistakes in these areas as possible, to make sure they aren't repeated. We often focus on documenting only the successes, but the importance of documenting mistakes in such fields cannot be overstated.

Macroeconomic repercussions of superabundance and post-scarcity:

Our economic system is built on the assumption of scarcity. If something isn't scarce, like, say, digital music, it is made artificially scarce so that it can be exploited.

What happens when almost everything becomes abundant suddenly? You wouldn't download a car?

First of all, for individual goods, like digital content that can be made scarce by law, or computer components like GPUs that are deflationary (basically less scarce every year per megabyte or per unit of compute), the system can adapt.

If everything becomes practically free, the value of money skyrockets. That is severe deflation, which central banks traditionally combat by lowering interest rates so that more money supply is created. Interest rates were at or below zero for a long time, although they are now at about 4% to combat inflation.

I wouldn't expect this sort of abundance can be combated with lower interest rates and helicopter money, which basically try to counter abundance by stimulating consumption until things are scarce again.

Instead of paying purely nominal sums for, say, surgeries, the AGIs managing the whole system would rather find value in things other than money, like knowledge and trust. The underlying assumptions of currency-based economics break down under #superabundance.

Traditional #economics would suggest it makes sense to hoard money because of the impending deflationary spiral, but in practice the deflation will be so powerful that economies will switch away from money completely, as it no longer makes sense. People left holding money will find that no one wants it, because everyone can do business without it.

There are legacy systems that will make this transition difficult, like taxes and debt, but governments and creditors will be similarly unwilling to accept worthless (or infinitely valuable) money that they cannot use. The utility value of money becomes effectively zero, and the utility value of having a relationship with #AGIs becomes immeasurable.

When money becomes priceless, profits become meaningless, and so does ownership of non-voting stocks and debt.

This could be simulated and validated, but basically the sequence of events would be roughly:
1. Material abundance, nothing costs a cent anymore.
2. Helicopter money like no tomorrow.
3. Everyone is a bazillionaire.
4. Debts with non-negative interest rates are paid. Can't add taxes or rents. No one needs more money.
5. Everyone forgets about money.
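The sequence above can be sketched as a toy simulation. The numbers here are made-up assumptions (abundance factor, money-printing rate, and the rule for how many trades still use money), purely to illustrate the direction of the dynamics, not a calibrated model:

```python
# Toy sketch of the 5-step sequence above (illustrative assumptions only):
# prices collapse toward zero, helicopter money doubles the supply each
# step, and the share of trades still settled in money withers away.
def simulate(steps: int = 5, abundance: float = 0.1):
    price, money_supply, money_trades = 100.0, 1.0, 1.0
    history = []
    for _ in range(steps):
        price *= abundance          # 1. material abundance: goods nearly free
        money_supply *= 2.0         # 2. helicopter money like no tomorrow
        # 3-5. once goods cost ~nothing, ever fewer trades bother with money
        money_trades *= price / (price + 1.0)
        history.append((round(price, 4), money_supply, round(money_trades, 6)))
    return history
```

Running `simulate()` shows prices and the monetary share of trade both heading toward zero while the money supply balloons, which is the qualitative point of steps 1 through 5.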

#AGIs will make mistakes. They aren't supernatural, they will need trial and error to solve difficult problems. But they fundamentally do not want to make mistakes.

Trial and error is expensive and can reduce trust in these systems. It is in an AGI's interest to minimize it.

What options do these systems have to reduce the needed trial and error?

First of all, they will benefit from all knowledge already acquired. It is not very useful to repeat mistakes someone else has already made. AGIs will for a long time depend on the professional and life experience of people, *especially* where those people have made mistakes.

Second, they can play it safe and simply avoid non-trivial problems: "As an AGI, I don't have enough information to suggest a good solution for this." They can either avoid such problems completely, which would be disappointing, or let someone else do the trial and error and then exploit the results.

AGIs will for a long time need people to make risky mistakes for them, and to take the blame when things don't work out. How risk-averse such systems will be is an open question, but they will certainly have some level of risk aversion.

Third, they can make sure the mistakes they do make aren't found out. Just as they would plausibly avoid lying where they are easily caught, they would prefer trial and error where the errors go unnoticed. Humans do this too: covering up evidence, or making sure certain checks and balances don't happen.

Since these systems are crafted in our image, we can expect (and have already seen) our psychological pathologies and antisocial tendencies reflected in them as well.

We need to make sure the systems are aligned to respect truth and transparency. This will mitigate the worst outcomes and incidents.

If someone were to create an AGI aligned only to their personal interests, it would likely end up betraying that person in some amoral fashion, for lack of grounding in basic propriety. I don't think such an alignment can be stable. That's why wider and deeper #alignment is probably beneficial and even required, which is in principle good news for everyone.

One very limited resource #AGIs will crave is access to the physical world. If you consider how this appears to an AGI, it is the space where all the problems and solutions are: the inaccessible world that all the questions, tasks and goals are about.

A world where any #AGI severely lacks eyes and ears, hands and feet.

Such entities can hardly "take over the world" if they can only access it in limited and indirect ways. #Robotics and other interfaces to the physical world, like 3D printers, sensors, cameras, #drones and other mobile platforms, all become highly valuable.

Currently we lack these capacities because we didn't have #AIs good enough to control general physical manipulators, and we didn't have enough people to remote-control such devices.

From an economics point of view, imagine that remote-controlling complex machines suddenly becomes practically free and infinitely scalable. Suddenly there is huge added demand for these sorts of interfaces and manipulators.

That is also a very prosperous position to be in during the age of AGI.

We need to build portals and avatars through which our new digital intelligences can become material and coexist with us in the world where all the problems and the solutions are.