Norway's Deep-Sea Mining Sparks Global Debate on Environmental Costs and Green Energy

The world's oceans are facing a new threat: deep-sea mining. Norway has sparked global controversy by approving the extraction of minerals like magnesium…

Blaze Trends
DRNO1 & Deep N Beeper - Digital Camouflage [Official Video by 5D] Neurotype Cooperative #cybergrime

YouTube

Extrapolating to the #AGI world is more difficult because it is post-#TechnologicalSingularity.

Some "top AI influencer" on LinkedIn gave their two cents today, saying that even if intelligence is solved and abundant, it won't solve practically any problems because humans by and large aren't using intelligence now to solve problems.

Ok, he wasn't using a lot of intelligence in writing that, that much is a given.

Anyway, once we have abundant intelligence focusing on all the hard problems, and diving deep into unexplored frontiers like nanotech (there's plenty of room at the bottom) and space, reality in effect becomes programmable.

This doesn't mean a reality without minds.

Everything becomes achievable: all diseases, including personality disorders and old age, become curable, and the reach of humanity, as an amalgam of machine and human minds, expands seemingly without limit.

Some of the smaller things achievable very quickly are practically limitless sustainable energy, limitless sustainable materials and compute. This doesn't mean an increased ecological footprint; it means a negative ecological footprint. Sustainable superabundance doesn't mean exploiting the living world the way we have so far; it means sustainable superabundance for non-human ecologies as well.

When our world takes such a leap, we will certainly quickly find others in the universe who have made the same leap.

It is a misrepresentation of history to claim that during the "AI winters" of 1974-1980 and 1987-1993 people didn't believe in #AI.

Even during the long quiet period for artificial neural networks before AlexNet in 2012, people working in software and machine learning generally understood that neural networks were the way to go; it was simply difficult to tune shallow networks to perform well across different domains. Making them work required a lot of trial and error and deep experience, and there were numerous successes, but they didn't translate into wider adoption because the appetite for paying for that trial and error was limited.

Similarly, practically no one claimed there were no exoplanets before the first ones were found in the early 1990s. We knew they were there, and we knew we had the technology to discover them, but there simply wasn't enough capital around to focus specifically on finding them.

When the first exoplanets were found, research funding strangely behaved as if the discovery had changed pre-existing beliefs. Suddenly there was money for missions such as Kepler to scale up the search.

Whose beliefs changed? Why did this have an effect on funding priorities at all?

We see the same in deep learning, generative AI and large language models. Some findings were indeed surprising, but capital markets are behaving as if we hadn't known these capabilities were there to dig up, although we did. Now there is capital to do the research properly once again.

The same also applies to extrasolar life. Pretty much everyone knows there has to be non-Earth-based life out there, even in our galaxy, possibly even in our solar system (e.g. microbes in the subsurface oceans of icy moons). Yet we somehow behave as if we didn't know that.

Somehow our society goes through these hype cycles and winters not because of discovery and disillusionment, but because we live in two realities: what we know to be true, and what capital thinks is true.

It seems to me this is largely because the people who know aren't generally the people who decide where to allocate funds.

And at this very moment, the relevant capital is managed by people who act as if #AGI and the #TechnologicalSingularity won't happen.

Maybe we could improve this somewhat?

Who Said It: Grimes or Her AI Clone?

The musician and mother of Elon Musk's kids created an artificial intelligence version of herself, and it's honestly hard to tell which of them said what.

Jezebel

Why do so many "old beards" in #academia have such difficulty grasping the meaning and repercussions of the #TechnologicalSingularity we are currently living through, and of the impending #AGI?

I have thought about this a lot, because much of the inertia and misinformation seems to come from that source. It's certainly not because of a lack of education or intelligence. However, highly intelligent people are often the best at deceiving themselves.

I think it comes down to two things: social circles, and sunk costs in a specific branch of research.

These people who are now failing to see what is happening seem to be in very isolated social bubbles.

Even when they have networks, those networks are monocultural, limited to people who share the same opinions or hold none at all. You rarely see these people arguing about #AI on social media, for good reason.

They also have a public image to consider, so they can't engage in that as much. It is also easy for them to fall back on their own authority, if not in arguments then at least psychologically, detaching themselves from the discourse.

There are also sunk costs in specific lines of research, especially when a person is a specialist rather than a generalist. Such people always need to rationalize the next leg of research along the same line, so they have to tell everyone it is the only correct direction. In time, they become convinced of it themselves.

If you hired a lot of #DeepLearning sceptics, they would hire more deep learning sceptics in turn, and suddenly your organization is sceptical of deep learning at its core. This poison pill is difficult, if not impossible, to shake off.

@davebyrne, we basically have AGI tech, and pseudo-AGI deployed, but ChatGPT-4 cannot yet drive cars safely. There are plenty of other cognitive tasks it cannot yet do. It still loses to humans at chess, and performs even worse at go.

We can do a bit of integration and training and get there, though.

When we get there, it will be clear to everyone, as practically all subsequent AIs will be made by AIs. Human capabilities aren't approached asymptotically; they are exceeded decisively, faster than people can conceptualize.

We are already living the #TechnologicalSingularity.

On our route towards #AGI, along this accelerating #TechnologicalSingularity trajectory, the most burning question is what tomorrow looks like.

It is becoming more and more difficult to forecast ever shorter futures, but for now we still have some visibility. What I believe will happen:
- We will experience a huge growth in appetite for #compute, specifically #chips, #data and conveniently encoded #knowledge.
- While we can introduce #AI into our existing industries, businesses and militaries, we will hit a bottleneck where we cannot quite do what we'd like because there are too few robots everywhere. We'll probably start building robots like there's no tomorrow and renting them out to be controlled by AIs to construct new factories, logistics centers, military bases and all sorts of infrastructure more streamlined for an AI-based society.
- Many people will cede control of their lives, and any decisions within their power, to AIs. Those who do might fare better or worse, but they will do better and better over time as the systems become more powerful. Delegating day-to-day control to machines means there is less reason for humans to even be informed of the day-to-day operational details.
- Some companies will get a boost from not having to hire so much labor in the first place. I think this will be a stronger dynamic than layoffs, and it will eventually drive layoffs as well, as companies with more traditional structures lose out in competition to these new challengers.
- Prices of many things will fall, but at least initially, everything that bottlenecks further AI adoption will become scarce. Digital design becomes practically free, which means our world will fill up with beauty and purpose.

https://news.ycombinator.com/item?id=35302305

"My life is better if I just do what it says."

^ We are now at this point of the #TechnologicalSingularity

I have a few conversations going. My most productive is a therapy session with C... | Hacker News