@marcosala

4 Followers
6 Following
25 Posts

European alternatives for digital products.

https://european-alternatives.eu/

European Alternatives is a project by Constantin Graf, which collects and analyzes European alternatives to digital services and products, such as cloud services and SaaS products.

We regularly receive advice and suggestions from European Alternatives users, so feel free to reach out!
An interesting resource; sharing in case it's of interest.

#DigitalRights #european #DigitalProducts #saas #hosting #adobe #ConstantinGraf #OpenSource

European Alternatives

We help you find European alternatives for digital services and products, like cloud services and SaaS products.

European Alternatives
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

Reuters
"What’s new is that with Generative AI in general, and Large Language Models in particular, we’ve discovered something really important – that sufficiently detailed syntactic analysis can approximate semantics. LLMs are exquisitely sophisticated linguistic engines, and will have many, many valuable applications – hopefully mostly positive – that will improve human productivity, creativity, and science."
Jerry Kaplan https://unchartedterritories.tomaspueyo.com/p/openai-and-the-biggest-threat-in/comment/44065995
Comment by Jerry Kaplan on Uncharted Territories

Tomas and friends: I usually avoid shooting off my mouth on social media, but I’m a BIG fan of Tomas and this is one of the first times I think he’s way off base. Everyone needs to take a breath. The AI apocalypse isn’t nigh!

Who am I? I’ve watched this movie from the beginning, not to mention participated in it. I got my PhD in AI in 1979 specializing in NLP, worked at Stanford, then co-founded four Silicon Valley startups, two of which went public, in a 35-year career as a tech entrepreneur. I’ve invented several technologies, some involving AI, that you are likely using regularly if not every day. I’ve published three award-winning or best-selling books, two on AI. Currently I teach “Social and Economic Impact of AI” in Stanford’s Computer Science Dept. (FYI, Tomas’ analysis of the effects of automation – which is what AI really is – is hands down the best I’ve ever seen, and I am assigning it as reading in my course.) May I add an even more shameless self-promotional note? My latest book, “Generative Artificial Intelligence: What Everyone Needs to Know,” will be published by Oxford University Press in Feb and is available for pre-order on Amazon: https://www.amazon.com/Generative-Artificial-Intelligence-Everyone-KnowRG/dp/0197773540. (If it’s not appropriate to post this link here, please let me know and I’ll be happy to remove it.)

The concern about FOOM is way overblown. It has a long and undistinguished history in AI, the media, and (understandably so) in entertainment – which, unfortunately, Tomas cites in this post. The root of this is a mystical, techno-religious idea that we are, as Elon Musk erroneously put it, “summoning the beast”. Every time there is an advance in AI, this school of thought (superintelligence, singularity, transhumanism, etc.) raises its head and gets far more attention than it deserves. For a somewhat dated but great deep dive on this, check out the religious-studies scholar Robert Geraci’s book “Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality”.

AI is automation, pure and simple. It’s a tool that people can and will use to pursue their own goals, “good” or “bad”. It’s not going to suddenly wake up, realize it’s being exploited, inexplicably grow its own goals, take over the world, and possibly wipe out humanity. We don’t need to worry about it drinking all our fine wine and marrying our children. These anthropomorphic fears are fantasy. They rest on a misunderstanding of “intelligence” (that it’s linear and unbounded), of “self-improvement” (that it can run away, as opposed to being asymptotic), and on the idea that we’re dumb enough to build unsafe systems and hook them up to the means to cause a lot of damage (which, arguably, describes current self-driving cars). As someone who has built numerous tech products, I can assure you that it would take a Manhattan Project to build an AI system that could wipe out humanity, and I doubt it would succeed. Even so, we would have plenty of warning and numerous ways to mitigate the risks. This is not to say that we can’t build dangerous tools, and I support sane efforts to monitor and regulate how and when AI is used, but the rest is pure fantasy. “They” are not coming for “us”, because there is no “they”. If AI does a lot of damage, that's on us, not "them".

There’s a ton to say about this, but just to pick one detail from the post: the idea that an AI system will somehow override its assigned goals is illogical. It would have to be designed to do this (not impossible… but if so, that’s the assigned goal). There are much greater real threats to worry about. For instance, that someone will use gene-splicing tech to make a highly lethal virus that runs rampant before we can stop it. Nuclear catastrophe. Climate change. All these are verifiable risks, not a series of hypotheticals and hand-waving piled on top of each other. Tomas could write just as credible a post on aliens landing.

What’s new is that with Generative AI in general, and Large Language Models in particular, we’ve discovered something really important – that sufficiently detailed syntactic analysis can approximate semantics. LLMs are exquisitely sophisticated linguistic engines, and they will have many, many valuable applications – hopefully mostly positive – that will improve human productivity, creativity, and science. It’s not “AGI” in the sense used in this post, and there’s a lot of reasonable analysis that it’s not on the path to this sort of superintelligence (see Gary Marcus here on Substack, for instance).

The recent upheaval at OpenAI isn’t some sort of struggle between evil corporations and righteous superheroes. It’s a predictable (and predicted!) consequence of poorly architected corporate governance and inexperienced management and directors. I’ve had plenty of run-ins with Microsoft, but they aren’t going to unleash dangerous and liability-inducing products onto a hapless, innocent world. They are far better stewards of this technology than many nations. I expect this awkward kerfuffle to blow over quickly, especially because the key players aren't going anywhere; they’re just changing cubicles.

Focusing on AI as an existential threat risks drowning out the things we really need to pay attention to, like accelerating disinformation, algorithmic bias, so-called prompt hacking, etc. Unfortunately, it’s a lot easier to get attention screaming about the end of the world than calmly explaining that, like every new technology, there are risks and benefits. It’s great that we’re talking about all this, but for God’s sake please calm down! 😉

Uncharted Territories
“Nothing in this universe is the name it bears”
https://www.recurse.com/social-rules
"The social rules are:
No well-actually’s
No feigned surprise
No backseat driving
No subtle -isms"
Social rules - Recurse Center

The RC social rules help create a friendly, intellectual environment where you can spend your energy on programming.

Recurse Center

The European Commission "discovers" the advantages of free software (#SoftwareLibero) for security (#sicurezza): it recommends that its staff use #Signal instead of #WhatsApp and other proprietary programs.
https://www.lastampa.it/tecnologia/news/2020/02/26/news/bruxelles-silura-facebook-e-raccomanda-ai-dipendenti-di-non-usare-whatsapp-e-messenger-1.38520481

Will they also tell citizens? And when will it be #LibreOffice's turn?

Brussels torpedoes Facebook and recommends that its employees not use WhatsApp and Messenger - La Stampa

The Commission's instructions praise Signal's features in terms of security, confidentiality, and privacy, owing to its effective open-source encryption system.

The Bitter Lesson
http://incompleteideas.net/IncIdeas/BitterLesson.html
"The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning."
Rich Sutton
The Bitter Lesson

2/2 "[...] It rejects an entire class of (essentially) classical theories without even having to mention quantum mechanics. And it turns out that the experimental results not only rule out the whole class of local, deterministic theories but also confirm the predictions of quantum mechanics. Abner Shimony has aptly called this kind of radical empirical resolution of what looks like a metaphysical problem 'experimental metaphysics'."
James T. Cushing
1/2 "Bell never worked out any local, deterministic theory. But, without ever going into the dynamical details, he showed that, in principle, no such theory can exist. The entire class was wiped out in a single stroke: a classic impossibility theorem. Moreover, Bell's theorem does not depend in any way on quantum mechanics. [...]"
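The impossibility result Cushing describes can be made concrete with the CHSH form of Bell's inequality. As a sketch (not part of the quoted text): for two measurement settings per side, $a, a'$ and $b, b'$, and correlators $E(\cdot,\cdot)$ between outcomes $\pm 1$,

```latex
% CHSH combination of correlators for settings a, a' (Alice) and b, b' (Bob)
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
\]
% Any local deterministic theory, whatever its dynamics, must satisfy
\[
  |S| \le 2
\]
% while quantum mechanics, for a singlet state at optimal angles, predicts
\[
  |S| = 2\sqrt{2} \approx 2.83
\]
```

The bound $|S| \le 2$ follows for every local deterministic theory at once, with no reference to quantum mechanics, which is exactly the "entire class wiped out in a single stroke" point; experiments measuring $S > 2$ then rule out the class and match the quantum prediction.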