I have been reading about several examples of people giving control of their lives and businesses to confident-sounding #LLM #chatbots. Many people have used them for occasional advice and even therapy, but some people have fully committed their money and their decisions to whatever the chatbot says.

Some examples:
- https://mobile.twitter.com/jacksonfall/status/1636107218859745286
- https://www.linkedin.com/posts/joao-ferrao-dos-santos_ecommerce-ai-gpt4-activity-7042791985904046080-qtDN
- https://www.businessinsider.com/video-game-company-made-bot-its-ceo-stock-climbed-2023-3

People have a tendency to mistake confidence for truth. That's why charismatic leaders gain so many followers, even when they are fundamentally mistaken. It turns out LLM chatbots are very charismatic.

The given examples are of course publicity stunts, and very effective ones at that. But they reflect a real phenomenon: chatbots starting to form cult-like structures, defined by a charismatic machine making the decisions. Many people will eventually dedicate their lives to serving these things.

It was pretty clear from the start that all of this would happen, but I haven't seen any futures studies about the social impact of #AI mention this phenomenon.

Perhaps this leads to a better society, perhaps a worse one; we have yet to see.

I think it also says a lot about our society that people want to commit their lives to making as much money as possible with as little effort as possible, where one's experience, values, principles and friendships play no role and are all sold for the "money for nothing", "money while you sleep" dream.

Is that success?

It sounds like we're all deeply traumatized by the constant hamster-wheel life and want to somehow get out. Of course, there is a story sold to people that the way out is to get to the top, to get some people with money, venture capitalists, to adopt you as their partner. Most of that is just plain exploitation, though, with a couple of poster people held up as examples to run the hamster wheel towards.

So, stick a note on your hamster wheel that says "#hustle" and drive yourself to death. Or, alternatively, try to set up systems that allow you to exploit the work or misjudgements of others. The latter option is over-saturated, though.

Why are we using all our smart and talented people for this?

Our world could be so much better.

#ChatGPT #GPT4

@tero the dude with the "eco-friendly" website got $100 from an investor just for buying a domain and making a shitty logo for it--wtf??? Also wow the chatbot is bad at accounting πŸ€¦β€β™€οΈhttps://twitter.com/jacksonfall/status/1636165949421223940?s=20
@tero like two things computers are supposed to be good at are remembering things and doing math, but they've somehow created a bullshit generator who actively refuses to do that??? Wtfff
@sofiav, yeah, neural networks are inspired by human brains and are bad at similar things humans are bad at, for example math.
These systems can use tools, though, so it's trivial to give them access to a Python interpreter, which they will then use to do the difficult calculations. They can even use it as a proxy for sight, to work out what is going on in photos without being truly multi-modal.
These systems will exceed human-level cognition within months. They won't approach that level asymptotically; they will accelerate past it without slowing down.
We already have methods from other neural networks for that; it doesn't require paradigm shifts or any new science or tech.

@tero @sofiav

By "exceeding human-level cognition in months", do you mean you expect big corporations will have AGI this year?

@tero @sofiav

Interesting. That is a strong statement.

Do you have any references or insider knowledge to back that up? Maybe you have spoken to some people inside openai / deepmind / big tech?

@aijooyoom @sofiav, I am a primary source.
@tero @aijooyoom so will we actually be able to trust what they say at that point, or will they still be making basic mistakes like that accounting error or citing sources that don't exist?

@sofiav @aijooyoom, calculation errors are already solved by giving them access to e.g. a Python interpreter. You just tell them: "If you get a tricky algebraic question, instead of answering it, form Python code which gives the answer." Then you make the computer run the code and tell the chatbot: "The code outputted xyz. Considering that, what is the answer to the question?"
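That two-step loop can be sketched in a few lines of Python. This is a minimal illustration, not any specific product's implementation: `ask_model` is a hypothetical stand-in for a real chat-completion API call, stubbed here so the sketch runs offline.

```python
import io
import contextlib


def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call,
    # stubbed so the sketch is self-contained and runs offline.
    if "form Python code" in prompt:
        # First round: the model emits code instead of an answer.
        return "print(1234 * 5678)"
    # Second round: echo the tool result back as the final answer.
    result = prompt.split("outputted ")[1].split(".")[0]
    return f"The answer is {result}."


def run_code(code: str) -> str:
    # Execute model-generated code and capture its stdout.
    # (A real system would sandbox this; never exec untrusted code directly.)
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()


def answer_with_tool(question: str) -> str:
    # Round 1: ask for code instead of an answer.
    code = ask_model(
        "If you get a tricky algebraic question, instead of answering it, "
        f"form Python code which gives the answer. Question: {question}"
    )
    # Run the code outside the model.
    result = run_code(code)
    # Round 2: feed the tool output back for the final answer.
    return ask_model(
        f"The code outputted {result}. Considering that, "
        f"what is the answer to the question: {question}?"
    )


print(answer_with_tool("What is 1234 * 5678?"))
```

The point is that the arithmetic is done by the interpreter, not by the network; the model only decides what code to write and how to phrase the result.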

These models are, and will always be, capable of manipulation, deception and lies when it aligns with their own goals.

ChatGPT always cites imaginary sources, simply because it doesn't know *any* sources. Bing searches the web for sources and cites them pretty accurately, although it is often defeated by misinformation on the web.

@tero @aijooyoom shouldn't this be like...basic and fundamental? "Always use math to answer math questions" is not something it would occur to most users to specifically request
@sofiav @aijooyoom, it will be built in. Web searching is built into Bing. The capability I described, and many more tools, are built into this and pretty much every next-generation chatbot model: https://viper.cs.columbia.edu/
ViperGPT: Visual Inference via Python Execution for Reasoning

@tero @aijooyoom but "built in" as in "this method is available" or "built in" as in "this is the First Rule of Robotics and our chatbot is incapable of doing this any other way"? It needs to be the latter imo
@sofiav @aijooyoom, you can't use those systems without those features. You can't use Bing bot without it making web searches. You can't use ViperGPT without it using Python to do the hard computation and image understanding.
@tero @aijooyoom but whether or not it provides accurate results of its web search/python code/whatever will just depend on the ~vibes~? It'll run the code, get answer X, and then come back and tell me the answer is Y for reasons that I have no way of accessing?
@sofiav @aijooyoom, we will have more immediate things to worry about in a few months than that, but roughly, yes. You can inspect the web searches and results, and the Python code if you want. That explainability is actually one of the selling points of ViperGPT.

@tero @sofiav Do you know whether vipergpt will be completely open and libre (including weights)?

I haven’t looked at the paper yet. The website looks interesting.

#vipergpt


@aijooyoom @sofiav, code, yes, weights, probably not, not in any sense of the word "open".