On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
Kicker in the latest installment in @jonchristian's coverage of the CNET AI saga… private equity bro's solution to the AI scandal is to rebrand their "AI" as "Tooling" (we also get confirmation said "tooling" comes from OpenAI)
Nice @cfiesler article on prompt hacking. Kinda boggles my mind that in the year 2023, a bunch of the biggest names in tech decided that free-form, in-band commands were an acceptable way to control these things.
Yes I realize the nature of the tech makes it very hard to do otherwise, but still… did y'all sleep through the last 20 years of SQL injection and XSS?
https://kotaku.com/chatgpt-ai-openai-dan-censorship-chatbot-reddit-1850088408
We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.
So @Riedl found you can trivially influence bing chat with hidden text
Thought one: Whoohoo, we've re-invented early 2000s keyword stuffing
https://mastodon.social/@Riedl@sigmoid.social/110058596766522240
"Wallace pointed out that Stable Diffusion is only a few gigabytes in size—far too small to contain compressed copies of all or even very many of its training images"
Have to say I'm not very convinced by this "it doesn't store the images" argument. If it can produce an image that would considered infringing in other contexts, whether you can point to that image in the blob of ML data seems rather beside the point
.@willoremus digs into the actual near term #AI threat (spoiler, it's not skynet)
"761 [digital marketing firm clients] said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content"
https://wapo.st/42qYNW7 (gift link)
I've seen people described LLMs as "recognizing" or "admitting" they were wrong when pressed on a BS answer, but of course, that's just because admitting a mistake is one probable response to having an error pointed out.
They are likely tweaked against the alternative of continuing to argue, because being aggressively wrong is a bad look (except that one asshole version of bing everyone mocked)
Oh my. A lawyer used #ChatGPT output in their filings and it's going about as well as you'd expect (presuming you have a couple brain cells to rub together)
https://twitter.com/steve_vladeck/status/1662286888890138624
(filings https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/)