Anyone interested in a wild ride?

Here is the CEO of Gab attempting to create moral panic over the fact that #ChatGPT is programmed/trained to avoid sensitive topics. Of course this must mean that the "right" is being canceled by AI and must fight back.

"Christians Must Enter the AI Arms Race"

1/2

Still reading?

2/2

As an aside, I've noticed a concerning trend: some prominent figures in Silicon Valley are adopting right-wing rhetorical strategies and have started talking about "AI Safety" as cancelation.

3/2

@Riedl This, this is my biggest concern. There will always be right-wing trolls like said Gab CEO, and while we absolutely cannot ignore them—we ignored them for the better part of the 2010s and are now reaping the consequences of that inaction—it’s folks like certain high-level Meta or Twitter or OpenAI executives spouting these same talking points that will do far more and lasting damage than any of our collective [in]action could ever do.

@magsol @Riedl they’ve already infiltrated tech— who at Apple hired Antonio Martinez in the first place? Elon Musk has definitely empowered this wing of tech bros, and they’re making incremental gains

Seems inevitable, though, that in about 3 years, when these right-wing nuts have figured out AI, the internet will be filled with bots spewing this garbage, and there will likely need to be “counter bots.”

I wonder if the internet will just become unusable for a period of time…?

@Riedl sad because already “AI safety” is problematic itself

@Riedl Many aspects of the recent AI advances were totally unexpected, but "extremists seizing AI tech to promote their views" is the most predictable one.

Actually, while I find the way OpenAI tries to solve alignment questionable, it comes 100% from anticipating exactly this kind of actor.

They remember Tay.

In the end, companies like Gab are going to have their own LLMs. OpenAI's hacks to block certain prompts just delay them by 1-2 years.

@ktp_programming authoritarians will be drawn to tools that reinforce power structures. So far, AI is far easier to use to reinforce (dare I say “conserve”?) existing power structures than it is to break down traditional barriers of access and opportunity.
@ktp_programming the political Right doesn’t need to build an alternative to ChatGPT because GPT3 will do what they want—ChatGPT is just a variation. All they really need is to continue to make inroads into tech elite circles, something well underway.

@Riedl I must say I don't see obvious ways AI breaks down traditional barriers (though translation does lower the language barrier considerably), but I don't see obvious ways it reinforces them either.

Propaganda will be easier to generate but also easier to flag and filter out.

@Riedl @ktp_programming Even more likely: smart authoritarians creating tools designed to maintain control.

@Riedl not sure if you get notifications when I manually quote-tweet, so I am also manually notifying you: https://hci.social/@chrisamaphone/109829893098987011

(if this is not a desirable form of interaction for you though please let me know and I will delete it!)

chris martens (they/them) (@[email protected])

this is an instance of a more general pattern i see in political rhetoric around tech research:

1: leftists raise concerns about sociotechnical systems

2: centrists/liberals get funding for a watered-down, ethics-washing approach that purports to address the above but really doesn’t

3: rightists engage in reactionary rhetoric mocking 2

news coverage frames “balanced coverage” as representing 2 and 3 but totally ignores 1. https://sigmoid.social/@Riedl/109763451846720428

@chrisamaphone I think I don't get notified for this. Thanks for the heads-up. I don't have any concerns.
@Riedl Yikes. A lot of people do seem to struggle with the delta between 'erosion of privilege' and 'being discriminated against'. They're not the same and we really should be alive to people conflating them.

@Riedl perhaps I’m being naive, but the last panel captures why I’m not terribly concerned about this specifically. Are the best applicants going to apply to this post? No — not only due to moral concerns (some will object, but others will turn a blind eye), but because it won’t be terribly lucrative. Like Gab itself, it’s bad business and will be relegated to a tiny audience. No one wants Gab RSUs.

I’d be worried if these training and modeling changes are forced by the state.