While we recently called on #Google to release a #Gemini safety report, #OpenAI has launched GPT‑4.1 without a safety report of its own. Are major #AI players now focusing solely on performance instead of user protection? #AISafety #AIEthics #ChatGPT #GPT41

OpenAI ships GPT-4.1 without a safety report
Bluesky

OpenAI pushes boundaries with the debut of ChatGPT-4.1, but the lack of a safety report raises eyebrows. Is progress worth potential risk? Let's discuss the balance between AI advancement and safety. #OpenAI #ChatGPT #AISafety #TransparencyInTech
https://www.squaredtech.co/chatgpt-4-1-launches-without-safety-report?fsp_sid=2027
OpenAI’s ChatGPT-4.1 Launches Without Safety Report

OpenAI's ChatGPT-4.1 debuts with enhanced capabilities but omits a safety report, raising concerns about transparency and AI safety practices.

SquaredTech

US Marines tested generative AI, like ChatGPT, for battlefield intel analysis. This Pentagon push sparks debate on AI reliability in high-stakes military ops.

#GenAI #MilitaryTech #AISafety

"OpenAI has slashed the time and resources it spends on testing the safety of its powerful artificial intelligence models, raising concerns that its technology is being rushed out without sufficient safeguards.

Staff and third-party groups have recently been given just days to conduct “evaluations”, the term given to tests for assessing models’ risks and performance, on OpenAI’s latest large language models, compared to several months previously.

According to eight people familiar with OpenAI’s testing processes, the start-up’s tests have become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks, as the $300bn start-up comes under pressure to release new models quickly and retain its competitive edge."

https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8

#AI #GenerativeAI #OpenAI #AISafety #ResponsibleAI

OpenAI slashes AI model safety testing time

Testers have raised concerns that its technology is being rushed out without sufficient safeguards

Financial Times


Financial Times: OpenAI slashes AI model safety testing time. “OpenAI has slashed the time and resources it spends on testing the safety of its powerful artificial intelligence models, raising concerns that its technology is being rushed out without sufficient safeguards. Staff and third-party groups have recently been given just days to conduct ‘evaluations’, the term given to tests for […]”

https://rbfirehose.com/2025/04/11/financial-times-openai-slashes-ai-model-safety-testing-time/

OpenAI has significantly reduced safety testing time for new AI models like o3, raising alarms among testers that powerful tech is being rushed out under competitive pressure.

#AI #GenAI #OpenAI #AISafety #LLMs #AIEthics #AIModels #MachineLearning #DeepLearning

https://winbuzzer.com/2025/04/11/openai-cuts-ai-safety-testing-time-sparking-concerns-amid-model-launch-rush-xcxwbn/