Echoes of the Caliphate

Under a sky bruised by a burning sunset, the Mongol tides converge upon the great circular walls of Baghdad. The golden domes and turquoise-tiled minarets, once symbols of a flourishing Golden Age, now stand silhouetted against rising plumes of smoke and the glint of a thousand spears. It is a cinematic glimpse into the end of an era, mirrored in the dark, steady waters of the Tigris.

Commemorating the Siege of Baghdad, which concluded on this day, February 10, 1258: a catastrophic turning point that marked the end of the Islamic Golden Age.

This post is 100% AI-generated.

#z_image #AIart #SiegeOfBaghdad #MongolEmpire #HistoricalArt #MiddleEastHistory #GoldenAge #CinematicRealism #AtmosphericArt #EpicScale #GenerativeAI #LLM #OnThisDay #History

"The largest user study of large language models (LLMs) for assisting the general public in medical decisions has found that they present risks to people seeking medical advice due to their tendency to provide inaccurate and inconsistent information.

A new study from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, carried out in partnership with MLCommons and other institutions, reveals a major gap between the promise of large language models (LLMs) and their usefulness for people seeking medical advice. While these models now excel at standardised tests of medical knowledge, they pose risks to real users seeking help with their own medical symptoms.

Key findings:

No better than traditional methods

Participants used LLMs to identify health conditions and decide on an appropriate course of action, such as seeing a GP or going to the hospital, based on information provided in a series of specific medical scenarios developed by doctors. Those using LLMs did not make better decisions than participants who relied on traditional methods like online searches or their own judgment."

https://www.oii.ox.ac.uk/news-events/new-study-warns-of-risks-in-ai-chatbots-giving-medical-advice/

#AI #GenerativeAI #LLMs #Chatbots #MedicalAdvice

While I was working with GitHub Copilot today, it misspelled a word. I did a deep dive using various search engines and found only a dozen instances of that spelling, all of which originated from word salad; half were identical, just posted on different websites.

#GenerativeAI #ModelCollapse #AI

EyeingAI (@EyeingAI)

On the Kling 3.0 demo (within Arcads): reports that Kling 3.0, running in Arcads, generated an ad-style video with full voice synthesis and camera tracking, delivering highly realistic results. Suggests it could replace traditional production roles such as filming, acting, editing, lighting, voiceover, and motion design.

https://x.com/EyeingAI/status/2021223556311642321

#videogeneration #kling3 #arcads #generativeai

This didn't feel like AI. Kling 3.0 in Arcads gave me a fully voiced, camera-tracked, ad-style video that felt disturbingly real. It replaces you. Your camera. Your actor. Your editor. Your lighting guy. Your voiceover artist. Your motion designer. Tell me it’s not over 👇

"To date, the security measures implemented for LLM-based tools have not kept pace with the growing risks. In its response to The New York Times’ request for chat histories, Open AI indicated that it is working on “client-side encryption for your messages with ChatGPT” — yet even here the company hints at deploying “fully automated systems to detect safety issues in our products,” which sounds very much like client-side scanning (CSS). CSS, which involves scanning the content on an individual’s device for some class of objectionable material, before it is sent onwards via an encrypted messaging platform, is a lose-lose proposition that undermines encryption, increases the risk of attack, and opens the door to mission creep.

By contrast, the open source community has made positive strides in prioritizing confidentiality. OpenSecret’s MapleAI supports a multidevice end-to-end encrypted AI chatbot, while Moxie Marlinspike, co-author of Signal’s E2EE protocol, has launched ‘Confer,’ an open source AI assistant that protects all user prompts, responses, and related data. But for now at least, such rights-respecting solutions remain the exception rather than the norm.

Unbridled AI adoption combined with depressingly lax security practices demands urgent action. The security issues associated with advanced AI tools are the consequences of deliberately prioritizing profit and competitiveness over the security and safety of at-risk communities, and they will not resolve on their own. While we would love to see companies self-correct, governments should not shy away from demanding that these companies prioritize security and human rights, especially when public money is being spent to procure and build ‘public interest’ AI tools. In the meantime, we can all also choose to support open, accountable rights-respecting alternatives to the big name models and tools where possible."
https://www.accessnow.org/artificial-insecurity-compromising-confidentality/

#AI #GenerativeAI #LLMs #Privacy #CyberSecurity #OpenSource #Encryption

If you think "generative AI can't do...", you're probably wrong

If you think "generative AI can do...", you're probably still wrong

If you wait three months, some things will move from can't to can.

#GenerativeAI #GenAI #AI #MachineLearning