The creator of an open source genetic database is shutting it down and deleting all the data.
It turns out the largest use case for direct-to-consumer (DTC) genetic data was not biomedical research or big pharma R&D; it was law enforcement.
Given the rise of authoritarian governments, the project is now dead.
When Your Threat Model Is Being a Moron
https://www.404media.co/when-your-threat-model-is-being-a-moron-signal/
What can we do to defend ourselves against threats like the Israeli pager supply chain attack?
Well, for starters, if your threat model includes state actors using shell companies to sell you custom-made exploding electronics in bulk, you should stop getting your security advice from social media.
Following on from LinkedIn trying to automatically opt everyone in to having their data used to train its AI models: after discussions with the ICO, this has been 'suspended' https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2024/09/our-statement-on-changes-to-linkedin-ai-data-policy/
Which more than likely means we'll have to go through the same process of turning it off again at some point in the future, only this time they might actually tell us they're planning to do it first.
"We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO."
The language used by Google in the blog post explaining their decision to distrust Entrust root certificates from the 1st November is absolutely brutal:
"Over the past several years, publicly disclosed incident reports highlighted a pattern of concerning behaviors by Entrust that fall short of the above expectations, and has eroded confidence in their competence, reliability, and integrity as a publicly-trusted CA Owner."
"Over the past six years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports. When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the Internet ecosystem, it is our opinion that Chrome’s continued trust in Entrust is no longer justified." https://security.googleblog.com/2024/06/sustaining-digital-certificate-security.html
Two great articles on large language models and hallucinations: the first, from MIT Technology Review, asks why LLMs hallucinate; the second, a paper in Ethics and Information Technology, asks whether there’s a better word to describe hallucinations.
I really like both articles. The takeaway from the first – “It’s all hallucination, but we only call it that when we notice it’s wrong. The problem is, large language models are so good at what they do that what they make up looks right most of the time. And that makes trusting them hard.” – is expanded on greatly by the work of three philosophers from the University of Glasgow in “ChatGPT is Bullshit”, based on the seminal work by moral philosopher Harry Frankfurt in his book “On Bullshit”.
In it they argue that a computer program can’t be concerned with truth; it is designed to produce output that merely looks true. As such, LLMs aren’t really lying or hallucinating (both acts require some concern for the truth of a statement) but bullshitting. And if this term is used to describe the output, people are more likely to check it rather than blindly accept it. As the article says: “Like the human bullshitter, some of the outputs will likely be true, while others not. And as with the human bullshitter, we should be wary of relying upon any of these outputs.”
https://link.springer.com/article/10.1007/s10676-024-09775-5
https://www.technologyreview.com/2024/06/18/1093440/what-causes-ai-hallucinate-chatbots/

From the paper’s abstract: “Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.”