A thorough breakdown of the kinds of personal data you likely have online and who is likely to use it and for what purpose: https://cardcatalogforlife.substack.com/p/so-what-if-they-have-my-data

#cardcatalog #hanaleegoldin #data #personaldata #privacy #security #personalprivacy

So What if They Have My Data?

Who's buying our personal information, what they're using it for, and how the system works behind the screen.

Card Catalog

This article gets straight to the heart of how the US government gets the techbros to collect your data as you go about your everyday life, then purchases that data to use against you.

The US government is sidestepping the Constitution, and it wants you to think this is inevitable and right.

#DataCollection
#personaldata
#AI

https://theconversation.com/us-government-ramps-up-mass-surveillance-with-help-of-ai-tech-data-brokers-and-your-apps-and-devices-277440

US government ramps up mass surveillance with help of AI tech, data brokers – and your apps and devices

To augment information about you that it collects directly, the US Government is buying less-regulated information harvested by cameras, cellphones and apps and sold on the commercial data market.

The Conversation

“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*



And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play (who’s buying our personal information, what they’re using it for, and how the system works behind the screen) and considers our options.


Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.

But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely: stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.

If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.

Most of us sense that something is off about all of this. In a 2023 survey, the Pew Research Center found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction.


[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context and discusses what we can, and cannot, do about it.]


The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.

This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.

One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.

The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.

This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.

California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.

Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.

Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.

Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.

The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.

The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.

So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.

So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next.


Eminently worth reading in full: “So What if They Have My Data?”

See also: “Why Do We Care So Much About Privacy?” (source of the image above), in which Louis Menand suggests that our concern should be with the “weaponization” of data.


Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security

###

As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?” advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo Marketing Head Karen Edwards (whose many awards for the work include a seat in the Advertising Hall of Achievement).

An early spot from the campaign


https://youtu.be/X2_XzGPqBJ0?si=VxM6vlzcR89uDOKr

#advertising #culture #data #DoYouYahoo #history #KarenEdwards #personalData #politics #privacy #security #society #Technology #television #Yahoo

«#Proton CEO warns global #ageVerification push will mean "the death of anonymity online"
Protecting children #online is crucial, but forcing every user to hand over their ID is a privacy #nightmare waiting to happen, according to the head of the #Swiss #privacy firm»

It's never about #childProtection; that's just an argument to convince people to voluntarily hand over their #personalData. In today's #Internet, people are the product and the #web is their #market.

👉 https://www.techradar.com/vpn/vpn-privacy-security/proton-ceo-warns-global-age-verification-push-will-mean-the-death-of-anonymity-online

Proton CEO warns global age verification push will mean "the death of anonymity online"

Protecting children online is crucial, but forcing every user to hand over their ID is a privacy nightmare waiting to happen, according to the head of the Swiss privacy firm

TechRadar

ADT Confirms Data Breach After ShinyHunters Extortion Threat

ADT confirmed a data breach after a threat from hackers known as ShinyHunters, who demanded an extortion payment. The breach exposed sensitive customer info, including names, phone numbers, addresses, and in some cases, dates of birth and Social Security numbers.

https://osintsights.com/adt-confirms-data-breach-after-shinyhunters-extortion-threat?utm_source=mastodon&utm_medium=social

#DataBreach #Shinyhunters #CustomerData #PersonalData #EmergingThreats

ADT Confirms Data Breach After ShinyHunters Extortion Threat

Learn about the ADT data breach, how ShinyHunters threatened extortion, and what information was stolen - read the full details now and take action to protect yourself.

OSINTSights

Ah, #France đŸ‡«đŸ‡·, the land of fine wine, exquisite cheese, and now, identity theft! đŸ·đŸ§€đŸ” Who knew government incompetence could pair so well with a side of personal data served on a silver platter? đŸœïžđŸŽ‰

https://techcrunch.com/2026/04/22/france-confirms-data-breach-at-government-agency-that-manages-citizens-ids/ #IdentityTheft #GovernmentIncompetence #PersonalData #WineAndCheese #HackerNews #ngated

France confirms data breach at government agency that manages citizens' IDs | TechCrunch

The French government agency that issues and manages national IDs, passports, and other documents announced that hackers stole the personal information of an unspecified number of citizens.

TechCrunch

👋 Save the date!

Our director Jessica Pidoux will be speaking on Friday, 8 May 2026 at the @towardsfairwork workshop ‘Beyond Transparency: Algorithmic Management and Socio-Technical Accountability in Platform Work’

This one-day expert workshop brings together researchers, unions, NGOs, platform representatives, and policy/practice actors to examine how algorithmic management shapes working conditions and representation in the German platform economy.

🔗 https://www.fairwork-am-workshop-xu.online/

#personaldata #algorithmicmanagement #workshop

Fairwork Algorithmic Management in Platform Work | Expert Workshop 2026

One-day expert workshop: algorithmic management, worker rights & platform governance. 8 May 2026, Potsdam. Register for in-person or hybrid participation.

Fairwork Algorithmic Management Workshop

Details of 500,000 people on #Biobank #health #database #exposed

#Technology minister Ian Murray said the information of all half a million members of the database was found listed for sale on the website #Alibaba.

The Biobank is a collection of data which has been used to achieve improvements in detection and treatment of #dementia, #cancers and #Parkinson’s.

https://www.thenational.scot/news/26046856.biobank-health-database-500-000-uk-hacked-sold/

#HealthCare #data #Patients #PersonalData

Thank you @Le_Parisien Le Parisien for mentioning @personaldataio and our Data4Mods project in Sunday's edition!

As a reminder, Data4Mods is a groundbreaking initiative that studied and mapped the content moderation and data labelling sector in Kenya and Nigeria. The moderators were able to exercise their data rights and analyse their personal data in order to build collective power, helping them to prove and to improve their working conditions.

🔗 https://personaldata.io/data4mods/

#personaldata #article #leparisien #data4mods

#DigitalInfrastructure đŸ—ïž Comprehensive, fast #Internet and a stable #cloud infrastructure remain fundamental prerequisites for #competitiveness.

#DataProtection 🔒 #DigitalPolitics must simultaneously enable #innovation without neglecting the protection of #personaldata.

#Education and #Skills đŸ§‘â€đŸ« #Digitaleducation is key to enabling #citizens to confidently use #newtechnologies and continuously grow in terms of these #technologies.

👉 https://DigitalPolitics.EU
👉 @gerrit #eicker

DigitalPolitics.EU: Shaping and Managing Digitalisation

DigitalPolitics.EU: Shaping political framework conditions to manage digitalisation in a socially and economically meaningful way.

eicker.BEratung