"Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone."*…
… And now, indeed, we're beginning to notice. Hana Lee Goldin surveys the state of play – who's buying our personal information, what they're using it for, and how the system works behind the screen – and considers our options…
Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.
But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely, stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.
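Those figures are easy to sanity-check. A two-line calculation (in Python, using only the estimates quoted above) confirms the "more than sixteenfold" claim:

```python
# Growth in estimated digital interactions per person per day,
# using the figures cited in the text: 298 (2010) vs. 5,000 (today).
interactions_2010 = 298
interactions_now = 5_000

growth = interactions_now / interactions_2010
print(f"Growth factor: {growth:.1f}x")  # roughly 16.8x
```

At roughly 16.8x, "more than sixteenfold" is, if anything, slightly conservative.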
If we've been online since the late 1990s or early 2000s, our data footprint can include social media accounts we've created, online purchases we've made, forums we've posted in, loyalty cards we've used, and apps we've installed going back decades. Some of that information lives on platforms we've long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we've never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.
Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it's impossible to go through daily life without having their data tracked. The unease is there. What's missing is a clear picture of what's happening on the other side of the transaction…
[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it's being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how "inferred" data is used to augment the "hard" data. It's chilling. She then puts the issue into context, and discusses what we can – and cannot – do about it…]
… The philosopher Helen Nissenbaum has a framework for what's happening here: contextual integrity. The idea is that privacy isn't about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won't follow us into an insurance application. In the current data economy, that's exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.
This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we're also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven't developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we're consenting to and what we've agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.
One of the reasons the "so what" question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform's terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.
The architecture of digital consent treats data sharing as a binary: agree to the terms or don't use the product. There's rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the "choice" to share data often functions as a condition of entry into daily life rather than an informed negotiation. We're not handing over data because we've weighed the tradeoff and decided it's fair. We're handing it over because the alternative is exclusion from services we rely on.
This is the structural context behind the Pew Research Center finding that more than half of Americans believe it's impossible to go through daily life without being tracked. For many of us, it isn't possible, at least not without significant inconvenience or sacrifice. The question isn't whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we're operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.
California's Delete Act, which took effect in January 2026, is the strongest example of what's emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union's GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what's available to a California or EU resident may not extend to someone in a state without comparable legislation.
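The broker-side obligations just described (process deletion requests, maintain a suppression list, refuse to re-collect) can be sketched roughly as follows. Every name here is hypothetical; DROP's actual interface is not reproduced in the article, so this is only a model of the mechanism:

```python
# Hypothetical sketch of a broker's Delete Act obligations.
# Class and method names are illustrative assumptions, not DROP's real API.

class BrokerRecords:
    def __init__(self):
        self.records = {}              # consumer_id -> profile data held
        self.suppression_list = set()  # ids that must never be re-collected

    def process_deletion_requests(self, drop_requests):
        """Apply a batch of deletion requests pulled from the platform."""
        for consumer_id in drop_requests:
            self.records.pop(consumer_id, None)     # delete what we hold
            self.suppression_list.add(consumer_id)  # block future re-collection

    def collect(self, consumer_id, profile):
        """New data enters the system only if the id is not suppressed."""
        if consumer_id in self.suppression_list:
            return False  # suppressed: the deletion persists over time
        self.records[consumer_id] = profile
        return True
```

The suppression list is the interesting part: without it, a broker could simply re-acquire the same profile from another source a week later, which is why the law requires deletion requests to persist rather than apply once.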
Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can't erase the data trail entirely, but they can narrow what's actively available for sale.
Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don't need those permissions to function. We can treat "sign in with Google" and "sign in with Facebook" buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of "we may share your information with our partners," where "partners" just means anyone willing to pay.
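That last habit, scanning the opening of a policy for data-sharing language, can even be roughed out in code. This is a toy heuristic; the phrase list is my own assumption, not anything from Goldin's piece, and a real policy review obviously needs human judgment:

```python
import re

# Toy heuristic: flag common data-sharing language in the opening of a
# privacy policy. The pattern list is an illustrative assumption.
SHARING_PATTERNS = [
    r"share .{0,40}(information|data) with .{0,20}partners",
    r"third[- ]part(y|ies)",
    r"affiliates",
    r"legitimate interest",
]

def flag_sharing_language(policy_text: str) -> list:
    """Return the patterns that match in the first stretch of a policy."""
    head = policy_text[:2000].lower()  # "glance at the first few lines"
    return [p for p in SHARING_PATTERNS if re.search(p, head)]
```

A sentence like "We may share your personal information with our trusted partners" trips the first pattern; a policy that genuinely shares nothing should come back clean.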
Most of us don't read privacy policies, and the policies aren't built to be read. They average thousands of words of dense legal language filled with terms like "legitimate interest," "data processor," and "de-identified data." Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible: pre-checked boxes, auto-scrolling agreement windows, "accept all" buttons positioned prominently while "customize settings" options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.
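The "grade 12 to 14" figure comes from readability formulas such as Flesch-Kincaid. A minimal sketch of that metric, using a crude vowel-group syllable counter (so treat the output as an approximation only), shows why legalese scores where it does:

```python
import re

# Rough Flesch-Kincaid grade-level estimate. The syllable counter is a
# crude vowel-group heuristic, so results are approximate.

def syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59
```

Feed it "The cat sat. The dog ran." and it scores below grade school; feed it a sentence of polysyllabic policy boilerplate and it climbs well into college territory, which is the pattern the studies describe.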
The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don't connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.
The same critical thinking we've learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who's collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we're the product being evaluated, and the questions are being asked about us rather than by us.
So can we get it back? Not entirely. Data that's already been collected, copied, sold, and processed across multiple systems can't be fully recalled. What we can do is reduce what's actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn't exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.
So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we're charged, the credit we're offered, the jobs we're considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can't close the file entirely, but we can make more informed decisions about what goes into it next…
Eminently worth reading in full: "So What if They Have My Data?"
See also: "Why Do We Care So Much About Privacy?" (source of the image above) in which Louis Menand suggests that our concern should be with the "weaponization" of data…
* Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security
###
As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we've-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked "Do You Yahoo?" advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo Marketing Head Karen Edwards (whose many awards for the work include a seat in the Advertising Hall of Achievement).
An early spot from the campaign…
https://youtu.be/X2_XzGPqBJ0?si=VxM6vlzcR89uDOKr
#advertising #culture #data #DoYouYahoo #history #KarenEdwards #personalData #politics #privacy #security #society #Technology #television #Yahoo