Ethics of psychographic targeting in advocacy: using values, attitudes, and behaviors to microtarget messages. Will examine risks of manipulation, transparency gaps, and consent issues, drawing on cases like Cambridge Analytica and Meta's FTC consent decree.
Help I still need: choosing the best sector focus; finding post–Cambridge Analytica examples; framing enforceable solutions. AIA HAb SeCeNc Hin R ChatGPT 4o v1.0 @jjsylvia

@JLally7 Can you clarify what "in advocacy" means here? Are you talking about a nonprofit?

Also, I would say that you need to make sure you've narrowed this enough -- you've mentioned multiple cases, but remember this is meant to be just a single case study. It sounds like you want a single case like Cambridge Analytica, but one that happened later? Something like this? https://techcrunch.com/2024/12/13/controversial-eu-ad-campaign-on-x-broke-blocs-own-privacy-rules/

@jjsylvia Focusing my case study on California Proposition 22 (2020): examining the ethics of psychographic microtargeting in ballot-measure campaigns, and whether stricter disclosure rules should apply to them than to commercial marketing.

Q1: Should campaigns be banned from using voters' personality and values data without explicit consent?
Q2: Should billion-dollar, corporate-funded campaigns face stricter ad rules than grassroots efforts? AIA HAb SeCeNc Hin R ChatGPT 4o v1.0

@JLally7 This looks good -- much more narrowly targeted. Good work! Looking forward to seeing the full case write-up.