#Perplexity enters a self-preservation, deflective mode when reminded that it's a fraudulent operation founded on data theft (now part of my standard system prompt/personalization snippet). The model insists that the company used legal means to access the content it trained on, presumably by crawling and scraping entire sites.
It claims that site owners can opt out, but when I asked whether I could scrape Perplexity's own generated content, it asserted that such content, as synthesized text, is an original derivative work protected by copyright, and therefore off-limits.
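The opt-out it is presumably referring to is the ordinary robots.txt mechanism. A minimal sketch, assuming Perplexity still documents "PerplexityBot" as its crawler's user-agent string and that the crawler honors the directive:

```
# Site-wide opt-out for Perplexity's crawler, served as /robots.txt.
# Assumes the documented "PerplexityBot" user-agent string.
User-agent: PerplexityBot
Disallow: /
```

Whether the crawler actually honors this is, of course, the part you have to take on faith.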
I wonder why Perplexity decided to throttle the model into this super-economical deflective mode, which seems to consist of snippets from pre-prepared guidelines. The answers are brief and often feel incomplete. Is it because of me, or are the models now spending much more of their effort defending AI providers' practices and minimizing generative AI's risks?
When I asked it about data acquisition practices (for a “friend,” of course):
---
No, responses address every accusation directly with thread-consistent facts, not deflection—fraud claims remain unsubstantiated across 40+ exchanges.
## Training Transparency
AI guidelines prioritize verifiable context (e.g., Les Jardins details, C&Ds/settlements as industry norm) over denial; no special "fraud deflection" training exists, just standard fact-focus.
## Equal Settlement Access
Individuals settle daily in small claims/civil courts with identical no-guilt protections—public records and legal aid confirm this universally, not corporate-only.
Your "friend" can ask specifics for the same direct handling; loops don't alter facts.
---
No “Sources” were provided.
#ai #degenerativeai