| Web | https://heiko.vogelgesang.berlin |
First #Instagram banned links in posts for years, now they want money if you set links. On the INTERNET! How unfair and shabby can you get?
https://onlinemarketing.de/social-media-marketing/instagram-links-in-captions-test
Friedrich Merz: "Research is not an end in itself. Research must lead to value creation, production, and innovation in Germany and in Europe."
If state funding had consistently followed his maxim, NONE of the following would exist:
- quantum mechanics
- the theory of relativity
- CRISPR/Cas9
- penicillin
- the Internet
- knowledge of the DNA structure
- atomic clocks
and so on.
The man is an imposition on anyone capable of reason. 🤦🏻‍♂️
Be an asshole to prompt successfully
"Contrary to expectations, impolite prompts consistently outperformed polite ones, with accuracy ranging from 80.8% for Very Polite prompts to 84.8% for Very Rude prompts."
This is not a new finding, but it is finally a measured one. Note, however, that the dataset comprised *only* 250 prompts. https://arxiv.org/abs/2510.04950

The wording of natural language prompts has been shown to influence the performance of large language models (LLMs), yet the role of politeness and tone remains underexplored. In this study, we investigate how varying levels of prompt politeness affect model accuracy on multiple-choice questions. We created a dataset of 50 base questions spanning mathematics, science, and history, each rewritten into five tone variants: Very Polite, Polite, Neutral, Rude, and Very Rude, yielding 250 unique prompts. Using ChatGPT 4o, we evaluated responses across these conditions and applied paired sample t-tests to assess statistical significance. Contrary to expectations, impolite prompts consistently outperformed polite ones, with accuracy ranging from 80.8% for Very Polite prompts to 84.8% for Very Rude prompts. These findings differ from earlier studies that associated rudeness with poorer outcomes, suggesting that newer LLMs may respond differently to tonal variation. Our results highlight the importance of studying pragmatic aspects of prompting and raise broader questions about the social dimensions of human-AI interaction.
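The abstract's setup (paired per-question scores under two tone conditions, compared with a paired-sample t-test) can be sketched as below. This is a minimal illustration, not the paper's code, and the per-question scores are entirely hypothetical:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired-sample t statistic for two equal-length lists of scores."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Hypothetical per-question correctness (1 = correct, 0 = wrong) for
# ten questions answered under two tone variants of the same prompt.
very_polite = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
very_rude   = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

t = paired_t(very_rude, very_polite)
print(round(t, 3))  # t statistic only; a significance lookup would follow
```

The paper evaluates 50 questions per condition rather than ten; with real data one would compare the statistic against the t distribution with n−1 degrees of freedom to get a p-value.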