Just finished adding lights and a nose to my Sea Duck from Disney’s TaleSpin and I’m really happy with how it turned out. Every little detail brings it closer to the original and makes all the time spent worth it.

This is a personal fan-made project. All rights to TaleSpin and the Sea Duck design belong to Disney. No copyright infringement intended. Not for commercial use.

#TaleSpin #SeaDuck #DisneyFanArt #3DArt #DetailMatters #MakerLife #CustomModel #PassionProject #RetroPlane #ToyModel #Nostalgia
Side by side: the colorful Sea Duck I spent days printing, assembling, and perfecting – and the quick white version I was given as a reference.

There’s a world of difference between a basic 3D print that takes hours… and one that takes days.

Time, skill, care, and material choices matter – and it shows ✨

This is a personal fan-made project. All rights to TaleSpin and the Sea Duck design belong to Disney. No copyright infringement intended. Not for commercial use.

#TaleSpin #SeaDuck #DisneyFanArt #3DArt #DetailMatters #MakerLife #SlowMade #CustomModel #BeforeAndAfter #PassionProject #ColorfulBuild #FDMprinting #ToyModel #RetroPlane

There are many real-world situations where small initial differences can grow into very large differences out of pure chance.
Since we are on a social network, let's create a toy model* where a number of posts all have the same probability of being reposted/shared/boosted by anyone who sees them. Since the more people see a post, the more people have a chance of boosting it, the posts with more visibility are also the ones likely to gain even more visibility. So small initial fluctuations (just one or two extra boosts at the beginning) can lead a post to skyrocket in popularity, even though it is not intrinsically "better" than any of the others.
If we simulate this process numerically and make a histogram of the result (see the sketch below), we see that the distribution of boosts per post rapidly grows a heavy tail, with most posts getting no visibility whatsoever and a few getting a LOT more than the average.
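
A minimal sketch of this simulation in Python (the numbers of posts and boosts are arbitrary choices for illustration, not part of the model itself):

```python
import random
from collections import Counter

random.seed(7)
N_POSTS = 1000       # posts competing for attention
N_BOOSTS = 100_000   # total boost events to simulate

# Pólya-urn trick: the urn holds one "visibility token" per post, so
# drawing a token uniformly at random picks a post with probability
# proportional to its current visibility (one extra token per boost).
urn = list(range(N_POSTS))
boosts = Counter()

for _ in range(N_BOOSTS):
    post = random.choice(urn)   # more tokens -> more likely to be drawn
    boosts[post] += 1
    urn.append(post)            # the boosted post gains visibility

counts = [boosts[p] for p in range(N_POSTS)]
print("average boosts per post:", sum(counts) / N_POSTS)
print("most boosted post:", max(counts))
# Crude histogram: most posts sit near zero, a few are far above average.
for lo, hi in [(0, 10), (10, 100), (100, 1000), (1000, 10**6)]:
    n = sum(lo <= c < hi for c in counts)
    print(f"{n:4d} posts with {lo}-{hi} boosts")
```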
#ITeachPhysics #ProbabilityTheory #ToyModel

* In #Physics jargon, a "toy model" is a very simple (often unrealistic) model which nevertheless captures the essence of a problem, without being burdened by all the real-world complications. If you have ever heard of spherical cows in a vacuum, that is a toy model!

Analyzing And Editing Inner Mechanisms Of Backdoored Language Models

#ResearchHighlights

"We can successfully insert a weak backdoor mechanism in the benign model, even without also editing the embeddings of the trigger words."

"Our framework can reverse-engineer backdoor mechanisms in toy and large models for the first time, scale the strength of the backdoor mechanism ..."

https://arxiv.org/abs/2302.12461

#ai #llm #pcpablation #mlp #toymodel #largemodel #backdoor #backdooredlanguagemodel #chatgpt

Analyzing And Editing Inner Mechanisms Of Backdoored Language Models

Poisoning of data sets is a potential security threat to large language models that can lead to backdoored models. A description of the internal mechanisms of backdoored language models and how they process trigger inputs, e.g., when switching to toxic language, has yet to be found. In this work, we study the internal representations of transformer-based backdoored language models and determine early-layer MLP modules as most important for the backdoor mechanism in combination with the initial embedding projection. We use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements that reduce the MLP module outputs to essentials for the backdoor mechanism. To this end, we introduce PCP ablation, where we replace transformer modules with low-rank matrices based on the principal components of their activations. We demonstrate our results on backdoored toy, backdoored large, and non-backdoored open-source models. We show that we can improve the backdoor robustness of large language models by locally constraining individual modules during fine-tuning on potentially poisonous data sets. Trigger warning: Offensive language.
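
The core idea behind PCP ablation can be sketched in a few lines. Below is a hedged illustration, not the authors' code: it builds a stand-in MLP, collects its outputs on sample inputs, extracts the top principal components, and projects the module's output onto that low-rank subspace. Keeping the full module and only projecting its output is a simplification of replacing the module with a low-rank matrix; all names, dimensions, and the random "activations" are assumptions for illustration.

```python
import torch

torch.manual_seed(0)
d_model, d_hidden, rank = 64, 256, 8

# Stand-in for a transformer MLP block (illustrative, not from the paper).
mlp = torch.nn.Sequential(
    torch.nn.Linear(d_model, d_hidden),
    torch.nn.GELU(),
    torch.nn.Linear(d_hidden, d_model),
)

# 1. Collect the module's outputs on a batch of (here: random) inputs.
with torch.no_grad():
    x = torch.randn(1024, d_model)
    y = mlp(x)

# 2. Principal components of the activations: SVD of the centered outputs.
mean = y.mean(dim=0)
_, _, Vh = torch.linalg.svd(y - mean, full_matrices=False)
components = Vh[:rank]  # top-`rank` principal directions, shape (rank, d_model)

# 3. Low-rank replacement: keep only the output variance those components span.
class PCPReplacement(torch.nn.Module):
    def __init__(self, module, components, mean):
        super().__init__()
        self.module = module
        self.components = components
        self.mean = mean

    def forward(self, x):
        y = self.module(x) - self.mean
        return y @ self.components.T @ self.components + self.mean

ablated = PCPReplacement(mlp, components, mean)
with torch.no_grad():
    rel = ((ablated(x) - mlp(x)).norm() / mlp(x).norm()).item()
    print(f"relative output change at rank {rank}: {rel:.3f}")
```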


A #toymodel explores the role of #luck vs. #talent in determining #success and #failure:

The #distribution of #wealth follows a well-known pattern sometimes called an 80:20 rule: 80% of the wealth is owned by 20% of the people – a well-studied #pattern called a #powerlaw that crops up in a wide range of social phenomena.
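
A quick numerical sanity check of the 80:20 claim, assuming wealth follows a Pareto distribution (the tail index of roughly 1.16 is the textbook value for which the top 20% hold about 80%):

```python
import random

random.seed(1)
alpha = 1.16  # Pareto tail index for which the top 20% hold ~80%
wealth = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                reverse=True)
top20 = wealth[: len(wealth) // 5]
print(f"share held by the top 20%: {sum(top20) / sum(wealth):.0%}")  # ~80%
```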

"Talent vs Luck: the role of randomness in success and failure"
A. Pluchino, A. E. Biondo, A. Rapisarda

paper: https://arxiv.org/abs/1802.07068

review:
https://www.technologyreview.com/s/610395/if-youre-so-smart-why-arent-you-rich-turns-out-its-just-chance/#

Talent vs Luck: the role of randomness in success and failure

The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted in the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skills, effort or risk taking. Sometimes we are willing to admit that a certain degree of luck could also play a role in achieving significant material success. But, as a matter of fact, it is rather common to underestimate the importance of external forces in individual success stories. It is very well known that intelligence or talent exhibits a Gaussian distribution among the population, whereas the distribution of wealth - considered a proxy of success - typically follows a power law (Pareto law). Such a discrepancy between a Normal distribution of inputs, with a typical scale, and the scale-invariant distribution of outputs suggests that some hidden ingredient is at work behind the scenes. In this paper, with the help of a very simple agent-based model, we suggest that such an ingredient is just randomness. In particular, we show that, while some degree of talent is necessary to be successful in life, the most talented people almost never reach the highest peaks of success, being overtaken by mediocre but considerably luckier individuals. To our knowledge, this counterintuitive result - although implicitly suggested between the lines in a vast literature - is quantified here for the first time. It sheds new light on the effectiveness of assessing merit on the basis of the level of success reached, and underlines the risks of distributing excessive honors or resources to people who, at the end of the day, could simply have been luckier than others. With the help of this model, several policy hypotheses are also addressed and compared, to show the most efficient strategies for public funding of research in order to improve meritocracy, diversity and innovation.
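
For readers who want to poke at it, here is a minimal sketch of the paper's agent-based model as I read it: Gaussian talent, equal starting capital, capital doubling on exploited lucky events and halving on unlucky ones. The parameter values below are my assumptions, so check the paper for the exact setup.

```python
import random

random.seed(42)
N_AGENTS, N_STEPS = 1000, 80    # ~40 years in half-year steps (assumed)
P_EVENT = 0.5                   # chance an agent meets an event per step

# Talent is Gaussian (clipped to [0, 1]); everyone starts with equal capital.
talent = [min(max(random.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(N_AGENTS)]
capital = [10.0] * N_AGENTS

for _ in range(N_STEPS):
    for i in range(N_AGENTS):
        if random.random() >= P_EVENT:
            continue                 # no event for this agent this step
        if random.random() < 0.5:
            # Lucky event: exploited (capital doubles) only with
            # probability equal to the agent's talent.
            if random.random() < talent[i]:
                capital[i] *= 2
        else:
            capital[i] /= 2          # unlucky event: capital halves

# Gaussian talent in, heavy-tailed success out:
richest = max(range(N_AGENTS), key=capital.__getitem__)
top20 = sorted(capital, reverse=True)[: N_AGENTS // 5]
print(f"talent of the most successful agent: {talent[richest]:.2f}")
print(f"wealth share of the top 20%: {sum(top20) / sum(capital):.0%}")
```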
