Let's be clear - the new voice-activated device from the Sam Altman / Jony Ive collaboration is about #tokens.

OpenAI - along with most other #GenAI firms - has hoovered up all the tokens on the public web, leading to the Token Crisis - where all that's left to be scraped is AI Slop.

These companies need tokens for larger, "better", more generalised models - and their actions now are about how to get them.

What are you using to manage the #design #tokens in your projects or organization?
Style Dictionary? Diez? Your own solution?
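
If you've gone the "own solution" route, the core transform is small enough to sketch. Here's a minimal Python illustration (the token names and schema are invented for the example, not any particular tool's format) that flattens a nested token tree into CSS custom properties:

```python
# Toy design-token pipeline: flatten a nested token dictionary into
# CSS custom properties. The tokens below are invented for illustration.

tokens = {
    "color": {
        "primary": "#0066cc",
        "text": {"default": "#1a1a1a", "muted": "#6b6b6b"},
    },
    "spacing": {"sm": "4px", "md": "8px", "lg": "16px"},
}

def flatten(node, prefix=""):
    """Walk the nested dict, yielding (css-variable-name, value) pairs."""
    for key, value in node.items():
        name = f"{prefix}-{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten(value, name)
        else:
            yield f"--{name}", value

css = ":root {\n" + "\n".join(f"  {n}: {v};" for n, v in flatten(tokens)) + "\n}"
print(css)  # --color-primary: #0066cc; --spacing-md: 8px; ...
```

Tools like Style Dictionary do essentially this at scale, adding transforms and multi-platform outputs on top.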

I had a lot of fun doing this duo talk with Didier Girard at #DevoxxFrance, showing you what happens under the hood of #LLMs: how they work, where they are stunning, but also where they fall short!

Temperature, tokenization (BPE), #tokens, the influence of context, stochastic parrots 🦜, foundation vs. instruction models, when LLMs stop generating tokens, their non-determinism... (see the little temperature sketch below).

Nothing about them will stump you!
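
To give a taste of one of those topics, here's a minimal, hedged Python sketch of temperature sampling (the vocabulary and logits are made up; real models do the same thing over a vocabulary of ~100k tokens):

```python
# Temperature sampling over a toy next-token distribution.
# The vocabulary and logits are invented for illustration.
import math
import random

vocab = ["cat", "dog", "bird", "fish"]
logits = [2.0, 1.5, 0.3, -1.0]  # raw (unnormalised) model scores

def sample(logits, temperature=1.0):
    # Dividing logits by the temperature before the softmax flattens
    # the distribution (T > 1) or sharpens it (T < 1); as T -> 0 the
    # sampling approaches greedy argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs)[0]

print(sample(logits, temperature=0.2))  # almost always "cat"
print(sample(logits, temperature=2.0))  # noticeably more varied
```

Run it a few times at each temperature and the non-determinism mentioned above shows up immediately.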

Design Tokens: The Smarter Way to Style Your UI | by Ashish Garg | Apr, 2025 | Design Systems Collective

https://www.designsystemscollective.com/design-tokens-the-smarter-way-to-style-your-ui-1550346e282c

#tokens #designsystem #ui

Imagine working on a large project where every button, header and color follows strict design guidelines, but as the project grows, so do inconsistencies in code — some buttons are slightly…

I'm in the big smoke for a couple of days of work meetings (so the newsletter will be a bit late this week). But I managed to find something #numismatic for you, or #exonumia at least: an amusement #token.

"Fantasy World, Coin's Adventure".

A colleague found it in the coat she had borrowed from her mother and gave it to me. It's from #Australia, but does anyone know anything more please?

Numista: https://en.numista.com/catalogue/exonumia283725.html

#numismatics #CoinCollecting #tokens @numismatics

The celebrity mob boss is an opportunist; he happily takes money from those who want to protect and increase their wealth by any means possible. He is also very good at convincing those who hate and blame others for having no wealth, and/or fear being excluded from his cult. The cost of that, they are led to believe, is to join the victims of his relentless power grabs, violence and shakedowns. To become a god among mob bosses, he has to reward his henchmen and mercenaries excessively, and purge those who threaten his power. That also means investing in intelligence, espionage, propaganda and weapons of mass destruction to keep ahead of them. It is a rat race to the worst shit-fuelled toxic sewers imaginable. The consequence of all this is an end to the diversity of resources that depend on equality, stability, security and inclusion to sustain life! #Tokens #DictatorMobBosses #Hydra #DEI #DictatorTrump #WarOnDEI #Ecocide

The Future of Tokenized Real Estate Depends on Regulatory Clarity

Tokenized real estate promises greater access, liquidity, and efficiency, but widespread adoption is stalled by regulatory uncertainty, limited infrastructure, and digital identity challenges. Progress depends on U.S. regulatory clarity, institutional backing, and trusted systems to support trading, custody, and compliance.

Byte Latent Transformer: Patches Scale Better Than Tokens

We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. We present the first FLOP controlled scaling study of byte-level models up to 8B parameters and 4T training bytes. Our results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve due to dynamically selecting long patches when data is predictable, along with qualitative improvements on reasoning and long tail generalization. Overall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models, by simultaneously growing both patch and model size.
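
To make the patching idea concrete, here's a toy Python sketch of entropy-driven segmentation. It is not the paper's method: BLT uses a small byte-level language model to estimate next-byte entropy, while this illustration substitutes a simple bigram count model, and the threshold and corpus are invented.

```python
# Toy entropy-based patching: start a new patch wherever the model is
# uncertain about the next byte. A bigram count model stands in for
# BLT's small byte-level LM; all numbers here are illustrative.
import math
from collections import Counter, defaultdict

def train_bigram(corpus: bytes):
    """Count next-byte frequencies conditioned on the previous byte."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def next_byte_entropy(prev_byte: int, counts) -> float:
    """Shannon entropy (in bits) of the predicted next-byte distribution."""
    dist = counts.get(prev_byte)
    if not dist:
        return 8.0  # no statistics: assume maximal uncertainty for a byte
    total = sum(dist.values())
    return -sum((c / total) * math.log2(c / total) for c in dist.values())

def segment(data: bytes, counts, threshold: float = 1.5):
    """Cut a new patch before every byte whose predicted entropy is high."""
    patches, start = [], 0
    for i in range(1, len(data)):
        if next_byte_entropy(data[i - 1], counts) > threshold:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches

corpus = b"the quick brown fox jumps over the lazy dog. " * 50
counts = train_bigram(corpus)
print(segment(b"the quick brown fox", counts))
# Predictable runs (e.g. "qu" -> "ick") stay inside longer patches;
# cuts land where the next byte is hard to predict.
```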
