I’ve been rocking the same Brother printer since 2012. Still going strong!

It is for this very reason that I have suggested micromorts should be the standard unit in research on safe exposure levels and in government health warnings.

If I am more likely to die walking down the street, then why do I care about the radioactive potassium in bananas killing me?
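For anyone unfamiliar with the unit: one micromort is a one-in-a-million chance of death, so converting a risk estimate is just scaling the probability. A minimal sketch of that conversion; the activity risk values below are illustrative placeholders, not real measurements:

```python
def micromorts(probability_of_death: float) -> float:
    """Convert a probability of death into micromorts (1 micromort = 1e-6)."""
    return probability_of_death * 1_000_000

# Hypothetical, illustrative risk values -- NOT real data.
activities = {
    "walk across town": 2e-6,
    "eat a banana": 1e-9,
}

for name, p in activities.items():
    print(f"{name}: {micromorts(p):.3f} micromorts")
```

Putting both risks on the same scale is the whole point: the comparison becomes a single number instead of two incomparable warnings.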

I remember how the meaning of words began to change. How unfamiliar words like “collateral” and “rendition” became frightening, while things like Norsefire and the Articles of Allegiance became powerful. I remember how “different” became dangerous. I still don’t understand it, why they hate us so much.

X is from when airport codes were two letters.

The ICAO designator for all airports in the contiguous US starts with K, so its full designator is KLAX. Same for KPDX.

There are over 5,000 public use airports in the US. While not all of them start with K (some very small airports start with a number instead), all moderately sized or larger airports do.
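Under that convention the mapping is mechanical: prefix the three-letter IATA code with K. A quick sketch, with the caveats from above baked in as guards (this deliberately ignores the small number-coded fields and airports outside the contiguous US):

```python
def iata_to_icao_conus(iata: str) -> str:
    """Map a contiguous-US IATA code to its ICAO designator by prefixing 'K'.

    Only valid for three-letter codes of moderately sized or larger airports
    in the contiguous US; small fields with numeric codes and airports in
    Alaska or Hawaii follow different rules.
    """
    if len(iata) != 3 or not iata.isalpha():
        raise ValueError(f"not a three-letter IATA code: {iata!r}")
    return "K" + iata.upper()

print(iata_to_icao_conus("LAX"))  # KLAX
print(iata_to_icao_conus("PDX"))  # KPDX
```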

The tokenizer is capable of decoding spaceless tokens into compound words following a set of rules referred to as a grammar in Natural Language Processing (NLP). I do LLM research and have spent an uncomfortable amount of time staring at the encoded outputs of most tokenizers when debugging. Normally spaces are not included.

There is of course a token for spaces in special circumstances, but I don’t know exactly how each tokenizer implements those spaces. So it does make sense that some models would be capable of the behavior you find in your tests, but that appears to be an emergent behavior, and it’s very interesting to see it work successfully.

I intended for my original comment to convey the idea that it’s not surprising that LLMs might fail at following the instructions to include spaces, since they normally don’t see spaces except in special circumstances. Similar to how it’s unsurprising that LLMs are bad at numerical operations because of how they use Markov-chain-style probabilities to predict each next token, one at a time.

This is because spaces are typically handled by the model’s tokenizer rather than seen by the model itself.

In many cases it would be redundant to show spaces, so tokenizers collapse them down to no spaces at all. Instead the model reads tokens as if the spaces never existed.

For example it might output: thequickbrownfoxjumpsoverthelazydog

Except it would actually be a list of numbers like: [1, 256, 6273, 7836, 1922, 2244, 3245, 256, 6734, 1176, 2]

Then the tokenizer decodes this and adds the spaces because they are assumed to be there. The tokenizer has no knowledge of your request, and the model output typically does not include spaces, hence your output sentence will not have double spaces.
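Here is a toy sketch of that round trip, using a hypothetical word-level vocabulary chosen to reproduce the ID list above: the encoder consumes the spaces, and the decoder reinserts them by rule. Real subword tokenizers (BPE, SentencePiece) handle whitespace differently and vary by implementation, so this is only an illustration of the mechanism:

```python
# Toy tokenizer with a hypothetical vocabulary, word-level for illustration.
VOCAB = {"<s>": 1, "</s>": 2, "the": 256, "quick": 6273, "brown": 7836,
         "fox": 1922, "jumps": 2244, "over": 3245, "lazy": 6734, "dog": 1176}
IDS = {v: k for k, v in VOCAB.items()}
SPECIAL = {VOCAB["<s>"], VOCAB["</s>"]}

def encode(text: str) -> list[int]:
    # Spaces are consumed by the split; the ID list carries no space tokens.
    return [VOCAB["<s>"]] + [VOCAB[w] for w in text.split()] + [VOCAB["</s>"]]

def decode(ids: list[int]) -> str:
    # Decoding rule: assume exactly one space between word tokens.
    return " ".join(IDS[i] for i in ids if i not in SPECIAL)

ids = encode("the quick brown fox jumps over the lazy dog")
print(ids)          # [1, 256, 6273, 7836, 1922, 2244, 3245, 256, 6734, 1176, 2]
print(decode(ids))  # the quick brown fox jumps over the lazy dog
```

The spaces survive the round trip only because the decoder assumes them, which is why asking the model itself to emit double spaces can fail.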

Thanks, it’s definitely a good escape from the code, that’s for damn sure.

Old cars are work for sure, but if you are willing to learn it’s not bad.

I have a 2007 Mustang. I’ve replaced the entire front suspension and rear differential, and paid an upholsterer to replace the convertible top. I upgraded the radio and put in a 10-inch touchscreen with wireless CarPlay and an integrated backup camera. Next up is dropping the trans to replace the clutch plates and throw-out bearing, resurface the flywheel, and replace the rear main seal on the engine while I’m down there, because the flywheel rusts overnight and accumulates a thin layer every morning that makes a grinding noise for 30 seconds until it wears off.

It definitely doesn’t just work like a new car, but since I do the work myself it also doesn’t cost me much.

Nowhere in the bill text does it say that. Please cite your sources. The bill is less than a five minute read. legiscan.com/CA/text/AB1043/2025