TIL some sites include
<meta name="rating" content="adult">, and this prevents Google (at least) from displaying them in SafeSearch results.
In 2019, the Braille Institute created Atkinson Hyperlegible, a font designed to make reading easier for people with low vision.
They just released an update (Atkinson Hyperlegible Next) and a monospace version, with enhanced characters, 7 weights, and variable weight.
They’re free for personal and commercial use.
https://www.brailleinstitute.org/freefont/?ref=activitypub&utm_source=activitypub&utm_medium=social
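If you want to try it on the web, a typical `@font-face` setup looks like the sketch below. Note that the file name and the weight range are assumptions for illustration, not the actual contents of the download — check the files you get from the Braille Institute.

```css
/* Hypothetical file name and weight range -- adjust to the actual download */
@font-face {
  font-family: "Atkinson Hyperlegible Next";
  src: url("AtkinsonHyperlegibleNext-Variable.woff2") format("woff2");
  font-weight: 200 800; /* variable-weight axis range (assumed) */
  font-display: swap;   /* show fallback text while the font loads */
}

body {
  font-family: "Atkinson Hyperlegible Next", sans-serif;
}
```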
New dithering method dropped
I call it Surface-Stable Fractal Dithering and I've released it as open source along with this explainer video of how it works.
Explainer video:
https://www.youtube.com/watch?v=HPqGaIMVuLs
Source repository:
https://github.com/runevision/Dither3D

I usually don't post about my AI experiments here, but folks, you've got to see what I implemented without any JavaScript!
https://huggingface.co/spaces/Parallia/Fairly-Multilingual-ModernBERT-Token-Alignment
I felt like a mini @anatudor 😁
Today is my first day at NVIDIA 🎉 I will be collaborating with the amazing team at Orsi Academy to make the life of robot-assisted surgeons easier 😊
Excited for this incredible opportunity to apply my NLP & AI knowledge to a societal cause!
How does ICL emerge from unsupervised data?
It learns from parallel phrases.
Deleting parallel parts reduced ICL ability by 51%; deleting random words, by only 2%.
🧵
https://arxiv.org/abs/2402.12530
#ICL #prompt #ML #NLP #NLProc #machinelearning #data #pretraining #LLM
Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update. However, it is unclear where this capability comes from as there is a stark distribution shift between pre-training text and ICL prompts. In this work, we study what patterns of the pre-training data contribute to ICL. We find that LMs' ICL ability depends on $\textit{parallel structures}$ in the pre-training data -- pairs of phrases following similar templates in the same context window. Specifically, we detect parallel structures by checking whether training on one phrase improves prediction of the other, and conduct ablation experiments to study their effect on ICL. We show that removing parallel structures in the pre-training data reduces LMs' ICL accuracy by 51% (vs 2% from random ablation). This drop persists even when excluding common patterns such as n-gram repetitions and long-range dependency, showing the diversity and generality of parallel structures. A closer look at the detected parallel structures indicates that they cover diverse linguistic tasks and span long distances in the data.
Realization: CSS Nesting also allows you to basically do "else" clauses in selectors.
complex-selector {
  if-styles;

  :not(&) {
    else-styles;
  }
}
(if you’re wondering what this code is for, it’s for a bookmarklet to show element boxes for educational reasons)
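As a concrete sketch (the `.highlighted` class name is made up for illustration), here is the pattern with real declarations, and what the nested `:not(&)` rule effectively compiles to:

```css
/* With nesting */
.highlighted {
  outline: 2px solid green;   /* "if": element matches .highlighted */

  :not(&) {
    outline: 2px solid red;   /* "else": element does NOT match .highlighted */
  }
}

/* Roughly equivalent un-nested form */
:not(:is(.highlighted)) {
  outline: 2px solid red;
}
```

This works because `&` in a nested rule stands for the parent selector wrapped in `:is()`, and since the nested selector starts with `:not(&)` rather than an implicit `&`, it matches elements anywhere in the page that the parent selector doesn't match — not just descendants.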