through the fence https://www.flickr.com/photos/pauho/55160909445/

| Flickr | https://flickr.com/photos/pauho |
| PixelFed | @[email protected] |

#PublicBroadcasting accounts to follow:
🇨🇦 CANADA
@cbcnews - CBC
@TVOntario - Public TV in Ontario
🇫🇷 FRANCE
@France3Regions - France 3 national news distributed to regions
🇩🇪 GERMANY
@ZDF - ZDF official account
@tagesschau - Tagesschau news programme from ARD
@dw_innovation - Research account of Deutsche Welle
🇬🇧 UK
@BBCNews - BBC News
🇺🇸 USA
@npr - NPR
@index - Streaming platform for PBS stations
@philmeyer - Head of Southern Oregon PBS
@gbhnews - Boston, Massachusetts PBS & NPR news
🧵 1/2
so I enabled this thing where your mouse cursor grows when you shake it, it's handy when you can't find it on screen
but it doesn't have an upper bound on the scalar, so if you keep shaking it long enough it eventually goes like this 🤩 it's so big
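Roughly what I mean by "no upper bound on the scalar", as a toy sketch rather than the actual OS code (the names and numbers here are made up): each detected shake multiplies the cursor scale, and without a clamp the scale grows without limit, whereas a min() against some maximum would cap it.

```python
# Toy model of shake-to-locate cursor scaling (hypothetical, not the real OS code).
GROWTH_PER_SHAKE = 1.5   # made-up factor applied on each detected shake
MAX_SCALE = 4.0          # the kind of cap the real feature apparently doesn't apply

def grow_unbounded(scale: float) -> float:
    """What the behaviour feels like: every shake keeps multiplying the scale."""
    return scale * GROWTH_PER_SHAKE

def grow_clamped(scale: float) -> float:
    """What an upper bound would look like: same growth, but clamped to MAX_SCALE."""
    return min(scale * GROWTH_PER_SHAKE, MAX_SCALE)

scale = 1.0
for _ in range(10):          # ten quick shakes
    scale = grow_unbounded(scale)
print(scale)                 # roughly 57x the normal size, and still growing
```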
What is your favourite Wikipedia image? I'll start with
"Chaos magic ritual involving videoconferencing.JPG"
https://commons.wikimedia.org/wiki/File:Chaos_magic_ritual_involving_videoconferencing.JPG
David Lynch Remembers Attending the Beatles' First American Concert in 1964
What would a language model trained on all of the literature present in 1500 look like? No doubt it would spout a lot about God and the geocentric model.
Would we expect it to ever develop the heliocentric model, Newtonian mechanics, or Ricardian economics?
Exponential growth is self-similar at all scales: viewed from any point on the curve, the future looks the same up to a constant factor. So suggesting that LLMs can push the envelope of 21st-century knowledge and beyond is akin to claiming that our hypothesised 1500s GPT could have done the same in its own time.
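(Spelling out the self-similarity bit, with generic constants A and k that I'm introducing just for illustration: shifting an exponential forward by any Δ only rescales it by a constant, so no point on the curve is privileged.)

```latex
f(t) = A e^{kt}
\quad\Longrightarrow\quad
f(t+\Delta) = A e^{k(t+\Delta)} = e^{k\Delta}\, f(t)
```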