| Patreon | https://www.patreon.com/fluidityaudiobooks |
| Podcast | https://fluidity.libsyn.com/ |
| Twitter | https://twitter.com/FluidityAudio |
A new episode of the Fluidity #audiobook #podcast: "A Better Future, Without Backprop"
https://fluidity.libsyn.com/a-better-future-without-backprop
This concludes "Gradient Dissent", the companion document to "Better Without AI". Thank you so much for listening!
You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
okay so
my bedroom blackout shades have a network bridge that's built from a raspberry pi
and i was using an old AirPort Express as an AirPlay receiver, but something in its audio chain started to get super fucky
and I thought about replacing it with a raspi but then realized I already have one in the form of the blackout shades' controller, which is already running Linux
so i figured…
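For anyone wanting to try the same trick: a common way to turn a Linux Pi into an AirPlay receiver is the shairport-sync package. This is a minimal sketch, assuming Raspberry Pi OS / Debian, a working audio output on the Pi, and an illustrative receiver name; it is not necessarily the exact setup used here.

```shell
# Sketch: reuse an existing Raspberry Pi as an AirPlay receiver
# with shairport-sync (assumes Raspberry Pi OS / Debian).
sudo apt update
sudo apt install -y shairport-sync

# Optionally give the receiver a recognizable name by editing
# /etc/shairport-sync.conf, e.g.:
#   general = { name = "Bedroom Shades Pi"; };

# Start it now and at every boot, alongside whatever the
# shade-controller software is already running.
sudo systemctl enable --now shairport-sync
```

Since shairport-sync runs as its own systemd service, it can coexist with the shade-bridge software on the same Pi without either needing to know about the other.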
A new episode of the Fluidity #audiobook #podcast: "Better Text Generation With Science And Engineering"
https://fluidity.libsyn.com/better-text-generation-with-science-and-engineering
Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators

John McCarthy's paper "Programs with common sense": http://www-formal.stanford.edu/jmc/mcc59/mcc59.html
Harry Frankfurt, "On Bullshit": https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20
Petroni et al., "Language Models as Knowledge Bases?": https://aclanthology.org/D19-1250/
Gwern Branwen, "The Scaling Hypothesis": gwern.net/scaling-hypothesis
Rich Sutton's "Bitter Lesson": www.incompleteideas.net/IncIdeas/BitterLesson.html
Guu et al.'s "Retrieval augmented language model pre-training" (REALM): http://proceedings.mlr.press/v119/guu20a/guu20a.pdf
Borgeaud et al.'s "Improving language models by retrieving from trillions of tokens" (RETRO): https://arxiv.org/pdf/2112.04426.pdf
Izacard et al., "Few-shot Learning with Retrieval Augmented Language Models": https://arxiv.org/pdf/2208.03299.pdf
Chirag Shah and Emily M. Bender, "Situating Search": https://dl.acm.org/doi/10.1145/3498366.3505816
David Chapman's original version of the proposal he puts forth in this episode: twitter.com/Meaningness/status/1576195630891819008
Lan et al., "Copy Is All You Need": https://arxiv.org/abs/2307.06962
Mitchell A. Gordon's "RETRO Is Blazingly Fast": https://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html
Min et al.'s "Silo Language Models": https://arxiv.org/pdf/2308.04430.pdf
W. Daniel Hillis, The Connection Machine, 1986: https://www.amazon.com/dp/0262081571/?tag=meaningness-20
Ouyang et al., "Training language models to follow instructions with human feedback": https://arxiv.org/abs/2203.02155
Ronen Eldan and Yuanzhi Li, "TinyStories: How Small Can Language Models Be and Still Speak Coherent English?": https://arxiv.org/pdf/2305.07759.pdf
Li et al., "Textbooks Are All You Need II: phi-1.5 technical report": https://arxiv.org/abs/2309.05463
Henderson et al., "Foundation Models and Fair Use": https://arxiv.org/abs/2303.15715
Authors Guild v. Google: https://en.wikipedia.org/wiki/Authors_Guild%2C_Inc._v._Google%2C_Inc.
Abhishek Nagaraj and Imke Reimers, "Digitization and the Market for Physical Works: Evidence from the Google Books Project":

You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
A new episode of the Fluidity #audiobook #podcast: "Classifying Images: Massive Parallelism And Surface Features"
https://fluidity.libsyn.com/classifying-images-massive-parallelism-and-surface-features
Analysis of image classifiers demonstrates that it is possible to understand backprop networks at the task-relevant run-time algorithmic level. In these systems, at least, networks gain their power from deploying massive parallelism to check for the presence of a vast number of simple, shallow patterns. https://betterwithout.ai/images-surface-features

This episode has a lot of links:
David Chapman's earliest public mention, in February 2016, of image classifiers probably using color and texture in ways that "cheat": twitter.com/Meaningness/status/698688687341572096
Jordana Cepelewicz's "Where we see shapes, AI sees textures," Quanta Magazine, July 1, 2019: https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/
"Suddenly, a leopard print sofa appears", May 2015: https://web.archive.org/web/20150622084852/http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html
"Understanding How Image Quality Affects Deep Neural Networks", April 2016: https://arxiv.org/abs/1604.04004
Goodfellow et al., "Explaining and Harnessing Adversarial Examples," December 2014: https://arxiv.org/abs/1412.6572
"Universal adversarial perturbations," October 2016: https://arxiv.org/pdf/1610.08401v1.pdf
"Exploring the Landscape of Spatial Robustness," December 2017: https://arxiv.org/abs/1712.02779
"Overinterpretation reveals image classification model pathologies," NeurIPS 2021: https://proceedings.neurips.cc/paper/2021/file/8217bb4e7fa0541e0f5e04fea764ab91-Paper.pdf
"Approximating CNNs with Bag-of-Local-Features Models Works Surprisingly Well on ImageNet," ICLR 2019: https://openreview.net/forum?id=SkfMWhAqYQ
Baker et al.'s "Deep convolutional networks do not classify based on global object shape," PLOS Computational Biology, 2018: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006613
François Chollet's Twitter threads about AI producing images of horses with extra legs: twitter.com/fchollet/status/1573836241875120128 and twitter.com/fchollet/status/1573843774803161090
"Zoom In: An Introduction to Circuits," 2020: https://distill.pub/2020/circuits/zoom-in/
Geirhos et al., "ImageNet-Trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness," ICLR 2019: https://openreview.net/forum?id=Bygh9j09KX
Dehghani et al., "Scaling Vision Transformers to 22 Billion Parameters," 2023: https://arxiv.org/abs/2302.05442
Hasson et al., "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks," February 2020: https://www.gwern.net/docs/ai/scaling/2020-hasson.pdf
A new episode of the Fluidity #audiobook #podcast: "Do AI As Engineering Instead"
https://fluidity.libsyn.com/do-ai-as-engineering-instead
Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering

This episode has a lot of links! Here they are.
Michael Nielsen's "The role of 'explanation' in AI": https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
Subbarao Kambhampati's "Changing the Nature of AI Research": https://dl.acm.org/doi/pdf/10.1145/3546954
Chris Olah and his collaborators, "Thread: Circuits": distill.pub/2020/circuits/
"An Overview of Early Vision in InceptionV1": distill.pub/2020/circuits/early-vision/
Dai et al., "Knowledge Neurons in Pretrained Transformers": https://arxiv.org/pdf/2104.08696.pdf
Meng et al., "Locating and Editing Factual Associations in GPT": rome.baulab.info
"Mass-Editing Memory in a Transformer": https://arxiv.org/pdf/2210.07229.pdf
François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800
Neel Nanda's "Longlist of Theories of Impact for Interpretability": https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability
Zachary C. Lipton's "The Mythos of Model Interpretability": https://arxiv.org/abs/1606.03490
Meng et al., "Locating and Editing Factual Associations in GPT": https://arxiv.org/pdf/2202.05262.pdf
Belrose et al., "Eliciting Latent Predictions from Transformers with the Tuned Lens": https://arxiv.org/abs/2303.08112
"Progress measures for grokking via mechanistic interpretability": https://arxiv.org/abs/2301.05217
Conmy et al., "Towards Automated Circuit Discovery for Mechanistic Interpretability": https://arxiv.org/abs/2304.14997
Elhage et al., "Softmax Linear Units": transformer-circuits.pub/2022/solu/index.html
Filan et al., "Clusterability in Neural Networks": https://arxiv.org/pdf/2103.03386.pdf
Cammarata et al., "Curve circuits": distill.pub/2020/circuits/curve-circuits/

You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
A new episode of the Fluidity #audiobook #podcast: "Do AI As Science Instead"
https://fluidity.libsyn.com/do-ai-as-science-instead
Few AI experiments constitute meaningful tests of hypotheses. As a branch of machine learning research, AI science has concentrated on black box investigation of training time phenomena. The best of this work has been scientifically excellent. However, the hypotheses tested are mainly irrelevant to user and societal concerns. https://betterwithout.ai/AI-as-science

This chapter references Chapman's essay, "How should we evaluate progress in AI?": https://metarationality.com/artificial-intelligence-progress
"Troubling Trends in Machine Learning Scholarship", Zachary C. Lipton and Jacob Steinhardt: https://arxiv.org/abs/1807.03341

You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
A new episode of the Fluidity #audiobook #podcast: "Do AI As Science And Engineering Instead"
https://fluidity.libsyn.com/do-ai-as-science-and-engineering-instead
We've seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies. https://betterwithout.ai/science-engineering

Run-Time Task-Relevant Algorithmic Understanding - The type of scientific and engineering understanding most relevant to AI safety is run-time, task-relevant, and algorithmic. That can lead to more reliable, safer systems. Unfortunately, gaining such understanding has been neglected in AI research, so currently we have little. For more information, see David Chapman's 2017 essay "How should we evaluate progress in AI?"

You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.