👍🏻 GPT-4 is here – join the live demo for developers at 1 pm PDT! 🪄 #GPT4 is a large multimodal model!

- You can use images as inputs and generate captions, classifications, and analyses.

- It can generate and edit creative and technical writing, such as songs and screenplays, and can learn a user's writing style.

- It can handle 25,000+ words. It passes a simulated bar exam with a score around the top 10% of test takers. https://openai.com/research/gpt-4 #MachineLearning

GPT-4

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.

DARPA is accepting applications to attend a virtual event on June 13–16, 2023, and an in-person event in Boston on July 31–August 2, 2023, in which participants will address the question: How do we build artificial intelligence and machine learning systems that people can trust?

#ai #darpa #artificialintelligence #MachineLearning #research

https://www.darpa.mil/news-events/2023-02-24

What learning algorithm is in-context learning? Investigations with linear models https://arxiv.org/abs/2211.15661
…investigates the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding smaller models in their activations, and updating these implicit models as new examples appear in the context.

#MachineLearning #DataScience

What learning algorithm is in-context learning? Investigations with linear models

Neural sequence models, especially transformers, exhibit a remarkable capacity for in-context learning. They can construct new predictors from sequences of labeled examples $(x, f(x))$ presented in the input without further parameter updates. We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding smaller models in their activations, and updating these implicit models as new examples appear in the context. Using linear regression as a prototypical problem, we offer three sources of evidence for this hypothesis. First, we prove by construction that transformers can implement learning algorithms for linear models based on gradient descent and closed-form ridge regression. Second, we show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression, transitioning between different predictors as transformer depth and dataset noise vary, and converging to Bayesian estimators for large widths and depths. Third, we present preliminary evidence that in-context learners share algorithmic features with these predictors: learners' late layers non-linearly encode weight vectors and moment matrices. These results suggest that in-context learning is understandable in algorithmic terms, and that (at least in the linear case) learners may rediscover standard estimation algorithms. Code and reference implementations are released at https://github.com/ekinakyurek/google-research/blob/master/incontext.
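The abstract's prototypical setting can be sketched directly: on a linear regression problem, plain gradient descent and closed-form ridge regression (with a small penalty) converge to nearly the same predictor — the family of algorithms the paper argues trained in-context learners match. A minimal NumPy sketch, illustrative only and not the paper's released code (the dimensions, learning rate, and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic in-context dataset: pairs (x, f(x)) with f linear plus noise
d, n = 5, 40
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
lam = 1e-3
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Plain gradient descent on the mean squared loss
w_gd = np.zeros(d)
lr = 0.01
for _ in range(5000):
    grad = X.T @ (X @ w_gd - y) / n
    w_gd -= lr * grad

# With a tiny penalty and enough steps, the two predictors nearly coincide
print(np.allclose(w_ridge, w_gd, atol=1e-2))  # prints: True
```

The paper's second source of evidence is the analogue of this check run against an actual trained transformer: its in-context predictions track these estimators, shifting between them as depth and dataset noise vary.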


❤️ A fast, ChatGPT-like assistant for your Mac. Personalized to you — and your work. 👍🏻 Your personal, intelligent AI — always one second away! #Embra can pull in contextual data from #Chrome and other apps to speed up and unlock creativity across Q&A, brainstorming, writing, reading, and coding.

Get beta access! https://embra.app/ #MachineLearning

#productivity #apple #macos


👍🏻 10X Your Productivity in Excel with ChatGPT - You no longer have to be an Excel wizard to become super productive. https://link.medium.com/MUWC4FOTXwb #MachineLearning #productivity #microsoft
http://www.rizbicki.ufscar.br/ame/
Izbicki, R. and Santos, T. M. dos. Aprendizado de máquina: uma abordagem estatística (Machine Learning: A Statistical Approach). 1st edition. 2020. 272 pages. ISBN: 978-65-00-02410-4 #AprendizadoDeMaquina #MachineLearning
Rafael Izbicki | PhD

Here you can download, free of charge, the book Aprendizado de máquina: uma abordagem estatística, written by me and Tiago Mendonça (ISBN 978-65-00-02410-4). The book's cover was made by the incredible Leonardo M.

Just a couple of hashtags to attract like-minded people to hang out on #mastodon:
#deeplearning #MachineLearning #medicalimaging #BioImageAnalysis #math #statistics .....
Folks, if you work on the above topics (and science in general), I'd love to follow you!
@tiago_j_m Not doing #AI research, but I apply #MachineLearning in #Science, in particular #microscopy. For example, in my last project I used #neuralnetworks to solve the inverse problem of retrieving the phase of electron wavefunctions from diffraction patterns.
#introduction Hi, I’m Mohammad. Professionally an MLE, but interested in all sorts of things including #MachineLearning #gamedev #IoT #graphs #rustlang and whatever else strikes me at the moment.
@chrisalbon Gotta wrangle folks with hashtags, so hey #MachineLearning #ML and #DataScience folks, any recs?