This is a demo of the new #GPT4 #ai model just released today. The visual description feature is being developed with Be My Eyes. Once this is available, it will be a game changer for the #accessibility of the visual world for blind and visually impaired people. It goes beyond #alttext and lets blind and sighted users alike do things with images that were previously impossible.
Please note the demo starts way late into the video — at nearly 40 minutes. https://www.youtube.com/live/outcGtbnMuQ?feature=share
GPT-4 Developer Livestream (YouTube)
I should add that if you would like to be a part of the development of the visual model, you can join the waitlist for the Virtual Volunteer beta in the Be My Eyes app. I can't wait to see what's in store for #accessibility in this new age of #ai.
FYI, the demonstration of GPT-4's vision model is at 10:38; a link with the time code is below (see the thread for the original toot):
https://www.youtube.com/watch?v=outcGtbnMuQ&t=638s
@twynn Thanks for posting the direct link to the visual demo. I still can't get over how it took a hand-drawn mock-up of a page and turned it into JavaScript, complete with jokes and all.