| Location | San Francisco |
#aihackweek demo day had so many great projects. Here are some of my favorites:
A #nocode tool that helps users build data transformations from natural language.
Adrenaline IDE: An AI-first code editor that can automatically debug runtime errors.
Motif.land: Interactive docs authoring with language model-powered refactoring.
The UI for AI: A graph-based editor for world building combining text and image models.
Also enjoyed:
A projection-mapped diffusion model with interactive hand-tracking.
A diffusion model drawing app for iPad with Apple Pencil support.
A GPT-3 / DALL-E RPG that suggests actions but also lets the player interact in natural language.
An interactive tool for ML-powered microscopy image analysis.
Steps towards prompt-based creation of virtual worlds
https://arxiv.org/abs/2211.05875
Using the OpenAI Codex model to build an interactive Holodeck
Large language models trained for code generation can be applied to speaking virtual worlds into existence. In this work we show that prompt-based methods can both accelerate in-VR level editing and become part of gameplay itself, rather than just part of game development. As an example, we present Codex VR Pong, which demonstrates non-deterministic game mechanics: generative processes create not only static content but also non-trivial interactions between 3D objects. This demonstration naturally leads to a discussion of how to evaluate and benchmark experiences created by generative models, as no established qualitative or quantitative metrics apply in these scenarios. We conclude by discussing the impending challenges of AI-assisted co-creation in VR.
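The core loop the abstract describes can be sketched in a few lines of Python: a natural-language prompt goes to a code-generation model, and the returned code is executed against the scene state. Here `fake_codex`, `apply_prompt`, and the scene-dict format are all hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
def fake_codex(prompt: str) -> str:
    """Stub standing in for a code LLM such as Codex.

    A real system would send `prompt` to the model and receive
    generated scene-editing code back; here we return canned code.
    """
    if "red ball" in prompt:
        return "scene['objects'].append({'shape': 'sphere', 'color': 'red'})"
    return "pass"

def apply_prompt(scene: dict, prompt: str) -> dict:
    """Execute model-generated code against the current scene state."""
    code = fake_codex(prompt)
    exec(code, {"scene": scene})  # in practice this needs sandboxing
    return scene

scene = {"objects": []}
apply_prompt(scene, "add a red ball to the table")
print(scene["objects"])  # the prompt has been turned into a scene edit
```

Because the model emits code rather than fixed assets, the same loop that builds a level at edit time can also run during play, which is how generated behavior becomes a game mechanic rather than just a development tool.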