The thing about material shaders is I don't quite know the right questions to ask — the unknown unknowns. Am I taking into account the right refraction, internal reflection, attenuated shadows, etc.? Why is this too bright? Why is this too dark? You can tell from these examples where things are obviously wrong, and it takes quite a bit of iteration

'Attenuated shadows'

A little better

Maybe I should have bought a faster Mac before trying to write a raytracer…
Trying not to fry my GPU with caustics, but Metal isn't happy
I was sitting through hour-long renders (!) on my iPad yesterday, so I did an optimization pass on the hardware acceleration and it's much, much improved for simpler scenes, even on an M1
While this raytracer may never become a finished app, there are certainly elements from it I intend to yoink for future projects — like the really neat toolbars that go around all the screen edges; they would fit into a complex pro app very nicely
Just casually building and raytracing a scene on an iPad mini 6, nbd
Of course it runs on iPhone, what do you take me for?
Liquid, Glass
So, like, what do I even do with this app?
I made my control groups collapsible, with a priority system. Honestly they're my favorite part of this prototype
The old viewport gizmo was faked in 2D, so I had Codex rewrite it in Metal with a different projection, and now it's much better. I also added exponential decay to the orbit gesture so you can fling the camera around
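
The fling is just exponential decay applied to the pan gesture's release velocity. A minimal sketch of the idea (the `OrbitFling` type and `decayRate` value here are illustrative, not from the app):

```swift
import CoreGraphics

/// Carry the pan gesture's last angular velocity forward after release,
/// and decay it exponentially each frame so the camera glides to a stop.
final class OrbitFling {
    var velocity = CGVector.zero   // radians/sec, seeded on gesture end
    let decayRate: CGFloat = 4.0   // higher = the camera settles sooner

    /// Call once per frame; returns the yaw/pitch delta to apply.
    func step(deltaTime dt: CGFloat) -> CGVector {
        // v(t) = v0 * e^(-k*t), applied incrementally per frame.
        let falloff = exp(-decayRate * dt)
        velocity.dx *= falloff
        velocity.dy *= falloff
        return CGVector(dx: velocity.dx * dt, dy: velocity.dy * dt)
    }
}
```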

Ha, cute, you can even fling the raytracer around 🤣

Also I added an expanded progress indicator

Just a normal teapot.

Hadn't tried the visionOS build, but it works too.

With a caveat.

visionOS is far more fragile to anything like 3D rendering. Saturating the GPU like this slows the compositor to a slideshow, and it even got to a point where Metal was leaking out of the window into the OS and I was seeing squares of corrupted video memory in front of me until I got the equivalent of a SpringBoard crash.

Functionally, this could be a visionOS app.

Practically, no.

You can see here that the moment I invoke the raytracer, everything goes to shit on visionOS. The userspace went down right at the end of the video, which is where it cuts off
What if you could step into your Bryce scenes?

Pretty much everything I've worked on with Codex up to now has been stuff I could have built myself, within my area of expertise (or learnable), it just would have taken weeks or months.

This 3D scene app is something I never would have been able to build myself. I would have needed a team of rendering experts with domain-specific knowledge and human-years of research

I love how visionOS, uniquely, *explodes* when rendering goes wrong.

Hello [MacBook] Neo.

So now that visionOS 26 lets you spawn immersive scenes from UIKit apps, I had Codex implement me an immersive scene using Metal and CompositorServices that mirrors the in-window viewport and lets you live in your scene 😁
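
For the curious, the established SwiftUI shape of a CompositorServices immersive space is roughly this minimal sketch (the scene id, `Config`, and pixel formats are illustrative, and the new UIKit-hosted entry point isn't shown):

```swift
import SwiftUI
import Metal
import CompositorServices

struct ViewportImmersiveSpace: Scene {
    struct Config: CompositorLayerConfiguration {
        func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                               configuration: inout LayerRenderer.Configuration) {
            configuration.colorFormat = .rgba16Float
            configuration.depthFormat = .depth32Float
        }
    }

    var body: some Scene {
        ImmersiveSpace(id: "Viewport") {
            CompositorLayer(configuration: Config()) { layerRenderer in
                // Spawn the render thread here: each frame, pull a
                // LayerRenderer.Frame, query its drawable, and encode the
                // same Metal scene the in-window viewport draws.
            }
        }
    }
}
```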

It's real frickin cool.

The raytracer might be off limits for visionOS, but there's a lot of interesting stuff to do in other areas

I figured why not use RealityKit for the material previews, so now they are actual spheres.
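
Building one of those preview spheres is only a few lines of RealityKit; a minimal sketch (the tint/roughness/metallic values are placeholders standing in for the app's own material model):

```swift
import RealityKit
import UIKit

/// One real PBR sphere per material slot, instead of a 2D swatch.
func makePreviewSphere() -> ModelEntity {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .systemTeal)
    material.roughness = .init(floatLiteral: 0.2)
    material.metallic = .init(floatLiteral: 0.9)
    return ModelEntity(mesh: .generateSphere(radius: 0.04),
                       materials: [material])
}
```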

Miraculously, it all still works — the Metal viewport, the Metal immersive scene, and the RealityKit UI elements — but it's very clear the Vision Pro (M2) doesn't have much headroom to build an actual app around this stuff

The raytracer is, for now, a no-go on visionOS. It's possible I could throttle it and stay within visionOS' systemwide render budget. But it's probably worth improving the RT performance a bunch on its own first before I come back and try it here. I might run out of steam on this prototype before then.

This entire app project is still in my 'Temp' folder, where throwaway projects live 😅

This project, which runs on iPhone, iPad, Mac, and Vision Pro (with Immersive Space), is now 16.5K lines of code
I thought it was finally time to add vertex editing and subdivision. Now it's a 3D modeling tool and not just a raytracer

Some more things to show off here on this iPad mini 6!

• Longpress band gesture
• Multi-select
• Vertex editing
• Subdividing
• My 'generate a Cornell Box' button
• (And the raytracer, of course)

There is a lot of really neat stuff in this app. Still using Codex 5.3 Medium, still haven't touched a line of code myself

All of this still works great on iPhone too
The touch gestures all work on visionOS too, but on all platforms the app has keyboard and mouse support for all your precise selection and modifier key needs
Boolean operations seem pretty complex, but I made a start at it
Playing a bit of musical chairs with the floating controls in the toolbars now that I'm starting to run out of space for new UI
Late night modeling on my iPad 🤪
I made sure all my interactions work right with the Logitech Muse (i.e. stylus won't orbit the viewport, will modally lock to highlighted gizmo axes, etc), so now I can do a bit of vertex editing on the Apple Vision Pro, channeling @Dreamwieber

I never really thought about it before, but multitouch is actually legit for 3D modeling tools, maybe even better than a desktop. On a Mac, you need to hold modifier keys (or buy a multi-button mouse) to do everything you want with the viewport, but on touch you can orbit, pan, zoom, and multi-select very easily. If you special-case the stylus too, like I am, it feels very powerful.
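
The stylus special-casing mostly comes down to UIKit's per-recognizer touch-type filters; a minimal sketch (the controller and handler names are illustrative, and the gizmo-axis locking isn't shown):

```swift
import UIKit

final class ViewportController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Orbit accepts only direct (finger) touches, so the stylus
        // never spins the viewport.
        let orbit = UIPanGestureRecognizer(target: self, action: #selector(handleOrbit))
        orbit.allowedTouchTypes = [UITouch.TouchType.direct.rawValue as NSNumber]
        view.addGestureRecognizer(orbit)

        // A pencil-only recognizer handles precise vertex drags instead.
        let stylusDrag = UIPanGestureRecognizer(target: self, action: #selector(handleStylusDrag))
        stylusDrag.allowedTouchTypes = [UITouch.TouchType.pencil.rawValue as NSNumber]
        view.addGestureRecognizer(stylusDrag)
    }

    @objc private func handleOrbit(_ gesture: UIPanGestureRecognizer) { /* orbit camera */ }
    @objc private func handleStylusDrag(_ gesture: UIPanGestureRecognizer) { /* move vertices */ }
}
```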

Almost all of this extends to spatial computing, though visionOS struggles a bit with two-hand gestures

The raytrace operation will now be dispatched into the background on iOS, allowing for long-running background tasks
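
A minimal sketch of one way to do this, assuming UIKit's background-task API (the app may use BGTaskScheduler instead, and either way iOS caps background time, so a really long render needs checkpointing too):

```swift
import UIKit

/// Keep a render alive across backgrounding (illustrative helper).
final class RenderTaskKeeper {
    private var taskID: UIBackgroundTaskIdentifier = .invalid

    func beginRender() {
        taskID = UIApplication.shared.beginBackgroundTask(withName: "Raytrace") { [weak self] in
            // Expiration handler: save progress, return the time to the OS.
            self?.endRender()
        }
    }

    func endRender() {
        guard taskID != .invalid else { return }
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}
```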

iPadOS has never been better for rich, complex, desktop-class apps. Almost all of the old barriers and blockers are gone.

Sadly, Apple waited until most developers had run out of patience with the platform.

If this thing had Xcode, a real Xcode, it would be effectively complete

(Maybe I should vibe code Xcode, next.)
Complex scenes are kinda fun 😎

Even my iPhone 12 Pro Max can raytrace!

Which I guess is not all that surprising, considering the A14 chip is the same generation as the M1

That’s a whole lot of raytracing from a little iPad. Biggest render to date, at 5120x2880 — took about an hour to get to the final pass.

Crashed right at the end, so I didn’t get to take a picture of the final output 🥲

The raytracer still needs a bunch of work, but it’s more and more capable day by day

I was tired of running into raytracer resource limits, so now it uses wavefronts and a scheduler with a 4+ Kloc rewrite. Seems a lot better for complex scenes and lots of glass. All that work spent optimizing the previous renderer's performance ramp and failure recovery pays off now that the new renderer is so much more robust
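
The core of the wavefront approach, as a CPU-side sketch (the real version is Metal compute kernels, and the intersect stage here is a stand-in): instead of one megakernel recursing per ray, path state lives in queues and each stage runs over a whole wave at once, so no single dispatch blows past resource limits on deep glass paths.

```swift
struct PathState {
    var pixel: Int   // which pixel this path contributes to
    var depth: Int   // bounces taken so far
}

func renderWavefront(pixelCount: Int, maxDepth: Int) {
    // Generate stage: one primary path per pixel.
    var wave = (0..<pixelCount).map { PathState(pixel: $0, depth: 0) }

    while !wave.isEmpty {
        // Intersect stage: trace the whole wave as one batch.
        // (Stand-in result; a real scheduler dispatches a BVH kernel.)
        let survived = wave.map { _ in Bool.random() }

        // Shade stage: extend only paths that hit something and still
        // have bounce budget; everything else terminates.
        wave = zip(wave, survived).compactMap { state, hit in
            hit && state.depth + 1 < maxDepth
                ? PathState(pixel: state.pixel, depth: state.depth + 1)
                : nil
        }
        // The scheduler then launches the next, smaller wave.
    }
}
```
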
Also I've switched my Codex model to GPT 5.4; OpenAI says it outperforms gpt-5.3-codex and it has over double the context window, so I figured a renderer rewrite was the right time to step up a level
@stroughtonsmith context starts to fall off at around 260k iirc; it's very easy to pollute, so be mindful of that if you do see some issues.
5.4 by all accounts is a huge step up