> The Neo doesn’t have a hardware indicator light for the camera. The indication for “camera in use” is only in the menu bar. There’s a privacy/security implication for this omission. According to Apple, the hardware indicator light for camera-in-use on MacBooks, iPhones, and iPads cannot be circumvented by software. If the camera is on, that light comes on, and no software can disable it. Because the Neo’s only camera-in-use indicator is in the menu bar, that seems obviously possible to circumvent via software.

iPhones and iPads do not have a hardware indicator light.

Arguably, with SIP, a hardware indicator light is not strictly necessary; the OS could force the indicator pixels to be lit.
Isn't the argument that a hardware indicator light is (more) immune to bugs? If it's just software, you're one software exploit or bug away from someone finding a way to access the sensor without tripping the software light.
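To make the distinction concrete, here is a toy model of the two designs (the class names and structure are my own illustration, not any real driver or Apple's implementation): with a hardware LED on the sensor's power rail, the light is *derived from* the power state, while a software indicator is a second, independent state variable that a buggy or exploited code path can simply fail to update.

```python
class HardwareLED:
    """LED wired to the camera's power rail: lit iff the sensor has power."""
    def __init__(self):
        self.sensor_powered = False

    @property
    def lit(self):
        # No code path can power the sensor without lighting the LED,
        # because the LED state IS the power state.
        return self.sensor_powered


class SoftwareIndicator:
    """Menu-bar style indicator: a separate flag the driver must remember to set."""
    def __init__(self):
        self.sensor_powered = False
        self.indicator_on = False

    def enable_camera(self):
        # The correct path sets both.
        self.sensor_powered = True
        self.indicator_on = True

    def exploit_enable_camera(self):
        # A bug or exploit that reaches the sensor directly can skip
        # the indicator update entirely.
        self.sensor_powered = True


hw = HardwareLED()
hw.sensor_powered = True
assert hw.lit  # cannot be desynchronized

sw = SoftwareIndicator()
sw.exploit_enable_camera()
assert sw.sensor_powered and not sw.indicator_on  # camera on, no light
```

The exclave work discussed below is essentially about moving that second state variable out of anything the (compromisable) kernel can touch.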
I might be misremembering, but wasn't Pegasus spyware able to bypass the camera indicator? Or was the issue that journalists were constantly seeing the light appear for no reason? I believe it was one of those.
Pegasus is primarily a mobile spyware toolkit, and iPhones do not have a hardware light.
That's going to be a problem for the education market though.

There is a ton of fascinating work to make the "software" camera-in-use indicator just as secure as, if not more secure than, an LED attached to the power lines of the camera. Apple hasn't publicly talked about it much, but here are a few sources that aren't terrible.

https://arxiv.org/abs/2510.09272
https://randomaugustine.medium.com/on-apple-exclaves-d683a2c...
https://daringfireball.net/linked/2025/03/19/on-apple-exclav...

We've seen a few examples on HN lately (e.g., the Coruna iOS Exploit Kit) of nation-state-level exploits in the hands of financially motivated organizations. I'm not free of bias here, but the industry is quickly headed towards a reckoning on security over the next few years.

Modern iOS Security Features -- A Deep Dive into SPTM, TXM, and Exclaves

The XNU kernel is the basis of Apple's operating systems. Although labeled as a hybrid kernel, it is found to generally operate in a monolithic manner by defining a single privileged trust zone in which all system functionality resides. This has security implications, as a kernel compromise has immediate and significant effects on the entire system. Over the past few years, Apple has taken steps towards a more compartmentalized kernel architecture and a more microkernel-like design. To date, there has been no scientific discussion of SPTM and related security mechanisms. Therefore, the understanding of the system and the underlying security mechanisms is minimal. In this paper, we provide a comprehensive analysis of new security mechanisms and their interplay, and create the first conclusive writeup considering all current mitigations. SPTM acts as the sole authority regarding memory retyping. Our analysis reveals that, through SPTM domains based on frame retyping and memory mapping rule sets, SPTM introduces domains of trust into the system, effectively gapping different functionalities from one another. Gapped functionality includes the TXM, responsible for code signing and entitlement verification. We further demonstrate how this introduction lays the groundwork for the most recent security feature of Exclaves, and conduct an in-depth analysis of its communication mechanisms. We discover multifold ways of communication, most notably xnuproxy as a secure world request handler, and the Tightbeam IPC framework. The architecture changes are found to increase system security, with key and sensitive components being moved out of XNU's direct reach. This also provides additional security guarantees in the event of a kernel compromise, which is no longer an immediate threat at the highest trust level.

Barring an intentionally bad hardware design, I struggle to imagine how a software version of the idea could ever be more secure than a power line hard-wired to an LED.
You build it directly into the GPU firmware and don't give the operating system any access to it.
That is still a black-box code implementation which could be updated at any time. I trust wires, not programmers.
The firmware could be flashed to the device at the factory and then write-protected by burning a fuse in the circuit.
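As a toy model of that fuse-based scheme (purely illustrative; real eFuse and flash-controller interfaces vary by vendor): burning a one-time-programmable fuse is irreversible, and the controller checks it before accepting any write, so post-factory firmware updates are rejected unconditionally.

```python
class FlashController:
    """Hypothetical flash controller with a one-time write-protect fuse."""
    def __init__(self):
        self.firmware = b"factory-image-v1"
        self.write_protect_fuse = False  # fuse intact: writes still allowed

    def burn_fuse(self):
        # Irreversible by construction: there is no API to clear the fuse.
        self.write_protect_fuse = True

    def flash(self, image: bytes) -> bool:
        if self.write_protect_fuse:
            return False  # all further writes are rejected
        self.firmware = image
        return True


ctl = FlashController()
ctl.burn_fuse()                   # done once, at the factory
ok = ctl.flash(b"malicious-image")
assert not ok and ctl.firmware == b"factory-image-v1"
```

The trade-off is obvious: a burned fuse also blocks legitimate security patches to that firmware, which is part of why vendors often prefer signed-update schemes instead.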
This is not the case.

I'm pretty sure the Apple dev who was tasked with securing the older hardware "tally lamps" is on HN somewhere -- I seem to remember him posting about it. (Is it you?)

I used to know a guy, about 15 years ago, who made his money exclusively through buying up laptops and hacking the tally-lamp code (to stop it activating) one by one, then selling the code directly to 3LAs. It was really good money.

No, I was not in the industry at the time :)
I am well aware of that work, and the result is not more secure than a power line turning the LED on.