Andrew Zonenberg


Security and open source at the hardware/software interface. Embedded sec @ IOActive. Lead dev of ngscopeclient/libscopehal. GHz probe designer. Open source networking hardware. "So others may live"

Toots searchable on tootfinder.

ngscopeclient: https://www.ngscopeclient.org/
Blog: https://serd.es
Location: Seattle area
GitHub: https://github.com/azonenberg
if the app says "I'm OK to kill and restart for PM" that's fine. but it should be a capability you advertise, that the OS doesn't try to use if you aren't able to handle it
(is this something GrapheneOS or any of the other more user-respecting Android forks / mobile platforms fix? When I launch an executable I want it to stay running until I tell it to stop, or it segfaults due to a bug, or I run out of RAM or something. But barring exceptional circumstances it should run forever)

Cranky again about how Android randomly reserves the right to kill applications for power management or its own inscrutable reasons, even if you have power management settings for the app set to "unrestricted".

So every time I open Firefox it's a 50/50 shot whether my incognito tabs from my last browsing session (my default browsing mode, to minimize leaving residue in history etc) are there or not.

On a desktop OS, apps randomly being terminated for no reason would be a sev1 issue. On mobile, it's just expected.

I just had the vision of a forest of octrees.

Like, a 1980s looking low poly wireframe or flat shaded wilderness, but with a small octree on top of each tree trunk.

Then call out to the existing CPU based logic (for now) to go from a set of bytes + timestamps to Ethernet waveform segments and packets (that last bit is impossible to GPU until/unless I change the data model for packets. Which probably does need to happen but is going to take some refactoring)

Ok yeah, it's definitely going to happen and I have a plan.

After some data shuffling that currently happens on the CPU but will probably move to GPU long term, I'll run one GPU thread per detected packet (packet start search already happens on GPU) and decode the rest of the packet out to timestamps and data bytes.
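To illustrate the one-thread-per-packet idea, here's a minimal CPU-side sketch of the pattern. This is hypothetical code, not libscopehal's actual API: the sample format, the 8-samples-per-byte toy framing, and all the names (`DecodedByte`, `DecodePacket`, `DecodeAll`) are invented for illustration. On the GPU, each iteration of the outer loop would map to an independent compute-shader invocation indexed by detected packet start.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// One decoded byte with the sample index where it started
struct DecodedByte
{
    int64_t timestamp;
    uint8_t value;
};

// Decode a toy "packet": 8 one-bit samples per byte, MSB first,
// beginning at startIndex. A real 100baseT1 decode (4B3B/PAM3,
// descrambling, etc.) is far more involved.
std::vector<DecodedByte> DecodePacket(
    const std::vector<uint8_t>& samples, size_t startIndex, size_t numBytes)
{
    std::vector<DecodedByte> out;
    for(size_t b = 0; b < numBytes; b++)
    {
        size_t base = startIndex + b*8;
        if(base + 8 > samples.size())
            break;  // packet runs off the end of the capture

        uint8_t value = 0;
        for(size_t bit = 0; bit < 8; bit++)
            value = (value << 1) | (samples[base + bit] & 1);

        out.push_back({ (int64_t)base, value });
    }
    return out;
}

// Serial on the CPU; on the GPU each detected start gets its own thread,
// since packets decode independently once their starts are known.
std::vector<std::vector<DecodedByte>> DecodeAll(
    const std::vector<uint8_t>& samples,
    const std::vector<size_t>& starts,
    size_t numBytes)
{
    std::vector<std::vector<DecodedByte>> packets(starts.size());
    for(size_t i = 0; i < starts.size(); i++)
        packets[i] = DecodePacket(samples, starts[i], numBytes);
    return packets;
}
```

The key property making this GPU-friendly is that after the (already-GPU) packet start search, each packet's decode touches only its own span of samples, so there's no cross-thread communication.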

I still wanna know what Neil Concelman did.

Like, I get that BNC isn't the best RF connector - it's big, bandwidth isn't great, they can be kinda floppy. But naming the connector after your desire to bayonet him seems a bit extreme.

Thinking about moving more of the 100baseT1 decode pipeline to the GPU. This is definitely going to end up being one of the more heavily end to end accelerated protocol decodes in the library, at least for now

It's... bigger than I thought it would be.

About time I had one of these. Quite the upgrade from my old clicky thing that doesn't go low enough for small screws on PCBs etc

Another successful session! Finished all of the remaining challenges except #14, the high aspect ratio mill to reconnect a missing dogbone under the BGA.

The highlight of today's session was challenge 16, a via going to the wrong layer under a BGA requiring a high aspect ratio mill, some tricky vertical wiring, and soldering to a via at the bottom of a deep mill cavity