Protip:

When designing a user interface, imagine some old woman using it, say Margaret Hamilton, and she's clicking your app's buttons and saying to you, as old people do,

"Young whippersnapper, when I was your age, I sent 24 people to the ACTUAL MOON with my software in 4K of RAM and here I am clicking your button and it takes ten seconds to load a 50 megabyte video ad and then it crashes

I'm not even ANGRY with you, I'm just disappointed."

meanwhile the Gemini Guidance Computer team laugh

"you MIT people had 4K of RAM, we had 39 whole bits AND WE WERE GRATEFUL"

https://en.wikipedia.org/wiki/Gemini_Guidance_Computer

ah, actually they did have 4096... 39-bit words of writable core RAM. Weird. Was the Gemini computer *bigger* than the Apollo one ????

http://www.ibiblio.org/apollo/Gemini.html

The Apollo LVDC is the third computer on the ship that never gets any love cos it just ran the engines and wasn't sexy

http://www.ibiblio.org/apollo/LVDC.html

<< and the MIT Instrumentation Labs' antibodies flooded in to destroy the invader with critiques and reports negative of the IBM report. >>

lol programmers then just like today

Ah! The LVDC had no ROM at all! Good lord. The entire program sat in RAM. Aaaaaaaaaaaa

<< A so-called "bugger word" has been stuck at the end of each bank—no comments on this terminology, please, since I didn't invent it; when I asked Don Eyles some question that involved them, he somewhat-laconically stated "we called them check sums">>
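
(For the curious: a bugger word is just a checksum chosen so the bank sums to a known value. A minimal sketch with made-up bank contents — the real AGC scheme used ones'-complement sums targeting the bank number, which is simplified away here:)

```python
# Sketch of a "bugger word": choose the last word of a memory bank so the
# whole bank sums to a known target value mod 2**15.
# (Simplified; not the real AGC ones'-complement scheme.)

WORD_MASK = 0x7FFF  # 15-bit words, as on the AGC


def bugger_word(bank, target=0):
    """Word to append so that the sealed bank sums to `target` mod 2**15."""
    return (target - sum(bank)) & WORD_MASK


def bank_ok(sealed_bank, target=0):
    """Check a bank that already has its bugger word appended."""
    return sum(sealed_bank) & WORD_MASK == target


bank = [0o30000, 0o04017, 0o12345]   # made-up contents
sealed = bank + [bugger_word(bank)]
print(bank_ok(sealed))               # True; flip any bit and it fails
```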

http://www.ibiblio.org/apollo/index.html

Huh, and if you have ROM and RAM I guess it literally is a Harvard Architecture

I never thought of that before!

http://www.ibiblio.org/apollo/BlockIII.html

@natecull Not necessarily, or even typically. It is quite common to have both code and read-only data in ROM while using RAM for read/write data. RAM can also be used to hold code loaded from some type of storage that doesn't permit direct execution.
@natecull Magnetic cores are non-volatile. So no ROM needed. Von Neumann all the way.
@natecull I’m going to use “bugger word” instead of checksum from now on 😁
@natecull please tell me they at least had a bunch of toggle switches somewhere in case it got wiped so houston could read it back to them and they could program it back in
@natecull innovative Over-The-Air Update feature

@jk I think if the LVDC failed you had bigger problems since it literally only ran the launch stage and that either got jettisoned or exploded in the first few minutes

but they could patch it right up til launch time, yes

@jk @natecull
"Uhh, Houston, was that a Zero or a One just then? Yeah, radio cut out again..."
@natecull "Houston, we accidentally rebooted the LVDC. How do we start the engine program?"
@natecull I think in most of these cases, "RAM" was actually magnetic core memory? which is non-volatile, so it doesn't lose data when unpowered.
@kepstin @natecull
Correct.
IIRC this is a (bad) picture of the Gemini Guidance Computer's memory that I took some time ago.
@natecull Definitely don't turn this one off and on again 😅
@natecull I have fond memories of a system that I created a boot disk for which rewrote the operating system to all things to do with bunnies. To RUN something you had to HOP it, etc. I would boot the computer with my disk and then watch my friend try to figure out what I’d done. We had many laughs at that.
@natecull Nothing like mission-critical instructions sitting in volatile memory
@natecull I was a co-op in the Astrionics lab. The LVDC guys were down the hall. The LVDC did more guidance and nav than engine control.
Those guys bragged that they hit their orbits within 50 feet on every Apollo flight.
@natecull and what about the speed? Clock frequency of SEVEN HERTZ!!!
@natecull Well, 39 * 4096, if I'm reading that correctly. :P
@natecull 4096 words of 39 bits apiece, it looks like?
@natecull Well, people don't optimize for resources anymore
@Zulgrib @natecull Human time is more valuable than computer time in most circumstances. Or at least, that's the way the incentives lean. :/
@natecull @Angle But what if human time is wasted waiting for results from unoptimized code?
@Zulgrib @natecull *Shrugs* As long as it's someone else's time. :P
@Zulgrib @natecull @Angle I think the usual reasoning is: "Get a better computer then."
@Zulgrib A real shame. Imagine what a current high-end computer would be capable of doing if everything was perfectly optimized straight down to the kernel. You could probably run the personal computing needs of a small city on a single machine. @natecull
@Natanox @natecull
Don't know if that much could be accomplished, but surely more than what we do now
@Zulgrib @Natanox @natecull I’m immediately concerned about privacy in this hypothetical community processing unit. When we look at multi-tenant (particularly multi-level security) computer systems, we find that there are a lot of concerns. For MLS, you need to disable simultaneous multithreading (SMT) to minimize risk of data leakage between threads that may not be in the same process.
@Natanox @natecull @Zulgrib I know the point was that we’re just amazingly wasteful with resources, and that’s certainly true. The phone I’m typing this on is faster than the first several of my computers.
@Natanox @Zulgrib @natecull Now you can run the computing needs of a whole AdTech ecosystem without even noticing on one machine (except when hitting the button makes the download of that 50 MB ad crash it)
@Zulgrib @natecull this is what happens when people forget that a nanosecond is 30cm (https://americanhistory.si.edu/collections/search/object/nmah_692464)
Nanoseconds Associated with Grace Hopper

This bundle consists of about one hundred pieces of plastic-coated wire, each about 30 cm (11.8 in) long. Each piece of wire represents the distance an electrical signal travels in a nanosecond, one billionth of a second. Grace Murray Hopper (1906–1992), a mathematician who became a naval officer and computer scientist during World War II, started distributing these wire "nanoseconds" in the late 1960s in order to demonstrate how designing smaller components would produce faster computers.

National Museum of American History
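
(The arithmetic checks out, by the way — light covers just under 30 cm in a nanosecond:)

```python
# How far light travels in one nanosecond -- the length of Hopper's wires.
C = 299_792_458              # speed of light in vacuum, m/s
cm_per_ns = C * 1e-9 * 100   # metres per ns, converted to cm
print(round(cm_per_ns, 1))   # 30.0
```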

@Zulgrib @natecull

Most engineers are taught not to. It's not even a suggestion to avoid it; you're doing programming wrong if you show any signs of giving a whiffle about performance.

@agentultra @Zulgrib @natecull no, we're taught to be smart about optimizing

Otherwise most people will naively spend days to shave off microseconds (or even make performance worse) and ignore the real problems because they aren't measuring / thinking carefully about the algorithms and data structures they're using

(Which isn't to say people always do that either)

@natecull @Zulgrib Indeed, otherwise you wouldn't need octacore phones to run a fucking browser. It's ridiculous
@er1n this is both one of my biggest agony points in UI design and one reason I haven't released SotF :P I can lag it in some cases and it really bugs me because I can't figure out why, not even with flame graphs and profiler timelines
@ninjawedding i was actually thinking about your scrolling demo when i cc'd you :)
@er1n oh the scrolling thing, heh

yeah I was really (disproportionately?) happy when that worked the way I wanted it to. I mean it's just scrolling but being able to zoom through tens or hundreds of thousands of items at full framerate and bounded memory is still something I love
Netflix FlameScope – Netflix TechBlog – Medium

We’re excited to release FlameScope: a new performance visualization tool for analyzing variance, perturbations, single-threaded execution…

@ninjawedding @er1n flame graphs?

@LottieVixen @ninjawedding @er1n profiler output visualization tool, helps you see what functions and system calls your program is spending the most time on

this, or similar (like what erin linked):
http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html

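
(If you want to try it, the classic workflow with Brendan Gregg's FlameGraph scripts looks roughly like this — the binary name and script paths are illustrative:)

```shell
# Sample on-CPU stacks at 99 Hz while ./myprogram runs, then render an SVG.
# stackcollapse-perf.pl and flamegraph.pl come from the FlameGraph repo.
perf record -F 99 -g -- ./myprogram
perf script > out.perf
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl out.folded > flamegraph.svg
```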
@cascode @LottieVixen @ninjawedding the difference with what i posted is that it's very good for monitoring infrequent events
@cascode @LottieVixen @er1n yep, pretty much that (I got weird lag on this status 🤷 )

first image is an example of a flame graph from the QML profiler; you can see that the majority of total time was spent updating a binding (and, amusingly, there's no sub-operation inside it). you can then look at the timeline to see what that looks like over time, which is useful for analyzing things like "what's causing jank" etc and getting an idea of what that 51.7% of time spent means in terms of frame budget

FlameScope does look cool, I just don't know how to integrate it yet :P
@ninjawedding it has perf(1) support 👀
@er1n oh huh

that could make it really easy then 🤔

actually I think you told me about this before lol

@ninjawedding @cascode @er1n

oh heck this looks awesome, also erm....the delay may be toot.cat fed issues....*sigh*

@LottieVixen @cascode @er1n yeah the tools are great

though tbh similar tools have been in web browsers for quite a while too :)

actually I'd say that browsers have probably some of the best profiling tools out there right now? next to maybe like Instruments, bespoke profiling tools in game engines, and Telemetry, which I have never used but have seen and it looks sick
@ninjawedding @LottieVixen @cascode telemetry is theoretically amazing but it's also probably unfathomably expensive, like so expensive that they just say "contact us if you're interested"
@er1n @LottieVixen @cascode heh yeah -- when I first saw Telemetry in use I was like "oh how much could this be? certainly no worse than CLion"

lol me
@ninjawedding @er1n @cascode I mean..... my debugging so far is print statements.
@er1n @ninjawedding @cascode I would have no idea how to use it.
@LottieVixen @er1n @cascode I think you would do fine -- Telemetry's data comes from hooks you write into the code, so they work a lot like print statements

(or so I hear)