we gave in to the urge to start writing a text editor https://code.irenes.space/ivy
(it doesn't edit anything yet)
smol library, which seems quite nice. pleasingly, there isn't some big war between the authors of different rust async runtimes; rather, roughly the same group of authors wrote first tokio, then async-std, then most recently smol. this last one refactors the whole thing into a bunch of tiny, loosely-coupled libraries; smol itself is just a shorthand to import a few of those libraries at once. so that's pretty neat.

we found and fixed a bug in our function that iterates through the file and keeps track of byte offsets to each line. it wasn't properly handling empty lines.
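for the curious, the shape of that offset-tracking function is roughly like this (a minimal sketch under our own assumptions, not ivy's actual code; the function name is made up):

```rust
/// Byte offset of the start of each line in `buf`. Every '\n' starts
/// a new line at the very next byte, even when that line is empty --
/// which is exactly the case an off-by-one bug tends to mishandle.
fn line_offsets(buf: &[u8]) -> Vec<usize> {
    let mut offsets = vec![0]; // the first line always starts at 0
    for (i, &b) in buf.iter().enumerate() {
        if b == b'\n' {
            offsets.push(i + 1);
        }
    }
    offsets
}

fn main() {
    // three lines "a", "", "b": the empty middle line still gets an
    // offset, and the trailing newline opens a (so far empty) line at 5
    assert_eq!(line_offsets(b"a\n\nb\n"), vec![0, 2, 3, 5]);
}
```

it's one O(n) pass over the bytes, so it only needs to run when the file is loaded or edited.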
... see, finding a bug like that feels like progress to us, because it demonstrates that the abstraction is doing the things we think it is, and when it failed, only a minor tweak was needed
like it makes us more confident of the approach than we were before
(we are way over-read on text editor implementation strategies, like in our body's early 20s our system read dozens of papers about it, so it's not like we really need more confidence, but hey)
we definitely want to eventually support files larger than can fit in memory (so, like, in the hundreds of gigs)
not soon, but eventually
though it may be easier to get that into the architecture early on, rather than retrofitting it... hm. well, we'll chew on that
not gonna lie, the continued progress seen at https://code.irenes.space/ivy/log/ feels really good
when we were young, jumping into a new project for a day or a week used to be really easy. we can do it for work just fine, but in recent years we've really struggled to channel intrinsic motivation for this sort of thing long enough to actually get anywhere
... which is fine; our habits are kind of time-oblivious, in the sense that we have a lot of dissociative memory stuff going on so we manage our tasks in ways that make forward progress regardless. there are projects we've finished in bursts of a couple hours every few months
but it's really nice to be properly deep in something
yay it can scroll through a file now
still doesn't do any actual editing, but it's starting to look quite solid as far as the viewing goes
we paid really close attention to what gets redrawn, and when. some of you may remember that conversation the other week about how terminal programs used to work well with screen readers, because there was a natural efficiency incentive to only redraw things that were actively changing, and then everyone stopped paying attention to that.
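the core of that redraw discipline can be sketched in a few lines (hypothetical, not ivy's actual code): diff the new frame against what's already on screen, and only touch the lines that differ.

```rust
/// Minimal damage-tracking sketch: given the previously drawn screen
/// lines and the desired next frame, return the indices of lines that
/// actually need to be redrawn. Everything else stays untouched, which
/// is what keeps both terminals and screen readers quiet.
fn changed_lines(prev: &[String], next: &[String]) -> Vec<usize> {
    (0..next.len())
        .filter(|&i| prev.get(i).map(String::as_str) != Some(next[i].as_str()))
        .collect()
}

fn main() {
    let prev = vec!["line one".to_string(), "line two".to_string()];
    let next = vec!["line one".to_string(), "line 2!".to_string()];
    // only the second line differs, so only index 1 gets redrawn
    assert_eq!(changed_lines(&prev, &next), vec![1]);
}
```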
when our thing is more mature we're for sure planning to test how it feels out loud.
we may try to do the really fiddly thing, decoding terminal control sequences from a stream of input that is itself decoded characters having various encodings.
last time we stalled out on that, but we think smol's facility for Streams as the async equivalent to Iterators may be just what we need for it
it still feels absurd not being able to use BufReader (in any of its many versions, from many implementors)
notionally it solves a problem we have, but in point of fact it does not, because a POSIX stream isn't just a list of bytes: the bytes have behavior over time, and sometimes that matters
@ireneista The things I find myself frequently wanting are "un-read these bytes" (maybe that wouldn't be necessary if I learned how to write async code instead of using a select()-loop) and "wait until the next byte or there's a timeout" (also maybe less necessary with an async API?).
The latter feels like a general-purpose thing to do? I have vague memories of writing double-click/long-press detection and wishing for "send an event after a synthetic timeout" instead of needing to manually schedule/unschedule a timer (and worrying about edge cases).
@ireneista I was thinking "async means I can keep the un-consumed bytes in a local variable instead of needing to use an instance variable" though I guess un-reading may be easier.
But if you can only un-read a single byte (ungetc doesn't guarantee any more than that) that also means you have to read a byte at a time, which seems inefficient?
I think what I really want is a way to read with MSG_PEEK (does that work on pipes?) and then tell the kernel how many bytes were consumed. That'd allow tricks like parsing a HTTP CONNECT, only consuming the "header" bytes, and passing the socket to another process, without necessitating reading 2-4 bytes at a time to make sure you don't overshoot the end-of-header.
@snowfox oh, yes you totally can use locals for that, we were just trying to avoid it for reasons that probably don't apply to you
don't worry about ungetc unless you're working in C, it's not a kernel facility, libc just has a byte of RAM in your address space that it manages, you can do the same thing yourself
unsure about MSG_PEEK. sounds slick if it works.
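for what it's worth, the multi-byte "un-read" operation is easy to build yourself, in the same spirit as ungetc's byte of RAM; here's a hypothetical sketch as a std::io::Read wrapper (the names are made up, this isn't from any real library):

```rust
use std::io::{self, Read};

/// A reader wrapper with a push-back buffer: like ungetc, but any
/// number of bytes. Sketch only; error handling is minimal.
struct Unread<R> {
    inner: R,
    pending: Vec<u8>, // un-read bytes, delivered front-first
}

impl<R: Read> Unread<R> {
    fn new(inner: R) -> Self {
        Self { inner, pending: Vec::new() }
    }

    /// Push bytes back; the next read returns them before touching
    /// the inner stream.
    fn unread(&mut self, bytes: &[u8]) {
        // prepend, so the earliest un-read bytes come out first
        let mut v = bytes.to_vec();
        v.extend_from_slice(&self.pending);
        self.pending = v;
    }
}

impl<R: Read> Read for Unread<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if !self.pending.is_empty() {
            let n = buf.len().min(self.pending.len());
            buf[..n].copy_from_slice(&self.pending[..n]);
            self.pending.drain(..n);
            return Ok(n);
        }
        self.inner.read(buf)
    }
}

fn main() -> io::Result<()> {
    let mut r = Unread::new(&b"hello world"[..]);
    let mut first = [0u8; 5];
    r.read_exact(&mut first)?;
    assert_eq!(&first, b"hello");
    r.unread(&first); // changed our mind: put "hello" back
    let mut all = Vec::new();
    r.read_to_end(&mut all)?;
    assert_eq!(all, b"hello world");
    Ok(())
}
```

because it's kernel-oblivious the same trick works on pipes, sockets, anything, which is more than MSG_PEEK promises.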
@ireneista That was also true for a while in the 32-bit days! Hard disks were below 2 GB, and on some platforms max file size was 2 GB, and even after that wasn't true, surely a *text* file will never have a reason to be over 2 GB, right?
I'm not sure if any text editors did "just mmap() the whole file" though (I assume classic Mac OS can't support it at all if you don't enable virtual memory; I'm not sure about Windows 98).
@ireneista My first compiler had a bunch of different memory models you could choose because the target was 8086/286...
...having 16-bit code and data pointers residing in separate segments had its uses
@ireneista how will you represent files in-memory? I think strings usually assume some valid encoding, and text files are crazy.
(In a former life I was the guy ppl came to with “we don’t know how to read this file, pls fix” and I’d find out parts of it were in some old Russian encoding and convert the lot to utf8)
@rudi oh, bytes, as far as that goes. we've been writing our own encoding-handling code because we care about not losing valid bytes during error recovery, and passing through invalid bytes unmodified, and stuff like that.
but that's not even the hard part, the hard part is that it's an editor and vectors are overly confining for that use.
@ireneista Arrrghh.
I once had to work on a system which thought it could keep byte offsets to each line. It got into something of a mess with things like invalid character encodings, where a line couldn't be parsed from bytes to characters, but you still needed to know where the end of line was so that you could process the *next* line, which *probably* wasn't mangled in the same way.
@ireneista 😀 👍
I think the problem we had might have been to do with incompatible error handling between two of the libraries we were using - the input was supposed to be CSV, which was another layer of libraries and complications (we weren't going to attempt to write our own bytes -> characters parsing).
@TimWardCam ah yep we can definitely see how that would make it significantly harder
we are doing our own bytes-to-characters stuff; we kind of feel like it wouldn't be a robust editor otherwise. that's more work, of course, but at least we get to be precise about how the weird cases are handled.
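to illustrate the kind of precision we mean (a sketch, not ivy's actual decoder; the names are ours): std's Utf8Error reports both how much of the input was valid and how long the invalid sequence is, so you can carry the bad bytes along intact instead of replacing them with U+FFFD and losing data.

```rust
/// Split a byte slice into alternating valid-UTF-8 and raw-invalid
/// chunks, losing nothing. Invalid bytes pass through unmodified.
#[derive(Debug, PartialEq)]
enum Chunk<'a> {
    Text(&'a str),
    Invalid(&'a [u8]),
}

fn decode_lossless<'a>(mut bytes: &'a [u8]) -> Vec<Chunk<'a>> {
    let mut out = Vec::new();
    while !bytes.is_empty() {
        match std::str::from_utf8(bytes) {
            Ok(s) => {
                out.push(Chunk::Text(s));
                break;
            }
            Err(e) => {
                let valid = e.valid_up_to();
                if valid > 0 {
                    out.push(Chunk::Text(
                        std::str::from_utf8(&bytes[..valid]).unwrap(),
                    ));
                }
                // error_len() is None when the input ends mid-sequence;
                // either way, keep the offending bytes as-is
                let bad = e.error_len().unwrap_or(bytes.len() - valid);
                out.push(Chunk::Invalid(&bytes[valid..valid + bad]));
                bytes = &bytes[valid + bad..];
            }
        }
    }
    out
}

fn main() {
    let bad: &[u8] = &[0xFF];
    assert_eq!(
        decode_lossless(b"ok\xFFx"),
        vec![Chunk::Text("ok"), Chunk::Invalid(bad), Chunk::Text("x")]
    );
}
```

re-serializing is then just concatenating the chunks back out, byte for byte.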