From the Lua mailing list:

"Cult.Repo makes documentaries about Open Source Software.
They are currently making one about Lua.
They have interviewed the Lua team in Rio.

See the teaser at
https://www.youtube.com/watch?v=U0HjwLOJpNg

Now they need support to interview people from the game industry in the US.

If you can help or know someone who can, please get in touch with
Felipe Melo "

#lualang
Another #Lua observation: I had gotten in the habit of using the #Python #JSON library for storing and reading small amounts of data. I'm quite set on switching over to #LuaLang wherever I used Python before and it's so nice to just write my database files in the same language as the code itself.

I'm aware I could write my data as a Python script and import it, but this felt less natural than the analogous construction in Lua. This is probably because Lua descends from SOL, a language for describing data.

All the little conveniences, like not having to put quotation marks around string keys and being able to assume an unassigned key is nil, are really pleasing me, too.
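To sketch what I mean (the file name and fields here are made-up examples; the chunk is inlined as a string so the snippet is self-contained, but on disk you'd just `dofile("inventory.lua")` — needs Lua 5.2+ for string `load`):

```lua
-- A "database" file is just a Lua chunk that returns a table.
local chunk = [[
return {
  title = "inventory",            -- no quotes needed on string keys
  items = {
    { name = "rope",  qty = 2 },
    { name = "torch", qty = 5 },
  },
}
]]

local data = assert(load(chunk))()
print(data.title)         --> inventory
print(data.items[2].qty)  --> 5
print(data.comment)       --> nil (never assigned, so it just reads as nil)
```

No parser, no schema: the same language that reads the data also wrote it.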

#software #scripting #database

It's a bit unfortunate that #Lua is a much more common word in Portuguese than #Python is in English. I see that there is the #LuaLang hashtag, and I'm sure it gets more use than #PythonLang (relative to the popularity of each programming language).

#programming #language #Mastodon

I am trying to subscribe to the Lua mailing list. However, they host it on the dreaded Google Groups. Does Google Groups not allow non-Google email subscriptions any more?

"There was a problem delivering your message to
[email protected]. See the technical details below.
The response was:
Unable to subscribe to group"

#lua #lualang

@nazokiyoubinbou That's an interesting thought! Earlier when I crammed #LuaLang onto this machine, I used UPX to compress LUA.EXE, and while it did save space, the decompression time at startup got very tiresome... taking 10-15 seconds to launch the REPL instead of ~3 seconds. So I abandoned UPX for that project and just ate the disk space cost. With a CPU this slow, trading off CPU for disk space is not necessarily a win.

I wonder if the startup time penalty would be less painful for these smaller binaries, or if it would feel less onerous for a compiler than an interpreter.

psf (@[email protected])

After adding the missing size check, #LuaLang behavior is much more benign. A 64K segment size limit on table sizes isn't ideal, but it beats a hard crash, and it's a stable jumping-off point for further modifications. #retrocomputing #msdos #i8086 #v20

DMK Press, with support from MyOffice, has reissued the Russian edition of Roberto Ierusalimschy's book "Programming in Lua".

https://dmkpress.com/content/authors/8668961/

#LuaLang

I don't have a strong opinion on whether you should use braces:
fn my_func() { … }
or keywords:
fn myfunc() … end
for delimiting blocks in your programming language, but I have a very strong opinion that if you use braces for blocks then you should not use braces for anything else.

Why? Because block scope ought to be easy to differentiate from everything else. This is especially important when passing anonymous functions around.

Gleam and Lua are languages that get this right!
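A quick illustration in stock Lua of why keyword-delimited blocks help here:

```lua
-- The function body is bracketed by function ... end, so it can't be
-- confused with the table constructor's braces on the same line.
local words = { "pear", "fig", "apple" }
table.sort(words, function(a, b)
  return #a < #b   -- sort by length
end)
print(table.concat(words, ", "))  --> fig, pear, apple
```

If blocks also used braces, the callback's closing delimiter and the argument list's punctuation would all blur together.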

#programming #programminglanguages #gleam #gleamlang #lua #lualang

The best example of distilled software that comes to mind is Project #Oberon, which was distilled by Niklaus Wirth (and others) for most of Wirth's lifetime (if you count his earlier time working on Pascal and Modula as earlier steps of the distillation). https://projectoberon.net/

There are also #Forth and #Lisp, of course, but they've been distilled in many different directions by many people so there isn't a clear unifying idea. You have to get more specific. Now, Chuck Moore's evolution of Forth -> MachineForth -> ColorForth certainly counts as distillation.

#LuaLang also comes to mind. Porting the most modern Lua to the 188K TI-92+ calculator (last year) is what sold me on the idea that widely used modern software can remain useful on the oldest computers. That said, Lua is not entirely immune to bloat: I had to roll back from v5.4 to v5.2 to cut my memory usage from ~170K to ~128K 😉

While I was working on this, the article Python Numbers Every Programmer Should Know appeared on the orange website. In #LuaLang, and on a 16-bit target, these overheads are smaller -- for example, a number weighs 10 bytes instead of 24 -- but overheads don't have many places to hide on a small, slow machine.

(Btw numbers cost 7 bytes each in 8-bit Microsoft BASIC so Lua isn't gratuitously inefficient here, even by the standards of 50 years ago.)
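If you want to reproduce that kind of per-value figure on your own build, here's a rough measurement sketch using the garbage collector's heap counter (results vary by Lua version, build, and platform; on a stock 64-bit desktop Lua, expect a different number than the 10 bytes I see on the 16-bit target):

```lua
-- Estimate bytes per table-array slot by measuring heap growth.
collectgarbage("collect")
local before = collectgarbage("count")       -- heap size in KB
local t = {}
for i = 1, 100000 do t[i] = i + 0.5 end      -- non-integer values
collectgarbage("collect")
local after = collectgarbage("count")
print(("~%.1f bytes per slot"):format((after - before) * 1024 / 100000))
```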

One place that makes overhead really obvious: a 64K segment holds a table array of at most 4,096 entries. That's 40,960 bytes, and Lua's strategy is to double the allocation every time it wants to grow the table. 2 x 40,960 exceeds a 64K segment, so 4,096 entries is the growth limit.
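The arithmetic can be sketched in Lua itself (the 10-byte slot size is my measured figure from above; exact slot size depends on the build):

```lua
local SEGMENT = 64 * 1024   -- 16-bit segment size in bytes
local SLOT    = 10          -- assumed bytes per table-array slot

-- Lua grows the array part by doubling; stop once the next doubling
-- would no longer fit in a single segment.
local cap = 1
while cap * 2 * SLOT <= SEGMENT do
  cap = cap * 2
end
print(cap, cap * SLOT)  --> 4096    40960
```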

On a 640K machine, after deducting the ~250K (!) size of the interpreter (which is also fully loaded into RAM), you'll get maybe five full segments free if you're lucky. So that's like maybe 20,000 datums total, split across five tables.

Meanwhile a tiny-model #Forth / assembly / C program could handle 20,000 datums in a single segment without breaking too much of a sweat!

That efficiency costs programmer time, of course: worrying about data types, limits, overflows, etc. -- exactly the things I was hoping to avoid by using Lua on this hardware. And to its credit, Lua does a good job of insulating me from them. The cost is that programs must be rewritten for speed in some other language once they leave the rapid prototyping phase and reasonable speed / data capacity becomes important.

I'd estimate the threshold where traditional interpreters like Lua become okay for finished/polished software of any significant scope, is somewhere around 2MB RAM / 16MHz. So think, like, a base model 386. Maybe this is why the bulk of interpreters available in DOS are via DJGPP which requires a 386 or better anyway.

#BASIC was of course used on much smaller hardware, but was famously unsuited to speed or to large programs / data.

I know success stories for #Lisp in kilobytes of memory, but I'm not quite sure how they do it / to what extent the size of the interpreter, and overhead of data representation (tags + cons representation), eats into available memory and limits the scope of the program, as seen with other traditional interpreters.

This is beginning to explain why #Forth has such a niche on small systems. It has damn near zero size overhead on data structures: the only overhead is the interpreter core (a few K) and the string names stored in the dictionary, which can be eliminated via various tricks. ~1x size and ~10x speed overhead is the bargain of the century to unlock #repl based development. However, you're still stuck with the agonizing pain of manual memory management and numeric range problems / overflows -- which is probably why the world didn't stop with Forth, but continued on to bigger interpreters.

#retrocomputing

By 100x speed difference, I mean the uu encoding/decoding rate is about 30 bytes per second. I'm not accustomed to a correct program being this catastrophically slow ;)

Not throwing shade at #LuaLang for the 100x speed difference: it's astonishing that a modern interpreter can be built for a 4.77 MHz 8088 and run at usable, if lukewarm, speeds. The 100x size difference comes down to the interpreter including Lua's full library, most of which isn't needed for all programs.

If I had to guess, I'd expect most of the time to be spent in string operations and syscalls. Lua translates file contents into (immutable) strings when reading, so extra conversions are necessary to transform the data and output the results. Moreover, when writing output the program calls f:write() 3-4 bytes at a time: if that's unbuffered and translates directly into hundreds of write syscalls, it would also be very slow.
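If that guess is right, batching the output would be the cheap fix. A minimal sketch of the usual table.concat buffering idiom (using io.write for a self-contained example; the same applies to f:write, and the two-digit "groups" here are stand-ins for encoded output):

```lua
-- Instead of one write call per 3-4 byte group, accumulate the pieces
-- in a table and flush them with a single write.
local out = {}
for i = 1, 5 do
  out[#out + 1] = ("%02d "):format(i)  -- stand-in for each encoded group
end
io.write(table.concat(out), "\n")      -- one write instead of five
```

Appending to a table and concatenating once also avoids building O(n^2) intermediate strings, which matters when each string operation is this expensive.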

#retrocomputing