@screwlisp @kentpitman @cdegroot @ramin_hal9001 @dougmerritt
5? maybe for mark&sweep
but I can't see how more than 2 would ever be necessary for a copying GC. Once you have enough space to copy everything *to* (on the off-chance that absolutely everything actually *needs* to be copied), you're basically done...
... and if you're following the usual pattern where 90% of what you create becomes garbage almost immediately, you can get by with far less.
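The two-semispace idea can be sketched as a toy (a Cheney-style walk; all names here are my own illustration, not any real collector): live cells are copied from from-space to to-space, a forwarding table handles sharing and cycles, and afterwards the two spaces simply swap roles.

```python
def copy_collect(from_space, roots):
    """Toy Cheney-style copy: everything reachable from roots moves to a
    fresh to-space. Cells are dicts with 'val' plus 'car'/'cdr' indices
    (or None)."""
    to_space = []
    forward = {}  # from-space index -> to-space index (forwarding pointers)

    def copy(i):
        if i is None:
            return None
        if i in forward:          # already copied: follow the forwarding pointer
            return forward[i]
        forward[i] = len(to_space)
        to_space.append(dict(from_space[i]))  # shallow copy of the cell
        return forward[i]

    new_roots = [copy(r) for r in roots]
    scan = 0                      # scan pointer chases the allocation pointer
    while scan < len(to_space):
        cell = to_space[scan]
        cell['car'] = copy(cell['car'])
        cell['cdr'] = copy(cell['cdr'])
        scan += 1
    return to_space, new_roots
```

Since everything live fits in to-space by construction, one extra semispace is all the headroom a copying collector ever needs.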
@wrog Haskell was first invented in 1990 or 91ish, and at that time they had already started to ask questions like, “what if we just ban set! entirely,” abolish mutable variables, and make everything lazily evaluated by default. If you have been programming in C/C++ for a while, the idea that abolishing mutable variables could lead to a performance increase seems very counter-intuitive.
But it pays off for all the reasons you mentioned about not forcing a search for updated pointers in old-generation GC heaps, and also because it forces the programmer to write source code that is essentially already in Static Single Assignment (SSA) form, which is nowadays an optimization pass that most compilers run prior to register allocation. This allows more aggressive optimization to be applied and results in more efficient code.
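To illustrate what SSA form means (a hypothetical toy, not Haskell's actual pipeline): the mutating version reuses one name for two different values, while the SSA-style version gives each value its own name, which is exactly what a single-assignment language hands the compiler for free.

```python
def with_mutation(a, b):
    x = a + b   # x holds one value here...
    x = x * 2   # ...and a different one here; the compiler must track both
    return x

def ssa_style(a, b):
    x0 = a + b  # in SSA form every name is assigned exactly once,
    x1 = x0 * 2 # so each name denotes a single value for its whole lifetime
    return x1
```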
@ramin_hal9001 @screwlisp @wrog @dougmerritt @cdegroot
The LispM did a nice thing (at some tremendous cost in hardware, I guess, but useful in the early days) by having various kinds of forwarding pointers for this. At least you knew you were going to incur overhead, though, and pricing it properly at least said there was a premium for not side-effecting and tended to cause people to not do it. And the copying GC could fix the problem eventually, so you didn't pay the price forever, though you did pay for having such specific hardware or for cycles in systems trying to emulate that which couldn't hide the overhead cost. I tend to prefer the pricing model over the prohibition model, but I see both sides of that.
If my memory is correct (so yduJ or wrog please fix me if I goof this): MOO, as a language, is in an interesting space in that actual objects are mutable but list structure is not. This reflects the observation that an actual object (what CL would call a standard class, though the uses are different in MOO because all of those objects are persistent and less likely to be allocated casually) is very unlikely to be garbage the GC would want to be involved in anyway.
I always say "good" or "bad" is true in a context. It's not true that a side effect is good or bad in the abstract; it's a property of how it engages the ecology of other operations and processes.
And, Ramin, the abolishing of mutable variables has other intangible expressional costs, so it's not a simple no-brainer. But yes, if people are locked into a mindset that says such changes couldn't improve performance, they'd be surprised. Ultimately, I prefer to design languages around how people want to express things, and I like occasionally doing mutation even if it's not common, so I like languages that allow it and don't mind if there's a bit of a penalty for it or if one says "don't do this a lot because it's not aesthetic or not efficient or whatever".
To make a really crude analogy, one has free speech in a society not to say the ordinary things one needs to say. Those things are favored speech regardless because people want a society where they can do ordinary things. Free speech is everything about preserving the right to say things that are not popular. So it is not accidental that there are controversies about it. But it's still nice to have it in those situations where you're outside of norms for reasonable reasons. :)
@kentpitman
> Ultimately, I prefer to design languages around how people want to express things, and I like occasionally doing mutation even if it's not common, so I like languages that allow it and don't mind if there's a bit of a penalty for it or if one says "don't do this a lot because it's not aesthetic or not efficient or whatever".
Me too -- although I remain open to possibilities. Usually such possibilities want me to switch paradigms, though, not just add to my toolbox.
“the abolishing of mutable variables has other intangible expressional costs, so it’s not a simple no-brainer.”
@kentpitman I prefer the term “constraint” to “expressional cost,” because constraints are the difference between a haiku and a long-form essay. For example, I am very curious what the code for the machine learning algorithm that trains an LLM would look like expressed as an APL program. I don’t know, but I get the sense it would be a very beautiful two or three lines of code, as opposed to the same algorithm expressed in C++ which would probably be a hundred or a thousand lines of code.
Not that I disagree with you, on the contrary, that is why I was convinced to switch to Scheme as a more expressive language than Haskell. I like the idea of starting with Scheme as the untyped lambda calculus, and then using it to define more rigorous forms of expression, working your way up to languages like ML or Haskell, as macro systems of Scheme.
I'm not 100% positive I understand your use of constraint here, but I think it is more substantive than that. If you want to use the metaphor you've chosen, a haiku reaches close to the theoretical minimum of what can be compressed into a statement, while a long-form essay does not. This metaphor is not perfect, though, and will lead you astray if looked at too closely, causing an excess focus on differential size, which is not actually the key issue to me.
I won't do it here, but as I've alluded to more than once I think on the LispyGopher show, I believe that it is possible to rigorously assign cost to the loss of expression between languages.
That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.
Put another way, I no longer believe in Turing Equivalence as a practical truth, even if it has theoretical basis.
And I am pretty sure the substantive loss can be expressed rigorously, if someone cared to do it, but because I'm not a formalist, I'm lazy about sketching how to do that in writing, though I think I did so verbally in one of those episodes.
It's in my queue to write about. For now I'll just rest on bold claims. :) Hey, it got Fermat quite a ways, right?
But also, I had a conversation with ChatGPT recently where I convinced it of my position and it says I should write it up... for whatever that's worth. :)
@kentpitman
> That is, that a transformation of expressional form is not, claims of Turing equivalence notwithstanding, cost-free both in terms of efficiency and in terms of expressional equivalence of the language. It has implications (positive or negative) any time you make such changes.
I hope everyone here is already clear that "expressiveness" is something that comes along on *top* of a language's Turing equivalence.
Indeed Turing Machines (and pure typed and untyped lambda calculus and SKI combinatory calculus and so on) are all *dreadful* in terms of expressiveness.
And for that matter, expressiveness can be on top of Turing incomplete languages. Like chess notation; people argue that the algebraic notation is more expressive than the old descriptive notation. (People used to argue in the other direction)
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
[..it's possible I'm missing the point, but I'm going to launch anyway...]
I believe trying to define/formalize "expressiveness" is roughly as doomed as trying to define/formalize "intelligence". w.r.t. the latter, there's been nearly a century of bashing on this since Church and Turing and we're still no further along than "we know it when we see it"
(and I STILL think that was Turing's intended point in proposing his Test, i.e., if you can fool a human into thinking it's intelligent, you're done; that this is the only real test we've ever had is a testament to how ill-defined the concept is...)
1/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
The point of Turing equivalence is that even though we have different forms for expressing algorithms and there are apparently vast differences in comprehensibility, they all inter-translate, so any differences in what can ultimately be achieved by the various forms of expression is an illusion. We have, thus far, only one notion of computability.
(which is not to say there can't be others out there, but nobody's found them yet)
2/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
I believe expressiveness is a cognition issue, i.e., having to do with how the human brain works and how we learn. If you train yourself to recognize certain kinds of patterns, then certain kinds of problems become easier to solve.
... and right there I've just summarized every mathematics, science, and programming curriculum on the planet.
What's "easy" depends on the patterns you've learned. The more patterns you know, the more problems you can solve. Every time you can express a set of patterns as sub-patterns of one big super-pattern small enough to keep in your head, that's a win.
I'm not actually sure there's anything more to "intelligence" than this.
3/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
I still remember trying to teach my dad about recursion.
He was a research chemist. At some point he needed to do some hairy statistical computations that were a bit too much for the programmable calculators he had in his lab. Warner-Lambert research had just gotten some IBM mainframe -- this was early 1970s, and so he decided to learn FORTRAN -- and he became one of their local power-users.
Roughly in the same time-frame, 11-year-old me found a DEC-10 manual one of my brothers had brought home from college. It did languages.
Part 1 was FORTRAN.
Part 2 was Basic.
But it was last section of the book that was the acid trip.
Part 3 was about Algol.
4/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
This was post-Algol-68, but evidently the DEC folks were not happy with Algol-68 (I found out later *nobody* was happy with Algol-68), so ... various footnotes about where they deviated from the spec; not that I had any reason to care at that point.
I encountered the recursive definition of factorial and I was like,
"That can't possibly work."
(the FORTRAN and Basic manuals were super clear about how each subprogram has its own dedicated storage; calling one while it was still active is every bit as much an error as dividing by zero. You're just doing it wrong...)
5/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
Then there was the section on call-by-name (the default parameter passing convention for Algol)
... including a half page on Jensen's Device, which, I should note, was presented COMPLETELY UN-IRONICALLY because this was still 1972,
as in, "Here's this neat trick that you'll want to know about."
And my reaction was, "WTFF, why???"
and also, "That can't possibly work, either."
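For readers who haven't met it: Jensen's Device exploits call-by-name to re-evaluate an argument expression at every reference. A rough simulation in Python (thunks standing in for Algol's by-name parameters; the names here are my own):

```python
def by_name_sum(set_i, lo, hi, term):
    """Algol-ish SUM(i, lo, hi, term): 'term' is re-evaluated for each i."""
    total = 0.0
    for i in range(lo, hi + 1):
        set_i(i)         # assign to the by-name index variable
        total += term()  # re-evaluate the by-name expression
    return total

class Cell:
    """Mutable cell standing in for the shared Algol variable i."""
    def __init__(self):
        self.i = 0

c = Cell()
# SUM(i, 1, 100, 1/i) -- the 100th harmonic number, via sharing of i
h100 = by_name_sum(lambda v: setattr(c, 'i', v), 1, 100, lambda: 1.0 / c.i)
```

The trick is that `term` closes over the same variable `set_i` assigns to, so one parameter silently depends on another -- which is exactly why it looks like it "can't possibly work."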
Not having any actual computers to play with yet, that was that for a while.
Some years later, I got to college and had my first actual programming course...
6/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
... in Pascal. And there I finally learned about and was able to get used to using recursion.
Although I'd say I didn't *really* get it until the following semester taking the assembler course and learning about *stacks*.
It was like recursion was sufficiently weird that I didn't really want to trust it until/unless I had a sense of what was actually happening under the hood,
And THEN it was cool.
7/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
To the point where, the following summer as an intern, I was needing to write a tree walk, and I wrote it in FORTRAN — because that's what was available at AT&T Basking Ridge (long story) — using fake recursion (local vars get dimensions as arrays, every call/return becomes a computed goto, you get the idea…) because I wanted to see if this *could* actually be done in FORTRAN, and it could, and it worked, and there was much rejoicing; I think my supervisor (who, to be fair, was not really a programmer) blue-screened on that one.
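The transformation wrog describes (local variables become arrays, call/return becomes explicit control flow) can be sketched, in Python rather than FORTRAN and purely illustratively, as an explicit-stack tree walk:

```python
def inorder_recursive(node, out):
    """Tree nodes are (value, left, right) tuples, or None."""
    if node is None:
        return
    value, left, right = node
    inorder_recursive(left, out)
    out.append(value)
    inorder_recursive(right, out)

def inorder_explicit(node, out):
    """Same walk with the call stack made explicit -- the 'fake recursion'
    trick: saved state lives in our own stack, not the language's."""
    stack = []
    while stack or node is not None:
        while node is not None:
            stack.append(node)   # "call": save state before descending left
            node = node[1]
        node = stack.pop()       # "return": restore the saved state
        out.append(node[0])
        node = node[2]           # then walk the right subtree
```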
And *then* I tried to explain it all to my dad...
8/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
You may say that untyped lambda calculus and SKI combinatory calculus and so on are all *dreadful* in terms of expressiveness, and I will probably agree,
... but it also seems to me that Barendregt got pretty good at it.
I'm also guessing TECO wouldn't have existed without there being people who managed to wrap their brains around it and found it to be expressive and concise. I myself never got there (also never really tried TBH),
... but at the same time, it's *still* the case that if I need to write a one-liner to do something, chances are, I'll be doing it in Perl, and I've heard people complain about *that* language being essentially write-only line-noise.
10/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
To be sure, my Perl tends to be more structured.
On the other hand, I also hate Moose (Perl's attempt at CLOS) and have thus far succeeded in keeping that out of my life.
I also remember there being a time in my life when I could read and understand APL.
But if you do think it's possible to come up with some kind of useful formal definition/criterion for "expressiveness", go for it.
I'll believe it when I see it.
11/11
@dougmerritt @kentpitman @ramin_hal9001 @screwlisp @cdegroot
... and, crap, I messed up the threading (it seems 9 and 10 are siblings, so you'll miss 9 if you're reading from here. 9 is kind of the point. Go back to 8.)
(I hate this UI. If anybody's written an emacs fediverse-protocol thing for doing long threaded posts please point me to it, otherwise it looks like I'm going to have to write one ...)
𝜔/11
@wrog
> (I hate this UI. If anybody's written an emacs fediverse-protocol thing for doing long threaded posts please point me to it, otherwise it looks like I'm going to have to write one ...)
There are *so* many programmers using variants of this UI that you would think someone would have addressed it by now.
But you never know, maybe not. Certainly everyone who does multi-posts seems to be struggling with doing it by hand, from my point of view, so that would seem to cry out for the need for some fancier textpost-splitting auto-sequence-number thingie, in emacs or command line or something.
Conceivably a web search would find the thing if it exists. I personally almost never do long posts, so I just grin and bear it when it comes up.
I think for protocol reasons it is necessary to try and connect up the thread using quote-posts, however any particular client understands that.
If you visit the topmost toot of the thread, you at least get the whole (cons) tree.
@dougmerritt @wrog @kentpitman @ramin_hal9001 @cdegroot
@screwlisp
Y'all are misunderstanding. Due to the error-prone nature of labelling a series of posts, from one way of viewing the thread he skipped post 9, and 8 linked to 10.
Another view showed simply the correct sequence.
Regardless, anyone who has written e.g. "3/n" on a post is already implicitly indicating a desire for automation.
@screwlisp
Well there you go. So wrog just needs to find a list of such clients to choose the most suitable one -- if any.
@dougmerritt @screwlisp @kentpitman @ramin_hal9001 @cdegroot
what I *currently* do is compose inside Emacs (the *only* non-painful alternative for long posts),
then manually decide how I'm going to break it up -- which actually has some literary content to it, because in some cases, you *do* want to arrange the breaks for maximal dramatic effect
(generalized How to Use Paragraphs)
Problem 1 being that emacs doesn't count characters the same way as mastodon does, and I don't find out until I've cut&pasted part n, which doesn't happen until I've already posted parts 1..n−1
Problem 2 being having to cut&paste in the first place when I should just be able to hit SEND (which then has to be from within emacs).
@dougmerritt @screwlisp @kentpitman @ramin_hal9001 @cdegroot
given that I once-upon-a-time wrote a MAPI client for the sake of being able to post to Microsoft Exchange forums in rich text using courier font, in theory, I should be able to do this.
... but that would mean I'd have to Learn Fediverse. crap.
hmm. Anyone have experience with
https://codeberg.org/martianh/mastodon.el
i.e., is this the best one, or is this just the Guy Who Grabbed the Name first and did the best SEO tweaking? (I hate that google search has gotten so enshittified)
(also, thanks, LazyWeb!)
@screwlisp @mousebot @dougmerritt @kentpitman @ramin_hal9001 @cdegroot
yay, actual experience, actual review.
thanks.
@wrog @cdegroot @ramin_hal9001 @kentpitman @dougmerritt @screwlisp unforch mastodon.el hasn't yet implemented chaining of new toots. if someone wants to add it though, by all means. (the issue has been raised before, but as usual no one was willing to get their hands dirty.)
@screwlisp
Seems like the universe is calling on you to fix it!
With some apologies to legends:
;; requires cl-lib (for cl-loop and cl-subseq)
(defun chained-toot (lim str)
  "Split STR into numbered pieces of at most LIM characters each."
  (let* ((space (- lim 8))  ; reserve 8 chars for the "\nNN/NN" suffix
         (len (length str))
         (span (max 1 (ceiling len space))))  ; two-arg ceiling rounds up
    (cl-loop
     for idx from 1 to span
     for start from 0 by space
     for end from space by space
     for piece = (cl-subseq str start (min end len))
     collect (format "%s\n%d/%d" piece idx span))))
@dougmerritt @mousebot @wrog @cdegroot @ramin_hal9001 @kentpitman
#elisp
@screwlisp
You forgot to change 'space' in a complex inscrutable way at each step.
@screwlisp @dougmerritt @mousebot @cdegroot @ramin_hal9001 @kentpitman
figuring out how to split up a toot is solving the wrong problem. In my case I *know* how I want to split it up.
what I want is the ability to create a sequence of posts, edit them all in place, shuffle text around + attach media and polls wherever I want, get them all looking right,
and then send them all in one fell swoop.
I think the key concept is being able to compose a reply to a draft.
i.e., In-Reply-To is a buffer rather than a URL
Posting the reply automatically posts the In-Reply-To **first**. And likewise for longer chains.
Make that work in a reasonable way, and everything else follows.
(I'm up to 5000 chars in my draft reply on codeberg...)
@cy
Presumably you're joking. But different ones of us suffer under different character limits. My server, Mathstodon.xyz, has a limit of 1729 characters -- but for most servers it's significantly less.
And some may be larger. Yours, perhaps. But that doesn't help others.
@screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman @wrog
@cy @screwlisp @mousebot @cdegroot @ramin_hal9001 @kentpitman @dougmerritt
It's Twitter Culture. We're all supposed to speak in sound bites. Dorsey or whoever decided if you can't fit it in 140 chars, it's not worth saying. Then at some point they doubled it and thought that was generous enough.
And now short posts are what people expect.
LJ never had a limit.
Hell, **Usenet** never had a limit and we were suffering under far worse resource constraints back then.
I miss Usenet.
@wrog @cy @screwlisp @mousebot @cdegroot @ramin_hal9001 @dougmerritt
I did not like it when Twitter extended from 140 to 280. But, unrelated to that, I'm pretty sure they made a decision that urls and @ references to people's handles should have a fixed small cost, so as not to bias things in favor of short-named people or xrefs. I think that was very important. I was surprised that BlueSky did not copy it.
@kentpitman
Things have mutated so much over the years that messages like yours, that harken back to the original 140 limit that was due to the actual SMS protocol being used in cell phones, bring me back to reality with a palpable start.
I don't think it had anything to do with SMS. Twitter was an internet service from the start and Dorsey's decision was a matter of taste/branding/marketing; the notion of a service that *only* allowed short posts was Something New.
Receiving a twitter feed as SMS texts on a cell phone would have been insane (and probably also expensive back then).
@wrog
> I don't think it had anything to do with SMS.
But you would be wrong. Don't mess with the bull, you'll get the horns. I was not only there, I worked in that space at that time.
(I did more than languages, compilers, and operating systems because I got bored periodically. I've also done OCR algorithms, to name another thing that doesn't seem to fit with the rest.)
> The idea was initially pitched as an “SMS for the web”,...
> Why 140 characters? The limit was inspired by SMS text messaging, which capped messages at 160 characters. Twitter reserved 20 characters for the username, leaving 140 for the message itself.
https://blog.easybie.com/twitters-origin-story-how-140-later-280-characters-changed-global-discourse/
So it was at *least* inspired by SMS. But more than that, it gatewayed to and from SMS, so it retained the SMS limit of necessity to continue gatewaying -- for a while.
https://en.wikipedia.org/wiki/X_(social_network)#Appearance_and_features
Wikipedia stops just short of having an adequate history by itself.
@dougmerritt @wrog @cy @screwlisp @mousebot @cdegroot @ramin_hal9001
My (possibly-triggering) poem What Love Endures (https://nhplace.com/kent/Writing/what-love-endures.html) was originally written for a contest (which I did not win) that wanted short stories of 150 characters or less. Quite a tall order. I reclassified it as poetry after-the-fact, though it's pretty difficult reading no matter the genre classification. I think the reason they wanted that length was to offer content to subscribers via SMS.
@kentpitman
Yikes. Possibly-triggering indeed.
(I'm not triggered personally, but still...)
@dougmerritt @wrog @cy @screwlisp @mousebot @cdegroot @ramin_hal9001
Yeah, maybe that's why I didn't win. I didn't think the story/poem was really that bad. It's a lot of information to pack into a short space, and SMS has no way to flag content warnings.
They also had a competition for stories of 150 words. I wrote an entry for that one which I thought was really cool, and of a different nature. It didn't win either, though I was proud of it and think it at least could reasonably have. I've never published that one, though one day I suppose I should. It's still looking for a proper forum. :)
@kentpitman
In high school English, we were required to write poetry, so I did a piece about sunshine and rainbows. The teacher took me aside and said, "look, you're trying too hard to be super positive, and the result is awful. Try again. This time, make it personally meaningful."
So I did, and being a troubled teenager, turned in a poem about flaming death or thereabouts. The teacher took me aside again, gave me an A on the assignment, and recommended I see a therapist.
:)
If it's not one thing, it's another.