I’ve been working on developing a CPU architecture based around my own variant of lisp called “dollhouse lisp.” The big twist is that DHlisp executes code by reducing a syntax tree, so all code is destroyed once it’s been executed. It’s a very elegant idea, but a very difficult implementation, especially when it comes to loops and garbage collection.
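A minimal sketch of what destructive tree reduction might look like, using a toy expression language of my own invention (this is just an illustration of the general idea, not DHlisp’s actual design):

```python
# Destructive tree reduction: evaluating a node consumes it,
# so the program tree is destroyed as it runs.
# (Hypothetical toy language, not DHlisp's actual design.)

class Node:
    def __init__(self, op, children):
        self.op = op              # "lit", "+", or "*"
        self.children = children

def reduce_tree(node):
    """Evaluate a node, destroying its subtree in the process."""
    if node.op == "lit":
        value = node.children[0]
    elif node.op == "+":
        value = sum(reduce_tree(c) for c in node.children)
    elif node.op == "*":
        value = 1
        for c in node.children:
            value *= reduce_tree(c)
    else:
        raise ValueError(f"unknown op {node.op!r}")
    node.children = None          # the code is gone once executed
    node.op = "consumed"
    return value

# (1 + 2) * 3
tree = Node("*", [Node("+", [Node("lit", [1]), Node("lit", [2])]),
                  Node("lit", [3])])
print(reduce_tree(tree))  # 9
print(tree.op)            # consumed
```

You can see why loops get awkward: the loop body is gone after one pass, so it would have to be copied (or regenerated) before each iteration.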
According to Wikipedia, the Bhagavad Gita was written around 200 BC, whereas Stoicism originated around 300 BC. Admittedly, this was just the result of some very cursory research; the underlying philosophy could go back further than the writing itself, but it seems to me like they independently arose around the same time (that being around 100 years’ difference, lol). But you really need to be careful saying stuff like that. I’ve made the same mistake dozens of times, where I confidently state something only for it to be disproven by a minute of googling.
The square root function only has a single answer for every x. This is intentional. The technical definition of a function means there can only be a single y value for every x value. Of course, there are situations where you need to consider both the positive and negative square root, and that’s why the quadratic formula has the ± symbol.
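In symbols, this is just the standard convention: the radical denotes the principal (non-negative) root, and the ± reintroduces the other solution when you need it:

```latex
\sqrt{x} \ge 0 \quad \text{for all } x \ge 0
\qquad\text{(one output per input, so it is a function)}

x^2 = a \;\Rightarrow\; x = \pm\sqrt{a},
\qquad
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```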
One thing we know about Jesus is that he was very good at using rhetoric. Other than the accounts in several books of him using rhetorical techniques very advanced for the day, there’s also evidence that he was skilled enough to start a religion. But any information finer than that is hard to prove. The books are nearly two thousand years old, written at different times by different people, followed by several translations, so we can’t know his exact word choice or style of speech with certainty. The closest to the ‘source’ are ancient Greek texts, which were likely translated from some other language.

Popular Christianity is heavily based on paganism, which is incredibly ironic considering that paganism is generally posed as the antithesis of Christianity. The story of Lucifer is syncretized with the story of Prometheus, although Lucifer doesn’t really benefit humanity at all. According to the popular interpretation, Lucifer is the origin of all evil, became a snake in the garden of Eden, and then tempted Eve to eat the fruit of the tree of the knowledge of good and evil. However, the snake isn’t actually connected to Lucifer in the text—that interpretation was added later to explain the problem of evil (why evil exists if God is supposedly good).

The idea that Lucifer is insubordinate and violated the natural hierarchy is very old, but the idea that Lucifer is the origin of evil is relatively new.

Christian theology contains many holes like this because there’s a tendency to treat every word in the Bible as literal, where it may have been written allegorically or as a parable, as Jesus often did. (Just to be clear, Jesus did NOT write the Bible; I’m just pointing out that the writers of the Bible may have tried to replicate his style.) This issue is compounded when you include the Old Testament, as it contains portions which are clearly mythological, but are nonetheless treated as fact by certain modern Christians.

It’s funny everyone so far has called the character a fursona. Is the main purpose behind a fursona to try and be the fictional character? To hide yourself behind a constructed façade? To be swaddled in blankets of paracosm and derealization?

These are only half-rhetorical questions. I don’t understand furries or fursonas.

Putting the power lines underneath the 68k is clever. I had never thought of doing that before.

As for EEPROM vs. NVRAM: if you have an EPROM programmer, programming the ROM takes no extra effort, and NVRAM is just more expensive than ROM.

Also, what is your general plan for the design? Is it to have multiple CPUs running simultaneously, or will only one CPU execute code at a time? In addition, will they be sharing a bus, or doing some mailbox message passing?

What’s important to note is that there has been a big shift in the goals and techniques of education. This most famously occurred with “common core” math in the US. It was a push to teach math in a more intuitive way, one that directly corresponds with what children already know. You can physically add things together by putting more of them together and then counting them, so they try to teach addition with that analog in mind.

Prior to common core math, there was “new math,” which anyone under 80 assumes has always been the standard. New math was a push to teach math in a more understandable way, one that gradually introduced new concepts to ensure children understood how math works. This was satirized by Tom Lehrer in his song “New Math.” If you look up the song, you’ll see that new math was mostly implemented by teaching students how positional notation works, and then using that understanding to present addition and subtraction as logical algorithms.
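The digit-by-digit borrowing that Lehrer walks through in the song really is an algorithm, and it works the same way in any base. Here’s my own Python illustration of it (not from any curriculum):

```python
def subtract_digits(a, b, base=10):
    """Subtract b from a digit by digit with explicit borrowing,
    the way new math taught it. Works in any base (a >= b >= 0)."""
    result = []
    borrow = 0
    while a > 0 or b > 0:
        da, a = a % base, a // base   # current digit of a
        db, b = b % base, b // base   # current digit of b
        d = da - db - borrow
        if d < 0:                     # borrow from the next column
            d += base
            borrow = 1
        else:
            borrow = 0
        result.append(d)
    # digits come out least-significant first; reassemble into an int
    return sum(d * base**i for i, d in enumerate(result))

print(subtract_digits(342, 173))                   # 169
print(subtract_digits(0o342, 0o173, 8) == 0o147)   # True (the song's base-8 example)
```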

Prior to new math, the focus of math education was much more about getting the right answer than about the problem-solving skills behind it. This allows for greater breadth, since topics can be covered quickly, but each topic is understood only shallowly.

I once tried to make a ridiculous multi-processor computer which took advantage of the TMS9900’s weird clocking to run faster CPUs in between its slower clock cycles. The 9900 has a four-phase clock and a maximum speed of 3 MHz. I wasn’t skilled enough to pull it off, but it’s still a really interesting idea.

FP & OOP both have their use cases. Generally, I think people use OOP for stateful programming, and FP for stateless programming. Of course, OOP is excessive in a lot of cases, and so is FP.

OOP is more useful as an abstraction than as a programming paradigm. Real, human, non-computer reasoning is object-oriented, so people find it a natural way of organizing things. It makes more sense to say “for each dog in dogs, dog.bark()” instead of “map(bark, dogs)”.

A good use case for OOP is machine learning. Despite the industry’s best efforts to use functional programming for it, object-oriented just makes more sense. You want a set of parameters, unique to each function applied to the input. This allows you to use each function without referencing the parameters every single time: you can write “function(input)” instead of “function(input, parameters)”. Then, if you’re using a clever library, it will use pointers to the parameters within the functions to update them during the optimization step. It hides how the parameters influence the result, but machine learning is a black box anyway.
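A minimal sketch of that pattern, with a toy one-weight “layer” in plain Python (no real ML library, and the numbers are made up):

```python
# Parameters live on the object, so calling it is just layer(x)
# instead of layer(x, weight, bias).

class Linear:
    def __init__(self, weight, bias):
        self.weight = weight   # parameters stored with the function
        self.bias = bias

    def __call__(self, x):
        return self.weight * x + self.bias

layer = Linear(weight=2.0, bias=1.0)
print(layer(3.0))          # 7.0

# An optimizer can reach the parameters through the object reference --
# that's the "pointer to the parameters" trick:
layer.weight -= 0.5        # pretend gradient step
print(layer(3.0))          # 5.5
```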

In my limited use of FP, I’ve found it useful for manipulating basic data structures in bulk. If I need to normalize a large number of arrays, it’s easy to go “map(normalize, arrays)” and call it a day. The FP-specific functions such as scan and reduce are incredibly useful, since imperative code typically requires you to set up a loop and manually keep track of the intermediate results. I will admit, though, that my only real use of FP is Python list comprehensions and APL, so take whatever I say about FP with a grain of salt.
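The map/reduce pattern above looks like this in Python (using one made-up definition of “normalize”: scale each array so it sums to 1):

```python
from functools import reduce

def normalize(arr):
    """Scale an array so its values sum to 1."""
    total = sum(arr)
    return [x / total for x in arr]

arrays = [[1, 1, 2], [3, 1], [5]]

# Bulk transformation: one call instead of a hand-written loop.
normalized = list(map(normalize, arrays))

# reduce tracks the intermediate result for you; each normalized
# array sums to 1.0, so three arrays give 3.0.
grand_total = reduce(lambda acc, arr: acc + sum(arr), normalized, 0.0)
print(grand_total)  # 3.0
```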