Popular Christianity is heavily based on paganism, which is incredibly ironic considering that paganism is generally posed as the antithesis of Christianity. The story of Lucifer is syncretized with the story of Prometheus, although Lucifer doesn’t really benefit humanity at all. According to the popular interpretation, Lucifer is the origin of all evil, became a snake in the garden of Eden, and then tempted Eve to eat the fruit of the tree of the knowledge of good and evil. However, the snake isn’t actually connected to Lucifer in the text; that interpretation was added later to explain the problem of evil (why evil exists if God is supposedly good).
The idea that Lucifer is insubordinate and violated the natural hierarchy is very old, but the idea that Lucifer is the origin of evil is relatively new.
Christian theology contains many holes like this because there’s a tendency to treat every word in the Bible as literal, when it may have been written allegorically or as a parable, as Jesus often spoke. (Just to be clear, Jesus did NOT write the Bible; I’m just pointing out that the writers of the Bible may have tried to replicate his style.) This issue is compounded when you include the Old Testament, as it contains portions which are clearly mythological, but are nonetheless treated as fact by certain modern Christians.
It’s funny that everyone so far has called the character a fursona. Is the main purpose behind a fursona to try to be the fictional character? To hide yourself behind a constructed façade? To be swaddled in blankets of paracosm and derealization?
These are only half-rhetorical questions. I don’t understand furries or fursonas.
Putting the power lines underneath the 68k is clever. I had never thought of doing that before.
As for EEPROM vs NVRAM: if you have an EPROM programmer, there isn’t any extra effort required to program the ROM, and NVRAM is just more expensive than ROM.
Also, what is your general plan for the design? Is it to have multiple CPUs running simultaneously, or will only one CPU execute code at a time? In addition, will they be sharing a bus, or doing some mailbox message passing?
What’s important to note is that there has been a big shift in the goals and techniques of education. This most famously occurred with “common core” math in the US. It was a push to teach math in a more intuitive way, one that directly corresponds with what children already know. You can physically add things together by putting more of them together and then counting them, so they try to teach addition with that analogy in mind.
Prior to common core math, there was “new math,” which anyone under 80 years old assumes has always been the standard. New math was a push to teach math in a more understandable way, one that gradually introduced new concepts to ensure children understood how math works. This was satirized by Tom Lehrer in his song “New Math.” If you look up the song, you’ll see that new math was mostly implemented by teaching students how base-10 positional notation works, and then using that understanding to present addition and subtraction as logical algorithms.
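That “addition as a logical algorithm” framing can be made concrete. As a sketch (the function name and digit-list convention here are my own invention, not from any curriculum), column addition over base-10 positional notation looks like this:

```python
def add_base10(a_digits, b_digits):
    """Column addition with carries, the way positional notation frames it.
    Digits are least-significant first, e.g. 47 -> [7, 4]."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        column = a + b + carry
        result.append(column % 10)  # digit that stays in this column
        carry = column // 10        # carry passed to the next column
    if carry:
        result.append(carry)
    return result

# 47 + 85 = 132
print(add_base10([7, 4], [5, 8]))  # [2, 3, 1]
```

The point of the pedagogy is exactly what the code makes explicit: each column is handled the same way, and the carry is the only state passed between them.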
Prior to new math, the focus of math education was much more about getting the right answer, rather than the skills needed for problem solving using math. This allows for greater breadth of education, as topics can be covered quickly, but each topic is understood only in a shallow way.
FP & OOP both have their use cases. Generally, I think people use OOP for stateful programming, and FP for stateless programming. Of course, OOP is excessive in a lot of cases, and so is FP.
OOP is more useful as an abstraction than as a programming paradigm. Real, human, non-computer programming is object-oriented, and so people find it a natural way of organizing things. It makes more sense to say “for each dog in dogs: dog.bark()” than “map(bark, dogs)”.
A good use case for OOP is machine learning. Despite the industry’s best efforts to use functional programming for it, object-oriented code just makes more sense. You want a set of parameters unique to each function applied to the input. This allows you to use each function without referencing the parameters every single time. You can write “function(input)” instead of “function(input, parameters)”. Then, if you are using a clever library, it will use pointers to the parameters within the functions to update them during the optimization step. It hides how the parameters influence the result, but machine learning is a black box anyway.
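A toy sketch of that pattern (the `Linear` class, `weight`, and `bias` names are hypothetical, not any particular library’s API):

```python
class Linear:
    """Toy ML layer: bundles parameters with the function that uses them."""
    def __init__(self, weight, bias):
        self.weight = weight  # parameters live on the object...
        self.bias = bias

    def __call__(self, xs):
        # ...so callers write layer(xs), not layer(xs, weight, bias).
        return [self.weight * x + self.bias for x in xs]

layer = Linear(weight=2.0, bias=1.0)
out = layer([1.0, 2.0, 3.0])  # [3.0, 5.0, 7.0]

# An optimizer holding a reference to the object can mutate the shared
# parameters in place; every later call sees the update without the
# caller threading new arguments through.
layer.weight = 3.0
out2 = layer([1.0])  # [4.0]
```

This is roughly what “pointers to the parameters within the functions” amounts to: the object holds the state, and callers never see it.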
In my limited use of FP, I’ve found it useful for manipulating basic data structures in bulk. If I need to normalize a large number of arrays, it’s easy to go “map(normalize, arrays)” and call it a day. The FP-specific functions such as scan and reduce are incredibly useful, since imperative code typically requires you to set up a loop and manually keep track of the intermediate results. I will admit, though, that my only real use of FP is Python list comprehensions and APL, so take whatever I say about FP with a grain of salt.
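In Python terms, reduce is `functools.reduce` and scan is `itertools.accumulate`; a quick comparison against the manual-loop version:

```python
from functools import reduce
from itertools import accumulate
import operator

data = [1, 2, 3, 4]

# reduce: fold the whole list into a single value.
total = reduce(operator.add, data)               # 10

# accumulate is Python's scan: it keeps every intermediate result.
running = list(accumulate(data, operator.add))   # [1, 3, 6, 10]

# The loop-based equivalent, for contrast: manual bookkeeping of state.
running_loop, acc = [], 0
for x in data:
    acc += x
    running_loop.append(acc)

print(total, running)  # 10 [1, 3, 6, 10]
```

The one-liners say *what* is computed; the loop spells out *how*, including the accumulator the FP versions manage for you.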