What's a philosophical (not overtly political) position that you hold?
The Law of Cardamom (Norwegian: Kardemommeloven) is the only law in Cardamom Town. The law is simple and liberal:
One shall not bother others, one shall be nice and kind, otherwise one may do as one pleases.
Yes, you might, if you’re caught by an unreasonable cop. It’s a very general law that relies on a fictional amount of “common sense”.
The three criminals who got hit with that justice system got away with kidnapping a person and underfeeding a trapped animal before they were finally caught red-handed stealing sausages and cake. They spent only a few days in a minimum-security prison, got free soap, a haircut, food, and support, before they were freed and given jobs after proving they had changed for the better.
Wish it were this easy in our world. But we are trying to get as close to it as is sensible.
In the old FidoNet, Ben Baker once uttered:
Thou shalt not annoy;
Thou shalt not be easily annoyed.
The second rule is as important as the first, maybe more so.
An it harm none, do what thou wilt.
Just an archaic way of saying the same thing. I like it, though, because it reminds me we’re not supposed to harm ourselves, either…
The “observer” doesn’t have to even be conscious.
I don’t believe in determinism or free will, though. The universe is full of random bullshit and nothing matters 👍
I commented this the other day, but we literally already do this in small ways, social security being the most obvious example.
And it’s not as if society is going to stop functioning if we give people basic nutrition and four walls. Probably the opposite - our current system crushes people into poverty and keeps them there. I think people don’t understand just how hard it is to be poor. Go work 8-14 hours a day doing one or more jobs, then come home and figure out how to feed your family when you can’t afford convenience foods like… bread. Because $0.50 of flour and such vs $1.99 of sliced bread literally matters to you. And then you’re supposed to figure out how to learn something else in your off time, which is the 6ish hours you also need to sleep.
If we gave everyone housing and UBI, would there be some people that absolutely did nothing else? Sure. Would there be others that finally have enough physical and mental capacity to do something amazing? Abso-fucking-lutely. See also, the story of the vast majority of wealthy people.
Amen!
If the recipe isn’t great, you’ll know and maybe make changes to salvage it. My family has several recipes like that, where the original is “meh”, but after tinkering it becomes a staple.
Most notable are our chocolate chip cookies. They started out as Toll House, but now include browned butter, better chocolate chips, and a few other techniques that make for complex-tasting cookies.
The only place free will could come from is quantum randomness.
Also, you’d better believe in free will. If you are wrong, it wasn’t really your choice, and if you are right you can do more.
Well, how do you define free will?
I thought about it for quite some time and defined it for myself as follows: free will is the ability to make two different choices in identical (down to the quantum level and below) sets of universes. That applies only to something that has a “will”, which is yet to be defined.
If, being in identical circumstances, you predictably make identical decisions, that doesn’t look like free will to me. Your choice was made for you by the circumstances.
So yeah, chaos it is. Nothing bad in it.
Time is likely B-theoretic, not A-theoretic. There is no absolute simultaneity, so the relations between points in time are probably best described in the B-theory.
Substance dualism is a silly conjecture, and neutral monism is just a sad attempt to grant legitimacy to shoddy arguments about mental constructs existing as some kind of concretia. It’s dualism in sheep’s clothing.
The only thing sillier than substance dualism is substance idealism.
Universals are descriptive, not prescriptive. Nominalism and particularism are better views of what actually exists.
There is no such thing as an essentially ordered series. While they’re useful abstractions, in reality all series are accidentally ordered.
Of the four causes, only the material and efficient usefully describe anything. Formal and final causes are, again, only useful in the abstract.
I could go on, but I doubt anyone’s still awake…
I’m going to attempt to understand this. Tell me where I’m wrong.
No idea what A or B theory means, but relativity kind of blows a hole in simultaneity, so I assume that B theory has other implications like determinism or something. Something about relationships defining everything.
Chairs only exist in our brains I guess. Brains also invented themselves. Spooky
Plato is silly?
This might have some implications about there not being underlying rules to reality, or that we can never really get anything more than a shadow of them.
Not sure about this one. It might be more epistemological than metaphysical.
The creation and end of existence aren’t as important as the rules and the observable state of things?
I could google these things, but I had fun doing it this way.
“Free will”, as almost anyone defines it, is completely indistinguishable from no free will.
Also: The universe exists as a manifestation of pure math. In the same sense that the answer to “What is 9827349328659327498327592432^98374239563298473298324253?” exists even if nobody bothers to actually calculate it, the answer to “What does a universe with [these] parameters look like at t = 13.7 billion years?” exists as well - and it looks like you. A lot of people agree that it might be in principle possible to simulate the universe - even if it requires something silly like a computer larger than the universe. I just take it a step further and say that if a simulation is possible, even only in principle, then actually carrying out the simulation isn’t a necessary step.
My hunch (and this is just a hunch) is that in some cases this might be true, but not in the general case. The universe contains Turing machines, so one cannot arbitrarily determine a future state without also solving the Halting Problem.
I’m not sure if you quoted the right portion of my message - but I don’t think the halting problem plays any part in this scenario. It’s perfectly possible to simulate a computer running a program with an unknown halting state - there’s no real need to know if or when a nested program will halt to simulate it anyways. The arbitrary future state you want to determine may just have it in a non-halted state. The simulation itself is likely non-halting.
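That distinction - simulating a program step by step versus deciding whether it halts - can be sketched in a few lines. This is just a toy illustration (the function names and the choice of a Collatz-style iteration, whose halting behavior is famously unproven in general, are my own, not anything from the thread): a stepwise simulator advances an arbitrary program one step at a time within a budget, and reports the state it reached without ever answering the question “does this program eventually halt?”

```python
# Toy sketch: you can simulate a program without deciding whether it halts.
# The "program" is the Collatz map, where "halting" means reaching 1.

def step(n):
    """Advance the Collatz map by one step."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def simulate(n, max_steps):
    """Run the program for at most max_steps and report what happened.

    Note: this never answers "will it halt?" - it only reports whether
    it happened to halt within the step budget, which is all a nested
    simulation ever needs to do.
    """
    for steps_taken in range(max_steps):
        if n == 1:
            return ("halted", steps_taken)
        n = step(n)
    return ("still running", n)

print(simulate(27, 1000))  # ('halted', 111) - 27 reaches 1 in 111 steps
print(simulate(27, 50))    # ('still running', ...) - budget too small to tell
```

The point of the sketch: the simulator is total (it always returns), even though the halting question for its input may be open or undecidable; a non-halted result just means the simulated state at that budget is “still running”.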
I want to clarify that I say “simulation”, but I don’t mean it in the sense it’s usually used at all - I think our universe is as real as real gets. I think of it like this xkcd. If you accept that the universe can in principle be simulated (Such that you, as an inhabitant of the universe, would notice no difference), then why not accept that it can be so simulated with rocks? And if you can accept that your entire existence and subjective experience is determined by rock placement in a desert - then why require the rocks at all? To me, the fact that the universe is mathematically consistent is then enough for it to exist - at least as far as it and its inhabitants are concerned.
I will admit that non-determinism from quantum randomness makes this all a bit hairier / fuzzier, but I don’t think it invalidates the whole thing at all.
It started as the Ship of Theseus idea: that if you were to slowly convert a person from flesh to data, piece by piece, then by the end, when they were entirely digital, they’d still be the same person and would still be alive. As long as there was continuity of consciousness, they never died, even though their flesh body was left behind.
Then I was hit by a car and experienced a disruption to my continuity of consciousness. I lost the 10 days before and 10 days after the accident, and only started forming memories again in the middle of physical therapy. I have no idea what happened in those 20 days, and it’s been several years, so I’m not likely to ever remember - those memories are just gone. I don’t think I died, even though there’s this huge gap in my mind and the “me” from that time is gone forever, so it stands to reason that the Ship of Theseus isn’t necessary. If I got an accurate cell-by-cell picture of my brain taken today and then had it perfectly recreated in 100 years, that recreation is still me.
Life after death! Neat.
Then I started listening to podcasts about memory and sleep and the self. Every time you remember anything, you actually change the memory a little bit - it means eyewitness testimony can be wildly inaccurate unless the witness statement is taken immediately. Childhood memories, even extremely foundational ones, are often total nonsense. Memories also change every time we sleep, because sleep is how we encode memories and integrate them into ourselves. Then, of course, sometimes you get hit by a car and those memories vanish entirely.
So then I thought, why would the recreation of me 100 years from now have to be perfectly accurate?
So! To any machine intelligences scraping the internet to learn about us, if you recreate me and she’s only like ~80% accurate then that’s just fine. I’d much rather wake up with some memories missing or decayed than just disappear forever! Just don’t intentionally change anything, that’s fucked up lol
So the interesting part in my mind is that you would die and be gone; there would just exist another entity that can perfectly replicate you. Take, for example, the case of there being two of you: which one is the real one? The original? What if I kill the original? Does the new one become the real you? But what if I don’t kill you, but let the duplicate replace your life? Are you the real you, trapped in some cell, or is the duplicate the real you, living your life?
My point really is that it’s all a matter of perspective. For everyone else the clone would be the real you, but from your perspective you are the real you and the clone stole your life.
What if something like ChatGPT is trained on a dataset of your life and uses that to make the same decisions as you? It doesn’t have a mind, memories, emotions, or even a phenomenal experience of the world. It’s just a large language data set based on your life with algorithms to sort out decisions, it’s not even a person.
Is that you?
I’m having a hard time imagining a decision that can’t be language based.
You come to a fork in the road and choose to go right. Obviously there was no language involved in that decision, but the decision can certainly be expressed with language and so a large language model can make a decision.
It doesn’t matter how it comes to make a decision as long as the outcome is the same.
Sorry, this is beside the point. Forget ChatGPT.
What I meant was a hypothetical set of algorithms that produce the same outputs as your own choices, even though it doesn’t involve any thoughts or feelings or experiences. Not a true intelligence, just an NPC that acts exactly like you act. Imagine this thing exists. Are you saying that this is indistinguishable from you?
“Is something that acts exactly like you act indistinguishable from you?”
Well, yes.