researching for an essay-story hybrid, i'm reading Free Will: A Very Short Introduction by Thomas Pink, and i came across this section last night that highlights one of the many issues i have with modern philosophy, or perhaps philosophy in general: the reliance on what i'll call "semantics" to justify whatever the hell you want, usually from a self-motivated or overly human-centric lens. i'll explain in a moment.
just before this, Pink grants that sharks "may hold beliefs and desires, and may perform goal-directed actions as we do," but then asks, "yet is a shark in control of its action as we are?" then follows it up with the aforepictured excerpt, claiming that sharks "don't reason as humans do," therefore they are not in control of their actions, therefore sharks, and other non-human animals, are not "free agents." but i think he's using "reason" instead of "intelligence" here, on purpose.
and he’s doing that because he doesn’t want to grapple with the big problems that the concept of "intelligence" brings into the free will debate. for example, a shark will hunt for food, but it will not hunt a mature whale; it will, however, hunt young, sick, or injured whales, which illustrates a level of “reasoning” (his word) behind its actions. maybe he would argue this is biological instinct?
but the way i see it, the difference between a shark and a human, outside of anatomy and all that, is that sharks just aren’t very smart. unfortunately for Mr. Pink, however, i know a lot of people who aren’t very smart. are those people not “free agents?”
my problem is that Pink here is basically saying, “sharks aren’t very smart, therefore they have no free will.” but every living thing has intelligence, albeit to varying degrees. intelligence is on a scale, and if it’s on a scale, then every living thing must have some capacity to be a “free agent,” even if it looks more like instinct to an observer. it’s arrogant to assume that, because humans are the most intelligent creatures on Earth, non-human animals are disqualified from being free agents.
just imagine, for a moment, a super “intelligent” alien species from another planet. would they think that humans are “free agents,” considering their alien intelligence is so vastly superior to our own? they might look at us like, “these guys are still killing each other for oil in the desert, they are no different than bears fighting over the biggest cave,” or something like that.
i will grant that, the more "intelligence" a creature has, the more decisions it can consider: should i do this that way, or this way, or some other way? etc. but how many branching decision pathways must a creature be able to conjure up in its synapses before it's considered a “free agent?” what is the cutoff point? are three-year-old humans “free agents?” what about twelve-year-olds? what about my pretty much braindead cousin, Jake?
my “semantics” point from earlier is more about how philosophers use a bunch of words that, when you think about them, just point back to the same thing, like “reason” points back to “intelligence,” and how these sorts of semantics can, and will, justify the infliction of terrible suffering; see McDonald’s, “millions served.”

all that being said, i’m sure there’s more thinking i could do on this, and i don’t think i’m articulating my points very well. but now, i’m at the point where i think one of two scenarios about free will must be true: either every living thing has free will, or nothing does.

regardless of which scenario is correct, it’ll make no difference to how we live our everyday lives, which is a whole nother problem i have with philosophy of this type.