Anyone know of a paper that shows f-I/I-V curves and EPSPs/Spikes for *synaptic* input, e.g. via presynaptic axonal stimulation (or glutamate uncaging, in which case EPSP/Spikes as a function of laser intensity)?
It will take a while for the preprint to come out, so here's a video of my work with Li Azran at the @Segev_Lab about the Calcitron, a simple linear neuron model that can implement many learning rules using Calcium-based plasticity.
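The core idea can be sketched in a few lines. This is a minimal, hedged illustration of a Calcitron-style neuron, not the actual implementation: the parameter names, threshold values, and the specific calcium-control rule (depress between a depression threshold and a potentiation threshold, potentiate above it) are assumptions for the sake of the example.

```python
import numpy as np

# Illustrative sketch of a Calcitron-like linear neuron (all names and
# parameter values here are assumptions, not the authors' actual code).
# Each synapse accumulates calcium from a local (presynaptic) source and
# a global (postsynaptic spike) source; weight changes follow
# calcium-threshold rules in the spirit of the calcium-control hypothesis.

THETA_D, THETA_P = 0.5, 1.0   # depression / potentiation thresholds (assumed)
ETA = 0.05                    # learning rate (assumed)
C_LOCAL, C_GLOBAL = 0.6, 0.7  # calcium per pre-spike / per post-spike (assumed)

def calcitron_step(w, x, spike_threshold=1.0):
    """One time step: linear integration, spike decision, calcium-based update."""
    v = np.dot(w, x)                  # linear neuron: weighted sum of inputs
    spike = v >= spike_threshold      # postsynaptic spike (Heaviside)
    # Per-synapse calcium: local term at active synapses, global term if spike.
    ca = C_LOCAL * x + (C_GLOBAL if spike else 0.0)
    # Calcium-control rule: depress in [THETA_D, THETA_P), potentiate above.
    dw = np.where(ca >= THETA_P, ETA,
                  np.where(ca >= THETA_D, -ETA, 0.0))
    return spike, w + dw

w = np.array([0.4, 0.4, 0.4])
x = np.array([1.0, 1.0, 0.0])         # first two inputs active
spike, w_new = calcitron_step(w, x)   # subthreshold -> active synapses depress
```

Different settings of the calcium parameters move synapses between the "no change", depression, and potentiation regions, which is how one fixed update rule can reproduce several different learning rules.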
This is so interesting:
https://www.cell.com/cell-reports/fulltext/S2211-1247(21)01802-7
Synaptically connected neurons send proteins to each other packaged in exosomes. Previous studies showed protein transfer but I think this is the first large-scale screen, identifying 200 proteins (likely an underestimate because they only looked at ones passing through axons).
More evidence that our conventional models of neural signaling are radically incomplete!
@PhiloNeuroScie @DrYohanJohn @vineettiruvadi
It depends what you mean by 'physical'. If 'physical' just means 'things that physicists can describe using equations' then it's sort of tautological that anything you have an equation for is 'physical'. But I don't think physicists would say that e.g. information (or energy, or heat) is 'material'. I think physicists would actually lean toward the other direction, that fields and energy are more fundamental than matter.
@PhiloNeuroScie @DrYohanJohn @vineettiruvadi
That's legit, but it may then be that the HPOC can't be solved using the tools of science. I'm actually not so cynical, though: I'm open to the possibility that future technologies or clever experiments may enable us to get at the substrate of consciousness, but it won't look like the brain; it will look like...some new kind of field, or particle, or dimension of the universe, etc. (And I would still consider that to be a dualist theory.)
@PhiloNeuroScie @DrYohanJohn @vineettiruvadi
Also, there are Scott Aaronson's critiques: if you accept the IIT definition, you basically have to say that certain random networks are conscious.
@PhiloNeuroScie @DrYohanJohn @vineettiruvadi
To what quantitative science are you referring? If you're talking about e.g. IIT, I think it's philosophically very weak for many reasons. I understand that they can calculate some metric based on brain activity and correlate that with wakefulness or being in a coma or whatever, but I don't think that solves any of the philosophical issues. If you dig deeply, IIT actually makes some philosophical assumptions that are just as 'out there' as dualism (like how it understands information).
@DrYohanJohn @PhiloNeuroScie @vineettiruvadi
I think it's both. Our knowledge of our own consciousness is epistemically not empirical, in the sense that it does not come via our sensory organs but rather via introspection. Moreover it doesn't seem to belong to the ontological category of materials, in the sense that you can poke and prod a brain all you want but all you'll see are action potentials - not the actual qualia themselves. But it could be some other sort of invisible 'matter', I'm open to that possibility. Consciousness certainly doesn't *seem* to belong to the category of materials, just like, I dunno, vector spaces also don't belong to the category of matter.