Dear followers*,
Here's a post to get to know you better!

  • Which description fits you best?
  • Would you post a photo about your research / job?
  • Glad to have you around! 😃​

    *yes, you can still fill the poll if you're not following 😉​

    Comp. neuroscientist
    21.8%
    Expe. neuro (rodent)
    13.6%
    Expe. neuro (human & other)
    27.3%
    Other scientist
    20%
    Not a scientist (that's OK :) )
    17.3%
    Here’s what I’m working on at the moment: a 32 #Tetrodes “hyperdrive” for bilateral hippocampal recordings. It’s a prototype - I’ll let you know if it works...🤞​
    ⤵️ The drive has been wired… 125/128 channels working (for now)
    ⤵️​ my favourite view: nicely cut #Tetrodes! (Diameter may be 13 or 17 microns)

    ⤵️​ Giving the final touches to the #HybridDrive

    In more detail:

  • Adding the cone
    (painted with conductive epoxy and connected to the drive’s ground to (try and) shield it from electrical interference; it’s the first time I’ve done this so I don’t know if it will work. Once on the rat, it will also have a cap, except when plugged into the recording system.)

  • #GoldPlating
    This drive has 2 sets of tetrodes: 13-micron Nichrome wire and 17-micron Platinum-Iridium wire (testing which is best). Because the wire materials and diameters differ, they get gold-plated slightly differently; here is a good plating tutorial for 12-13 um Nichrome, generously shared by John Bladon: https://github.com/elduvelle/ephys_tutorials/blob/main/4_gold_plating_Nichrome_12um.md

  • View of the finished drive.
    I am also testing the effects of implanting the tetrodes directly in the brain during surgery versus keeping them retracted in the drive and only lowering them at the end of the surgery. This is a bilateral dorsal #Hippocampus implant, so it has two groups of tetrodes. I personally do not understand the point of not implanting directly in the brain…

  • The guide/outer cannulas and their #Tetrodes
    Each guide cannula holds 4 tetrodes which move together, except 1 cannula that holds a single tetrode - to be left in the corpus callosum and used for reference, if everything works as planned. I used 27 gauge (thin-walled) inner cannulas to guide the tetrodes (the smaller diameter that you can see); these do not move - only the shuttles and the tetrodes move.

  • Next step is implantation surgery!


    ⤵️​ We have signals! I think… I’ve never implanted this high in cortex. It’s definitely very quiet for now. Does it look normal to you?
    The timescale is 2s; this shows the unfiltered and unreferenced signals. There are a few disconnected channels.
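If you want to make a quick stacked-traces view like this yourself, here’s a minimal numpy/matplotlib sketch (synthetic data; the sampling rate, channel count and offsets are made-up placeholders, not my actual settings):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend so this runs anywhere
import matplotlib.pyplot as plt

fs = 30_000                      # placeholder sampling rate (Hz)
n_ch, dur = 16, 2.0              # channels shown, window length (s)
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(0)
# synthetic "raw" traces: noise plus a slow oscillation, stand-in for real data
traces = rng.normal(0, 50, (n_ch, t.size)) + 200 * np.sin(2 * np.pi * 1.5 * t)

fig, ax = plt.subplots(figsize=(8, 6))
offset = 400                     # vertical spacing between channels (uV)
for ch in range(n_ch):
    ax.plot(t, traces[ch] + ch * offset, lw=0.3)
ax.set(xlabel="Time (s)", xlim=(0, dur), yticks=[])
fig.savefig("raw_traces.png")
```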

    After trying it out, I still don’t understand the point of implanting your #Tetrodes out of the brain and waiting until the end of the surgery to lower them. I much prefer watching them go into the brain, so I know what’s happening! And it’s also much faster.

    ⤵️​ It looks like we have #Theta already! Probably volume conduction - cortex above hippocampus doesn’t generate theta, right?
    (This shows half of the signals)
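For anyone wanting to check for theta in their own recordings, a toy scipy sketch (synthetic LFP; the 6-10 Hz band is the usual rodent theta range, everything else here is a made-up placeholder):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000                       # placeholder LFP sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# synthetic LFP: an 8 Hz "theta" rhythm plus broadband noise
lfp = np.sin(2 * np.pi * 8 * t) + rng.normal(0, 0.5, t.size)

# 6-10 Hz zero-phase band-pass, the usual rodent theta band
b, a = butter(3, [6, 10], btype="bandpass", fs=fs)
theta = filtfilt(b, a, lfp)

# crude theta-to-broadband power ratio as a quick sanity check
ratio = np.var(theta) / np.var(lfp)
print(f"theta/broadband power ratio: {ratio:.2f}")
```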

    Also, thanks to my great PI, we solved a problem of silent/attenuated signals due to a bad drive -> headstage adaptor :)

    #EphysTip 1: if something feels fishy, it’s probably because it is!
    #EphysTip 2: always test all parts of your circuit with a signal generator

    ⤵️​ The #Ripples have arrived!!
    (check the purple channel on each panel - it’s the “scout”). I’m always so happy to see them 😁​
    ⤵️​ Well let me tell you: it is definitely working!! 😃​ Tetrode movements are precise and reactive, even going back up works pretty well!
    Here’s my attempt at getting most tetrodes close to the layer (ie with positive sharp-wave/ #Ripples):
    The last thing to check: tetrode stability!
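In case it’s useful, here’s a toy sketch of the usual style of ripple detection (band-pass + envelope + threshold); the synthetic data, band edges and 3-SD threshold are illustrative choices, not a validated pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2_000                       # placeholder sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(2)
lfp = rng.normal(0, 1, t.size)
# inject one synthetic 180 Hz "ripple" burst around t = 2.5 s
burst = np.abs(t - 2.5) < 0.05
lfp[burst] += 5 * np.sin(2 * np.pi * 180 * t[burst])

# band-pass in the ripple band (150-250 Hz), then envelope via Hilbert
b, a = butter(3, [150, 250], btype="bandpass", fs=fs)
ripple = filtfilt(b, a, lfp)
env = np.abs(hilbert(ripple))

# events = envelope above mean + 3 SD (tune this on real data!)
thr = env.mean() + 3 * env.std()
above = env > thr
print(f"samples above threshold inside the burst: {above[burst].sum()}")
```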
    ⤵️​ I know that I haven’t provided much follow-up on our prototype: it has its ups and downs but on the plus side, here are some wirelessly-recorded #PlaceCells !!!

    @elduvelle 😍

    Weird question: do you know if there's any actual publication pointing out that place cells aren't all perfect gaussian fields? I'm realizing that this might just be a completely public "secret", in that anyone who looks at hippocampal data knows it and everyone who doesn't.... doesn't

    @dlevenstein yes, very good point! It goes together with another common misconception about #PlaceCells: that they have to have 1 unique place field - no! This is only the case in very small environments.
    Regarding non-Gaussian place fields:

    • O’Keefe & Burgess 1996 mention that place fields are often elongated
    • Mehta et al., 2000 show skewed / asymmetric place fields in 1D
    • Grieves et al., 2020 show place field elongation in 2D and 3D, and also suggest that the source of the elongation could be geometry (along the boundaries of the environment) or behaviour (also biased towards the boundaries).
    Geometric determinants of the place fields of hippocampal neurons - PubMed


    @dlevenstein (and @kevinbolding) Also, just to experience the variety of place field shapes, people can check the supplementary data of the biorxiv version of our “4-room” experiment:
    https://www.biorxiv.org/content/biorxiv/early/2020/10/21/2020.10.20.346130/DC2/embed/media-2.pdf? (That link will download the pdf; this one is the page linking to the sup data: https://www.biorxiv.org/content/10.1101/2020.10.20.346130v1.supplementary-material)

    • see extract below

    @elduvelle @kevinbolding I’m curious: about how many would you say you see like the second from the left in the bottom row? 🤔

    https://twitter.com/dlevenstein/status/1644722955929223169?s=46&t=zLACKZjahUD8huiRa33NWg

    Dan Levenstein on Twitter: “@DrYohanJohn For example, that cell in the second column, bottom row in El‘s toot🐘💨. It’s an L shape along the left and bottom wall. Would you see stuff like that with grids? Not sure “distortion” is the word I’d use.”
    @dlevenstein @kevinbolding if you mean like this one below, L-shaped: it happens! Easy to explain with a few BVC inputs. It could also be 2 fields (one elongated vertically, one at the bottom) that seem to merge into one.
    Also note that these examples in the rectangle have very bad sampling - not the best for drawing strong conclusions!
    Edit: typo

    @dlevenstein @kevinbolding

    Here’s a nicer example with more simultaneously-recorded #PlaceCells in a relatively large rectangular arena.
    Each small inset is the rate map of a place cell, ranked by spatial information I think.

    Simultaneously recorded is really cool because it shows that you can get such a wide diversity of fields with exactly the same behaviour!

    recorded by Roddy Grieves and shared with his approval
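For anyone curious about the “ranked by spatial information” part: here’s a toy sketch of the Skaggs-style bits-per-spike measure (the maps below are made up; with real data you’d compute the rate map from spikes and occupancy first):

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Skaggs-style spatial information (bits/spike) from a firing-rate
    map and an occupancy map (time spent per spatial bin)."""
    p = occupancy / occupancy.sum()          # occupancy probability per bin
    r = rate_map
    mean_rate = np.sum(p * r)
    valid = (r > 0) & (p > 0)                # log is undefined at 0 Hz
    return np.sum(p[valid] * r[valid] / mean_rate
                  * np.log2(r[valid] / mean_rate))

# toy example: a cell firing in one quadrant of a 10x10 arena
occ = np.ones((10, 10))                      # uniform occupancy
rmap = np.zeros((10, 10))
rmap[:5, :5] = 4.0                           # one compact "field"
print(f"{spatial_information(rmap, occ):.2f} bits/spike")  # → 2.00 bits/spike
```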

    @elduvelle @kevinbolding I will mention that you don’t need BVC inputs to explain the emergence of multi-modal and wall-hugging fields. I’m sure I have some L-shaped ones around here somewhere too. 😜😁
    @dlevenstein @kevinbolding Hmmmmm … maybe…
    I will say that the BVC model, even though it has been a bit forgotten these days, is really good at explaining what place fields look like in many different environments, at least for the basic hippocampal “scaffolding” prior to task learning!
    I don’t think any other model out there surpasses it at this stage
    *evades quickly*
    @elduvelle One thing that comes to mind is that Mehta paper, but I don’t just mean slightly skewed in the direction of behavior. There’s a huge range of variety in the shape of fields, which frequently plays with environment geometry in different ways (avoiding or along walls, around corners, etc.)
    @dlevenstein @elduvelle Muller was adamantly anti-smoothing, so all of his papers have more realistic-looking firing fields. Can’t think right now of them explicitly saying they aren’t Gaussian. But Rivard had fields along obstacles.

    @kevinbolding @dlevenstein @elduvelle

    Fields are definitely not Gaussian. I don't know of any place cell data papers that say they are Gaussian. Some of the early models showed Gaussian fields, but that was because they were concentrating on the issue of how to get place sensitivity at all without it being simple cue-responses. (Remember that was the first big issue with place fields - they weren't trivial sensory cells!)

    For one thing, they stretch backwards along the direction of travel. For another, they can slide against walls. Here are some early place fields from our lab in the classic "open field" (which is neither open nor a field 🙄​).

    [Edit: these are simultaneously recorded.]

    @adredish @kevinbolding @dlevenstein these are really nice! It will always fascinate me to see how the field shapes are influenced by the environment shape.
    Even recent models still use a Gaussian to approximate place fields no?
    @adredish @kevinbolding @dlevenstein and we usually smooth the rate maps with a Gaussian filter.. probably making them look more Gaussian than they actually are!

    @elduvelle @kevinbolding @dlevenstein

    Don't smooth your display figures! (Ever!)

    Unless you are specifically making a smoothness model, you should always display your data directly. For 2D plots, that Gaussian filter can really mislead you. (For example, a single pixel noise burst looks like a beautiful Gaussian after smoothing...)
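To illustrate the single-pixel point, a quick toy demo (made-up numbers) of how one hot pixel turns into a tidy “Gaussian field” after smoothing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# a "rate map" that is empty except for one hot pixel
rmap = np.zeros((20, 20))
rmap[10, 10] = 30.0          # single-pixel noise burst (e.g. one stray spike)

smoothed = gaussian_filter(rmap, sigma=2)

# after smoothing, the lone pixel looks like a smooth Gaussian "field"
print(f"peak before: {rmap.max():.1f} Hz, after: {smoothed.max():.2f} Hz")
print(f"pixels > 0.1 Hz after smoothing: {(smoothed > 0.1).sum()}")
```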

    Modeling hasn't really been based on Gaussians for a while, since it is now (since the 1990s) based on combinations of internal signals (spiking relative to other fields) and external signals (which often have Gaussian noise). But Gaussian noise in the external associated cues does not a Gaussian place field make.

    @adredish @elduvelle @kevinbolding @dlevenstein

    (Sorry, slight rant.) I think this statement about smoothing is overly restrictive. It’s all about the choices you make for a bias-variance tradeoff and how close that gets you to the true model. If you have strong priors about how smooth the data should be (given that people have studied place fields for a long time, I think we do have pretty good priors), then smoothing makes a lot of sense because you are reducing variance (although of course, with a large amount of data it doesn’t matter too much). There are also ways to choose the amount of smoothing from the data. Any binning is a choice of smoothing anyway.

    I think the more nuanced take is: make informed choices about the data and don’t just do something without thinking about it. Look at the underlying data and check that your choices make sense.

    Summary: smoothing != bad

    @edeno @adredish @kevinbolding @dlevenstein
    I personally don’t really like smoothing and like @adredish I think we should always show the raw data… which is why I show spike plots in my papers… BUT I also think we should show the data that goes into the analysis, and if an analysis requires or works better on smoothed data (e.g. ratemap correlations, replay decoding) then we should show the smoothed data.
    Your analysis should of course take the drawbacks of smoothing into account: for example, if smoothing a single pixel can create a place field, increase your minimum place-field size so the definition doesn’t include fake fields. And smooth the test and control groups in the same way.
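A toy sketch of that kind of safeguard (connected regions above a rate threshold, discarding ones below a minimum size; all the numbers are made-up placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def detect_fields(rate_map, rate_thr=1.0, min_bins=9):
    """Find connected regions above rate_thr, discarding any smaller
    than min_bins (guards against fields created by smoothing alone)."""
    labels, n = label(rate_map > rate_thr)
    return [labels == i for i in range(1, n + 1)
            if (labels == i).sum() >= min_bins]

rmap = np.zeros((20, 20))
rmap[3:8, 3:8] = 5.0          # a plausibly real-sized field
rmap[15, 15] = 5.0            # a single-pixel blip
smoothed = gaussian_filter(rmap, sigma=1)
print(f"fields kept: {len(detect_fields(smoothed))}")  # → fields kept: 1
```

The same size criterion should then be applied identically to test and control groups.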
    @elduvelle how do you cut them? scissors or scalpel blade?
    @chrisXrodgers Serrated scissors! I would never dare cut them with a scalpel blade - that would probably crush them?! no?
    @elduvelle I guess serrated makes sense, it will catch them in the V of the serration, instead of crushing them like a flat scissors or scalpel blade
    @chrisXrodgers Yes - that's the idea! But not everyone likes them, some prefer non-serrated scissors. At the moment I use serrated for bundles between 1-4 tetrodes and flat for larger bundles. Here's an example of a bundle of 8 #Tetrodes :