Brain
Where the heck does this thing called experience come from??
^^ that’s the whole question I want to spend life getting to the bottom of
in what ways are we working towards it?
inspiring reads:
wow: https://drmichaellevin.org/
- CAVE (infra behind both the flywire & MICrONS datasets!)
- what bottlenecks to scale to 10mm^3? some early ideas:
- rn queries to historical states replay all edits in time order, which lags for users editing a heavily-modified neuron. A partially persistent DSU storing edits as temporal edges could make historical traversal with union-by-rank take at worst O(log N) time (N = total neurons).
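the partially persistent DSU idea could look something like this sketch (all names here are made up, not ChunkedGraph API): union by rank with a per-node link timestamp and no path compression, so old versions stay queryable and find(x, t) walks only edges that existed at time t — O(log N) worst case without replaying edits.

```python
# Sketch of a partially persistent disjoint-set union. No path compression
# (it would rewrite history), so union-by-rank gives O(log N) worst-case find.
class PersistentDSU:
    def __init__(self, n):
        self.parent = list(range(n))          # current parent pointers
        self.link_time = [float("inf")] * n   # timestamp each node got its parent
        self.rank = [0] * n
        self.now = 0                          # monotonically increasing edit clock

    def find(self, x, t):
        # Follow only parent links created at or before time t.
        while self.link_time[x] <= t:
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Record one edit at the next timestamp.
        self.now += 1
        ra, rb = self.find(a, self.now), self.find(b, self.now)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                  # attach smaller rank under larger
        self.link_time[rb] = self.now
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```

so a query like "what component was this supervoxel in before edit 57?" becomes a single O(log N) walk instead of an edit replay.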
- Since in Discussions they mentioned supervoxel size is the primary limiter for scaling the dataset, we could have the ChunkedGraph keep a low-resolution node set for fast reading in BigTable (enough info to suspect errors) while offloading the raw high-res AI affinity maps to GCS. This way the engine can lazily fetch fine pixel data only when a specific bounding box is flagged for a split. Alternatively, we could also experiment with implicit neural representations for compression (train neural nets to map (x, y, z) to structural density).
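a minimal sketch of the lazy-fetch half of that idea (`LazyChunkStore` and `make_summary` are hypothetical, not CAVE's actual API): coarse mean-pooled summaries stay resident for cheap reads, and the full-res affinity block only gets pulled from cold storage when a bounding box is flagged for a split.

```python
import numpy as np

def make_summary(block, factor=8):
    """Downsample a cubic high-res block by mean-pooling: the coarse 'fast read' copy."""
    c = block.shape[0] // factor
    return block.reshape(c, factor, c, factor, c, factor).mean(axis=(1, 3, 5))

class LazyChunkStore:
    def __init__(self, fetch_high_res):
        self.fetch_high_res = fetch_high_res  # callable: chunk_id -> ndarray (e.g. GCS read)
        self.summaries = {}                   # always resident, tiny
        self.hot = {}                         # high-res blocks fetched on demand

    def ingest(self, chunk_id, block):
        self.summaries[chunk_id] = make_summary(block)

    def read(self, chunk_id, need_split=False):
        if not need_split:
            return self.summaries[chunk_id]   # cheap path: coarse summary only
        if chunk_id not in self.hot:          # lazy fetch on first split request
            self.hot[chunk_id] = self.fetch_high_res(chunk_id)
        return self.hot[chunk_id]
```

an 8x downsample is a 512x storage reduction per resident chunk, which is roughly the kind of headroom you'd want before 10mm^3.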
- CAVE’s support of automated proofreading was really interesting (and important for scale). I wonder if instead of using external tools like Neurd that download data from ChunkedGraph API and compute their own skeleton, we could run a GNN/3D CNN directly on existing L2-Cache skeletons to detect anomalies quickly.
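toy sketch of that anomaly idea (not a real GNN — one round of neighbor averaging as a stand-in for a trained message-passing head): flag skeleton nodes whose features disagree with their neighborhood mean.

```python
import numpy as np

def anomaly_scores(features, edges, threshold=2.0):
    """features: (N, D) per-node stats (e.g. radius, branch count — hypothetical).
    edges: undirected (u, v) pairs of the skeleton graph.
    Returns indices of nodes far from their neighborhood mean."""
    n = len(features)
    agg = np.zeros_like(features)
    deg = np.zeros(n)
    for u, v in edges:                        # one round of message passing
        agg[u] += features[v]; agg[v] += features[u]
        deg[u] += 1; deg[v] += 1
    mean_nbr = agg / np.maximum(deg, 1)[:, None]
    # distance between each node and its neighborhood mean
    resid = np.linalg.norm(features - mean_nbr, axis=1)
    return np.where(resid > threshold * resid.mean())[0]
```

running something like this directly on L2-Cache skeletons would avoid the download-and-reskeletonize round trip external tools pay.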
- POLI Fiber for MRI-compatible neural recording that unifies optical/electrical/chemical modalities. (wonder if autoencoders could be adapted to track and subtract that baseline capacitance drift to automate detection of quick dopamine oxidation peaks)
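stand-in sketch for the drift-subtraction idea, using a wide moving average where the note imagines a learned autoencoder baseline: estimate the slow drift, subtract it, and flag fast residual transients (the quick oxidation peaks).

```python
import numpy as np

def detect_peaks(signal, baseline_win=101, k=4.0):
    """Flag fast positive transients riding on slow baseline drift."""
    pad = baseline_win // 2
    padded = np.pad(signal, pad, mode="edge")
    kernel = np.ones(baseline_win) / baseline_win
    baseline = np.convolve(padded, kernel, mode="valid")  # slow drift estimate
    resid = signal - baseline                             # drift removed
    return np.where(resid > k * resid.std())[0]           # fast peaks only
```

the window width encodes the same assumption an autoencoder would have to learn: drift is slow, peaks are fast.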
- piezo ultrasound for deep brain stimulus https://pmc.ncbi.nlm.nih.gov/articles/PMC11150473/
- Gut is central to nervous system, and thus awareness too!
- https://www.nature.com/articles/s41587-023-01833-5
- https://pubmed.ncbi.nlm.nih.gov/39394431/
multiagent models of mind on lw: https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip
they’re training models to reconstruct what mice see from brain activations https://elifesciences.org/articles/105081. it’s still only a first step tho. they only probed neurons in a tiny patch (630 micrometers across). but even if we could measure more, maybe we’ll never reconstruct the actual image entirely, since mouse brains probably lose info when they turn light into experience. ideally we’d reconstruct the subjective experience directly, not the external movie, but mice can’t tell us what they see, and no human’s going to gene edit their babies to have glowing neurons lol
omg this is amazing. 10 days. at home! https://brainhack.vercel.app/ae
alb is so cool. https://writetobrain.com/olfactory
^^^ hey i can acc do something like that this summer
deep brain stimulation noninvasively with interfering waves hmm
- https://www.nature.com/articles/s41593-023-01456-8
- https://www.science.org/content/article/colliding-currents-can-target-deep-brain-without-surgery
gallery of neuroimaging methods by lev: https://neuro.lev.la/