Projects
Sprouted from curiosities
1. Thinking machines for medicine
2. Physics
3. Informatics
4. Writing
5. Grass
Thinking machines for medicine
I’m currently learning to apply ML by building an RNA-ligand affinity prediction model and reading the relevant literature. May post updates as notable takeaways arise.
Update 1: YES! A few other tests are still ongoing and I’ll add more details shortly, but I just wanted to get it out now: it actually outperforms DeepRSMA!! I still have a few things I want to try, and I’ll start consolidating what I’ve tried here in a post. Maybe even a paper, to exercise Nanda’s Guidance. Hah, can’t believe I actually get to do this: share some of my own work that worked.
Update 2: Code uploaded for use at https://github.com/109105116/ribo-upload.
Update 3: I’m now analyzing takeaways from the process in a reflection post, and drafting a paper to share methods and results.
Update 4: I’ve decided to prioritize an explainer post for the model. Early reflections on the process can be found in a writeup for SPARC.
**
From summer:
I’m also exploring the application of mechinterp (borrowed from LLM research) to biological deep learning models at a local uni. Initial correlations of deep features to biological tissue properties (in a histological gene prediction model) show promise for understanding what these models look for in making predictions. Beyond pure interpretation, this might help with generating new biological hypotheses and biomarkers (exciting!). Next step: SAEs.
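As a minimal sketch of what that first correlation step can look like (all names, shapes, and the planted signal here are made up for illustration, not from the actual project): rank each deep feature by its Pearson correlation with a measured tissue property across samples.

```python
import numpy as np

# Hypothetical setup: `feats` holds deep features for N tissue patches
# (N x D), `prop` is a measured tissue property per patch (N,).
rng = np.random.default_rng(0)
N, D = 200, 64
feats = rng.normal(size=(N, D))
# Plant a signal in feature 3 so the ranking has something to find.
prop = 0.8 * feats[:, 3] + 0.1 * rng.normal(size=N)

# Pearson r of every feature column against the property, vectorized:
# standardize both sides, then average the products.
fz = (feats - feats.mean(axis=0)) / feats.std(axis=0)
pz = (prop - prop.mean()) / prop.std()
r = fz.T @ pz / N  # shape (D,): one correlation per feature

# Features most correlated (in magnitude) with the property.
top = np.argsort(-np.abs(r))[:5]
print(top[0])  # the planted feature should rank first
```

Correlation is only a crude first pass (it misses nonlinear and distributed features), which is part of why SAEs are the natural next step.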
**
Data I’m training on:
- (for ml, working on projects and reading papers help more directly right now, but books are still fun)
- Karpathy’s amazing build-along series
- Gwern’s insightful essays
- Dwarkesh podcast
- Situational Awareness
- Understanding Deep Learning
- Neel Nanda on learning mechinterp
- Also:
Previously:
- Pytorch quickstart tutorial
- Colah’s LSTMs
- GNNs on Distill
- 3Blue1Brown/Welch Labs popsci videos
- Coursera’s ML specialization
- Fun projects like balancing a pendulum in HF’s Deep RL course
Physics
What’s it like to work with a really intricate model of the world in your head, with all the subtleties and exceptions? I’d love to know.
Latest
- What is light, exactly?
- Working through Physics, HRK Vol. 2 (Ch 30 as of 2025-07-29)
- Getting a firmer grasp on mechanics with Morin’s awesome book (Ch 5 as of 2025-08-15)
- Upd: Ok this site is amazing: book.bionumbers.org. It puts all the tangible goodness you get from physics into biology. omgomg
Notes (latest on top)
- Physics is ultimately a model we invented to represent the world we see. The postulates and many theorems are based on observation. I wish someone had told me this coming in, so I wouldn’t have spent so much time asking “ok, but what is charge??”
- It is fascinating, though, how reality fits so well with simple mathematical concepts like inverse square. Wigner puts this well.
- Finished: Physics, HRK Vol. 1. It’s hard to put into words the feeling of seeing E=mc^2, an equation that had eluded me all my life, appear unironically in a textbook after “now rearranging these terms, we get”, months into developing the basics. A real treat.
- I’ve previously found physics textbooks far more engaging than math ones. Here, the pleasure comes not just from the aesthetic delight of (often contrived) problems; the book instead takes the student on a tour through some of the best ideas in history. Almost like reading a novel!
Informatics
Learning how to learn with competitive programming problems. I’ve found this genre very precious. Every problem requires coming up with new ideas. There is a “curriculum” of algorithms, but the techniques problemsetters test must overwhelmingly be intuited: the complete opposite of the plug-and-chug methods often tested in school.
There’s nothing quite as mentally challenging as learning to invent on the fly.
Some takeaways for learning so far
- The training process is about coming across as many subtle ideas as possible and knowing when to apply them.
- Solutions usually make sense, or are even obvious, in retrospect; it’s coming up with them alone and fast that’s difficult. And so: much of learning is asking “how could I have thought of this? what was missing? what trigger to look for next time?”
- I like making anki cards for these takeaways. Intuition is built over enough samples.
- Part of this is letting go of ego and moving on instead of spending too much time stuck on one problem.
- “Writing is thinking” as they say, but I would’ve never thought it applied to olympiad problems. Works unreasonably well.
  - Organizes ideas.
  - Keeps you in explore mode; prevents getting stuck repeating the same thoughts.
  - Frees up mental RAM (diagrams on paper sometimes help too).
  - Generates new ideas! Often I’ll list the peculiar problem constraints out in words (e.g. need to try all permutations, but too slow..), and that alone reminds me of a technique (..dp on subsets!).
  - And, when reviewing, lets you see where each thought led and where your blind spots were.
- Still wondering: what’s the tradeoff between exposure to more problems vs remembering past techniques? Sometimes purely Ankifying solutions with a new elegant technique helps with pattern recognition in future problems, but it’s hard to tell how much exactly.
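The “all permutations, but too slow → dp on subsets” jump above can be made concrete with a toy assignment problem (the instance and numbers are my own illustration, not from any particular contest): assign n jobs to n workers at minimum total cost. Brute force tries all n! permutations; a bitmask DP over subsets of jobs does it in O(2^n · n) instead.

```python
from itertools import permutations

# Toy instance: cost[i][j] = cost of worker i doing job j.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
n = len(cost)

# dp[mask] = min cost of assigning exactly the jobs in `mask`
# to the first popcount(mask) workers.
INF = float("inf")
dp = [INF] * (1 << n)
dp[0] = 0
for mask in range(1 << n):
    if dp[mask] == INF:
        continue
    worker = bin(mask).count("1")  # next unassigned worker
    for job in range(n):
        if not (mask >> job) & 1:  # job not yet taken
            new = mask | (1 << job)
            dp[new] = min(dp[new], dp[mask] + cost[worker][job])

best = dp[(1 << n) - 1]

# Sanity check against brute force over all permutations.
brute = min(
    sum(cost[i][p[i]] for i in range(n)) for p in permutations(range(n))
)
assert best == brute
print(best)
```

The trigger pattern is exactly what the bullet describes: the state only needs *which* items are used, not the order they were used in, so a set (bitmask) replaces a permutation.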
Writing
To think (and practice articulation and find people and create something outside myself and maybe even help people). Hopefully putting things up will help me finish off more pieces.
Grass
i’m proud of my tomatoes. :)
roi: 4 months of watering, 2 fights with local squirrels, and an endless reek of compost for exactly 8 tomatoes. totally worth it.
other goals to touch grass, 2026 ed.
- balance 1 arm hs for 15s
- 4x4 170% bw pullups
- run marathon length start to end
- stage 5 of tmi?