scaling
How plausible are the ~2030 timelines, really? There's so much hype that feels false, and it's hard to believe the narrative that I just happen to live at the inflection point of human history.
- What’re some instances of this kind of hype in history? How’d those turn out?
- Self-driving was infamously overhyped.
- Introductory theory
- to read: Eliezer's new book seems too sci-fi. Better to focus on more immediate policies?
- Similarly with AI 2027
- The Scaling Era by Dwarkesh
What is worth working on? If AGI arrives, should we be mainly sketching out ideas? Ancestors 100 years ago would scoff at jobs today, saying, “oh that podcaster sure is making lots of rice.” I’m excited to see what jobs are to come.
Should the rest of us really worry about getting thoughts into the corpus?
Is ASI actually possible? If AGI is and compute keeps scaling, presumably yes, but the only proof of general intelligence we have is human-level. What if we don't see aliens because brains are somehow the great filter?
Extrapolating the spectacular performance of GPT-3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters.
— Geoff Hinton
Sure..
vvvvvv
meltingasphalt.com/neurons-gone-wild
^^^^^ THIS IS SO COOL. Every cell, including neurons, once had a drive to survive: agency. Out of this agency, as the essay argues, consciousness emerges naturally. LLM params don't seem very agentic. What if they were?
Is consciousness binary, or continuous? i.e. does there exist a something s.t. frogs : humans :: humans : something, or is there a strict cutoff past which everything is just stronger compute?
- If the former, we're totally missing the point of it all, aren't we.. Can we work towards that something?
- Certainly (?) we're more conscious than even our very early forefathers (pre-Homo). Is there a functional difference, or do we just have more knowledge?
What is intelligence?
Is there such a thing as innate fluid intelligence? Or is it, as Gwern put it, just a huge ensemble of specialized Turing machines?
It’s really difficult sometimes to stop replaying tunes in my head on loop. Can I get my brain to replay the words of Sanderson or Simler or Sapiens or Sagan and become fluent?
How difficult is writing? Compared to, say, solving competitive programming problems? The latter certainly feels more taxing in the moment. Maybe that's because the algorithm usually has to be held entirely in working memory (easier to manipulate via thought alone), whereas writing lets you, well, write it down.
What’s happening during the problem-solving process, if thoughts simply appear in consciousness? Is ability simply glorified pattern-recognition? Is creativity a distinct skill, then?
Why do we have it?
Why did humans evolve such powerful thought machines? Did our niche (feeding on bone marrow) somehow create the need?
Gwern says evolution encoded Turing machines in our brains that added up over time, and that other animals in their own niches simply didn't need anything beyond hyper-specialized intelligence.
Marrow-feeding indeed seems to be exclusive to Homo.