{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "id": "bg_31cb8b1050c4",
  "canonicalUrl": "https://pseedr.com/platforms/biological-computing-underhang-why-anns-may-inherently-surpass-the-human-cortex",
  "alternateFormats": {
    "markdown": "https://pseedr.com/platforms/biological-computing-underhang-why-anns-may-inherently-surpass-the-human-cortex.md",
    "json": "https://pseedr.com/platforms/biological-computing-underhang-why-anns-may-inherently-surpass-the-human-cortex.json"
  },
  "title": "Biological Computing Underhang: Why ANNs May Inherently Surpass the Human Cortex",
  "subtitle": "Coverage of lessw-blog",
  "category": "platforms",
  "datePublished": "2026-04-11T00:05:47.999Z",
  "dateModified": "2026-04-11T00:05:47.999Z",
  "author": "PSEEDR Editorial",
  "tags": [
    "Artificial Intelligence",
    "Neuroscience",
    "Machine Learning",
    "Cognitive Computing",
    "Foundation Models"
  ],
  "wordCount": 475,
  "sourceUrls": [
    "https://www.lesswrong.com/posts/YLuifkTPR7TPLqoat/biological-computing-underhang"
  ],
  "contentHtml": "\n<p class=\"mb-6 font-serif text-lg leading-relaxed\">In a recent post, lessw-blog explores the computational limits of the human brain compared to artificial neural networks, suggesting that AI architectures possess a fundamental depth and speed advantage over biological intelligence.</p>\n<p><strong>The Hook</strong></p><p>lessw-blog's post lays out a theoretical framework termed the Biological Computing Underhang. The piece provides a rigorous comparison between the computational capabilities of the human cortex and modern artificial neural networks (ANNs), ultimately suggesting that artificial systems may possess inherent structural advantages over biological intelligence.</p><p><strong>The Context</strong></p><p>The intersection of neuroscience and artificial intelligence has long been fertile ground for theoretical breakthroughs. Historically, AI development has drawn heavy inspiration from biological brains, from the basic structure of perceptrons to the complex architectures of deep learning. However, as large language models (LLMs) and foundation models scale to unprecedented sizes, a critical question emerges: are we simply replicating the brain, or are we building systems that bypass its fundamental physical limitations? Understanding the constraints of biological cognition, such as processing speed, layer depth, and synaptic plasticity, is essential for evaluating the long-term trajectory of machine learning. If biological brains operate under hard physical constraints that silicon-based systems can sidestep, it fundamentally shifts how researchers should benchmark artificial intelligence and design future architectures.</p><p><strong>The Gist</strong></p><p>lessw-blog's analysis centers on the argument that the human cortex is inherently depth-limited. 
Specifically, its ability to represent complex abstractions in a single forward pass is constrained by biological realities, establishing a hard ceiling on the types of reasoning that are biologically possible. The author estimates that a single cortical area can be simulated by approximately 14 ReLU transforms, operating at roughly 4 milliseconds per pass.</p><p>To put this into perspective, the post contrasts these biological metrics with modern AI architectures. GPT-3, for instance, stacks 96 transformer layers (192 serial sublayers) and can execute a forward pass in less than 2 milliseconds. This represents a massive speed and depth advantage for artificial systems. The author posits that, with sufficient training data, a model matching GPT-3's depth could theoretically be trained to replicate every cortical microcircuit. Consequently, even if a human brain were given arbitrary developmental time, it should be computationally dominated by larger ANNs, assuming identical inter-region scaffolding and reward signals.</p><p>Furthermore, the post highlights biological limitations that have no counterpart in artificial models. For example, certain cognitive abilities, such as acquiring perfect pitch, are restricted to critical learning periods in humans. This is due to the physical encagement of synapses (often involving structures like perineuronal nets) during brain maturation. ANNs, free from such biological aging processes, do not suffer from these permanent physical lock-ins.</p><p><strong>Conclusion</strong></p><p>This analysis is significant for the AI and machine learning communities. It implies that a biological underhang exists, where human cognitive limits prevent certain types of learning that machines might easily master. By recognizing these biological limitations, AI researchers can pursue novel architectures that exploit these theoretical advantages rather than merely mimic human neural pathways. 
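</p><p>At the figures summarized above (about 14 transforms per roughly 4 ms cortical pass, versus 192 serial steps per under 2 ms forward pass for GPT-3), the serial-depth gap can be made concrete with a quick back-of-the-envelope calculation. The sketch below is purely illustrative and uses the post's reported numbers as assumptions, not independently verified measurements.</p>

```python
# Back-of-the-envelope serial-depth rates, using the figures reported
# in the post (assumed here, not independently measured).
cortex_transforms_per_pass = 14   # ReLU-equivalent transforms per cortical pass
cortex_pass_seconds = 0.004       # ~4 ms per pass

gpt3_steps_per_pass = 192         # serial steps per forward pass, as reported
gpt3_pass_seconds = 0.002         # <2 ms per forward pass

cortex_rate = cortex_transforms_per_pass / cortex_pass_seconds
gpt3_rate = gpt3_steps_per_pass / gpt3_pass_seconds

print(f'Cortex: {cortex_rate:.0f} serial transforms per second')   # 3500
print(f'GPT-3:  {gpt3_rate:.0f} serial steps per second')          # 96000
print(f'Ratio:  ~{gpt3_rate / cortex_rate:.0f}x')                  # ~27x
```

<p>On these assumed numbers, the artificial system sustains roughly 27 times the serial depth per second, which is the sense in which the speed and depth advantages compound. 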
To explore the intricate details of corticothalamic cycles, microcircuit estimations, and the broader implications for AGI, <a href=\"https://www.lesswrong.com/posts/YLuifkTPR7TPLqoat/biological-computing-underhang\">read the full post</a>.</p>\n\n<h3 class=\"text-xl font-bold mt-8 mb-4\">Key Takeaways</h3>\n<ul class=\"list-disc pl-6 space-y-2 text-gray-800\">\n<li>The human cortex is depth-limited, restricting its ability to process complex abstractions in a single forward pass compared to deep ANNs.</li><li>A single cortical area is estimated to be equivalent to about 14 ReLU transforms at roughly 4 ms per pass, whereas models like GPT-3 run far more serial layers in far less time.</li><li>Biological brains face physical maturation constraints, such as synaptic encagement, which close critical learning periods, a limitation absent in artificial systems.</li><li>Larger ANNs theoretically dominate the computational capacity of the human cortex when provided with sufficient data and equivalent scaffolding.</li>\n</ul>\n\n<p class=\"mt-8 text-sm text-gray-600\">\n<a href=\"https://www.lesswrong.com/posts/YLuifkTPR7TPLqoat/biological-computing-underhang\" target=\"_blank\" rel=\"noopener\" class=\"text-blue-600 hover:underline\">Read the original post at lessw-blog</a>\n</p>\n"
}