My post on AGI-type approaches: brain-based models vs. LLMs

I asked ChatGPT this question:

"So when you look at current AI technology like LLM technology, it is currently, it is highly based on digital prompt inputs and analyzing large datasets with billions of parameters from text data.   The human brain does not necessarily operate that way.   What are current approaches to bridge the gap between low tech human brain processes to the machine learning algorithms with LLMs.  Projects like the numenta project and thousand brain project"

Here is the response from ChatGPT (model as of June 5):


"Recent AI research increasingly looks to the brain for inspiration. Neuroscience-derived ideas—such as sensorimotor learning, modular cortical columns, and memory consolidation—are being used to overcome limitations of today’s deep learning systems. For example, Numenta’s new Thousand Brains Project is an open-source initiative to build AI on cortical principles (sensorimotor input, grid-cell reference frames, parallel “columns”, and continual learning)numenta.comarxiv.org. Numenta describes its framework as “open-source sensorimotor learning… based on the principles of the neocortex”numenta.com. In this view, intelligent agents learn by moving and sensing in the world, building many partial object models in parallel and “voting” to form coherent perceptions"

"

Numenta’s NuPIC: Brain-Based LLM Inference

In parallel, Numenta applies its neuro-inspired algorithms to enhance large language model (LLM) deployment. Their product NuPIC (Numenta Platform for Intelligent Computing) is a commercial inference engine that “draws on Numenta’s decades of neuroscience research” to run LLMs efficiently on CPUsintel.comventurebeat.com. NuPIC uses novel data structures and memory-optimized routines derived from cortical theory. VentureBeat reports that NuPIC can improve efficiency by up to 100× on some workloads and dramatically cut inference cost, enabling LLMs to run on commodity hardware"
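One cortical principle Numenta has long emphasized is sparsity. As a minimal sketch of the idea (my own illustration, not NuPIC's actual implementation): keep only the top-k activations, so most units are exactly zero and contribute no compute downstream.

```python
import numpy as np

def topk_sparsify(x, k):
    """Zero out all but the k largest activations — a crude stand-in for
    the sparse representations cortical theory suggests."""
    idx = np.argpartition(x, -k)[-k:]  # indices of the k largest values
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

x = np.array([0.1, 0.9, 0.05, 0.7, 0.2, 0.8])
sparse = topk_sparsify(x, 2)
# Only the two strongest activations survive; the rest are exactly zero.
print(np.count_nonzero(sparse))  # 2
```

With highly sparse vectors, downstream matrix multiplies can skip the zero entries, which is one way efficiency gains of the kind described above become possible on ordinary CPUs.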

"In summary, a diverse array of initiatives is actively fusing neurobiology and AI. Companies like Numenta and Inait explicitly apply brain principles (cortical columns, causal learning, continuous memory) to AI systems, while startups like Cortical Labs and BrainChip implement biological or spike-driven hardware. New research (e.g. Larimar, BriLLM, CTM) embeds cognitive concepts (episodic memory, signal timing) into LLMs and neural nets."

Also, ChatGPT listed these LLM flaws versus brain advantages:

1. Static vs. Continuous Learning

LLM flaw: LLMs are trained on a fixed dataset and do not learn after deployment unless fine-tuned with expensive retraining. They can’t adapt in real time.

Brain advantage: The human brain learns continuously from experience. It can adapt moment-by-moment, retain or forget information flexibly, and generalize from limited data.
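The static-vs-continuous contrast can be made concrete with a toy example (entirely illustrative): a frozen model keeps its deployment-time estimate forever, while an online learner updates with each new observation.

```python
class FrozenModel:
    def __init__(self, estimate):
        self.estimate = estimate      # fixed at "training" time
    def predict(self):
        return self.estimate

class OnlineModel:
    def __init__(self, estimate):
        self.estimate = estimate
        self.n = 1
    def observe(self, value):
        # Running mean: the estimate adapts with every new data point.
        self.n += 1
        self.estimate += (value - self.estimate) / self.n
    def predict(self):
        return self.estimate

frozen, online = FrozenModel(10.0), OnlineModel(10.0)
for v in [20.0, 20.0, 20.0]:
    online.observe(v)  # the world has drifted to ~20
print(frozen.predict(), round(online.predict(), 2))  # 10.0 vs 17.5
```

An LLM is "frozen" in this sense after training; retraining or fine-tuning is the expensive way to move its estimate, whereas the brain behaves more like the online learner.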

2. Lack of Embodiment

LLM flaw: LLMs are text-only, disembodied systems. They do not interact with the physical world or receive real sensorimotor feedback.

Brain advantage: Brains are embedded in bodies. Humans learn by interacting with and sensing the world—touch, movement, vision, hearing, smell, etc. This grounded learning leads to richer, more flexible understanding.
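A sensorimotor loop, in its most stripped-down form (purely illustrative), alternates act → sense → update, so that what the agent learns is grounded in the consequences of its own movements:

```python
def sensorimotor_loop(world, position, steps):
    """Move, sense at the new position, record what was sensed.
    The learned map exists only because the agent acted."""
    learned_map = {}
    for _ in range(steps):
        position = (position + 1) % len(world)   # act: move one step
        learned_map[position] = world[position]  # sense and record
    return learned_map

world = ["wall", "door", "table", "window"]
print(sensorimotor_loop(world, position=0, steps=3))
# positions 1..3 have been explored; position 0 was never sensed
```

A text-only LLM has no such loop: it receives tokens rather than the feedback of its own actions, which is the embodiment gap described above.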

On distributed AI

Autonomous Agents + Blockchain (Smart Contracts)

  • Concept: AI agents run autonomously, interact with smart contracts on-chain, and use the blockchain for identity, data integrity, and incentives.

  • Key Features:

    • AI logic off-chain (e.g., in Python/LLMs) but interacts with on-chain components.

    • Payments, access control, and decision logs are recorded on-chain.

  • Examples:

    • Fetch.AI – Decentralized agents for Web3 and IoT that interact via a multi-agent framework using machine learning and blockchain.

    • Ocean Protocol – Decentralized data exchange with AI/ML model sharing, where agents can request and pay for data or predictions.

    • SingularityNET – Marketplace for decentralized AI services, where agents and services can interact via smart contracts and APIs.
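The "decision logs recorded on-chain" pattern boils down to a tamper-evident append-only log. A hash-chained sketch of the idea (not any specific protocol's API): each entry commits to the previous entry's hash, so rewriting history is detectable, which is the property the blockchain supplies to an off-chain agent.

```python
import hashlib
import json

def append_entry(log, decision):
    """Append a decision whose hash covers the previous entry's hash,
    mimicking the tamper-evidence an on-chain log gives an agent."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"decision": decision, "prev": prev_hash}, sort_keys=True)
    log.append({"decision": decision, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return log

def verify(log):
    """Recompute the chain; any edited entry breaks every later link."""
    prev = "0" * 64
    for e in log:
        payload = json.dumps({"decision": e["decision"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
for d in ["buy data", "pay 5 tokens", "publish prediction"]:
    append_entry(log, d)
print(verify(log))                     # True
log[1]["decision"] = "pay 0 tokens"    # tamper with history
print(verify(log))                     # False
```

On a real chain the hashes live in blocks validated by consensus; the agent's off-chain AI logic (the Python/LLM side mentioned above) only needs to submit and read these entries.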
