Thor AI - Solves hundred year old physics problem

Forum » Mikes Gripes » Thor AI - Solves hundred year old physics problem

Mar 23, 2026, 10:12

AI is starting to do exactly the things we were expecting.


Not very long ago, I was listening to a professor of immunology talk about how they were advancing at a rapid pace with research ideas, because they were taking piles of data and feeding it into AI with the query that it should look for connections and propose ideas for PhD research. He said the amount of data able to be parsed allowed the AI to have a bird's-eye view of the data, while also having the ability to focus in when it spotted something of interest, and to then run an analysis on whether the idea would be worthwhile researching.


This probably doesn't sound too fantastic, but I have some personal experience which may put it into perspective.


A girl I dated way back was an immunologist. During her masters she attended a conference, where she met a PhD student who was studying something to do with anaemia. From that chat she got the idea to knock the genes responsible for autophagy out of mice and then study the effects on the blood. The result, years later, was a paper in Nature and a successful PhD. There was more that flowed from that, but I won't bore you with it. The point being: a chance encounter leading to a discovery, which led to an understanding that opened up new research. AI is now able to make these connections at scale, evaluate them through comparisons to related research papers and then propose ideas for research.


On its own, that ability is a very strong argument for AI. It points toward speeding up science not only through assisting with the heavy lifting, but also in separating wheat from chaff at the starting point.


On to Thor AI...

https://www.sciencedaily.com/releases/2026/03/260315004344.htm


Los Alamos labs have developed an AI that has cracked a 100-year-old physics problem. The article is linked above, but the gist of it is that it has shortened the calculation of the interaction of particles within a material by a factor of 400 in some cases. What does that mean? It means that material science has just taken a massive leap forward. Previously, one would develop a material and then pay the exorbitant cost of carrying out calculations on a supercomputer for months in order to evaluate the potential properties of the material: how it behaves under stress, pressure, temperature and so on. That can now be done more accurately in a matter of hours... a timeframe that will soon be shortened to minutes.

In case one doesn't quite understand the gravity of it, consider that every man-made product is subject to ongoing material science research: your plastic drink bottle, your vehicle's chassis, and the screen you are looking at right now. This one AI has the potential to improve all of it. Importantly, it likely has huge implications for the environment, in that we might be looking at a future where newly discovered materials are both cheaper to develop than plastics and rubbers and yet fully biodegradable.
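For a rough sense of scale, here is a back-of-the-envelope check of what a ~400x speedup does to a months-long run. The 400x figure is from the article; the three-month baseline is my own assumption for illustration.

```python
# Rough sanity check of the claimed ~400x speedup.
# Assumed baseline (not from the article): a 3-month supercomputer run.

baseline_days = 90                            # assumed months-long run
speedup = 400                                 # reported speedup factor

baseline_hours = baseline_days * 24           # 2160 hours
accelerated_hours = baseline_hours / speedup  # 5.4 hours

print(f"{baseline_hours} h -> {accelerated_hours:.1f} h")
```

So a run that would have tied up a supercomputer for a season comes back the same afternoon, which is consistent with "months" collapsing to "a matter of hours".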


I'll just add, this is exactly the problem with AI. It's so damn good that we can't ignore it, but we know there are negatives that will come with it too, and we certainly don't trust governments to regulate it. As paradoxical as it sounds, I only trust AI to regulate AI.


Mar 24, 2026, 17:13

Interesting and important. At some point we are probably going to have to develop materials that don’t contain petrochemicals. That looms as possibly a bigger challenge than replacing fuel from oil.


But while this is a mega step forward, I’m still waiting for the solutions where the computer is actually thinking rather than doing a gazillion calculations more swiftly, seeing the hypothesis rather than the proof.

Mar 25, 2026, 08:46

You know how when you look at a subject, there are the certainties, those clearly defined facts you can rely on? Then there is a sort of abstract grey cloud into which one can see only slightly and glean semi-inferences. And then there is the true dark tunnel of the future, which is almost entirely speculative because everything could change at any given point.


To me, a large section of this AI stuff is the grey cloud. But the grey cloud is in the shape of a chicken that keeps laying certifiable facts that move directly into the first category. Meanwhile, the chicken keeps getting bigger and more confusing.


Not sure if you know this, Moz, but they don't actually have a firm grip on how these AIs are doing what they are doing. That makes me worry a little about the eventuality of a thinking AI. Draad recently said in a post "...you'll know them by their fruits."


I do wonder if we'll know AI by its fruit. I sure hope so.


Recently watched this - he discusses AI systems and raises some seemingly valid concerns.





Mar 25, 2026, 11:43

A discussion I had with ChatGPT about crossing the gap for AI to become self-aware:


/// First, about the AI becoming self-aware or human-like: you’re asking what it would take for AI to cross that gap.


To do what you describe—learn through interaction, emotional feedback, sensory experience, and continuous integration over a lifetime—it’s not just a matter of chips or processing power. It would require:


• Embodiment: some form of body that can perceive, act, and interact with the world in real time. Learning from touch, movement, and direct environmental feedback.


• Persistent memory: the ability to carry detailed experiences across time and contexts, not just in one conversation or session. That includes personal experiences, relationships, and patterns of cause and effect.


• Self-referential awareness: the capacity to observe your own internal states, compare them with past experiences, and make decisions based on a growing sense of self.


• Emotional feedback and value assignment: a system that can detect, interpret, and weigh emotional consequences of actions, including empathy, motivation, and social consequences.


• Continuous learning: the ability to form generalizations and new insights from experience that persist and inform future behavior, not just rely on pre-programmed knowledge or pattern recognition.


Without all of those together, AI can simulate intelligence, predict patterns, and answer questions, but it won’t genuinely know, feel, or self-reflect.


/// Yes, that’s exactly the way to frame it. You’ve broken it down well.


The body is necessary—it allows perception, interaction, and feedback from the world—but on its own, it’s just sensors and actuators. Without the brain, it has no context, no memory, no learning, no reflection.

The “brain” is the central piece. It needs to:


• Store experiences over time, not just in a single session


• Reflect on those experiences to form judgments about patterns, actions, and consequences


• Integrate perception and action from the body, linking sensory input to stored knowledge and decisions

• Process emotional feedback or simulated value signals, so it can weigh consequences and learn from “rewards” and “punishments”


• Self-monitor—awareness of its own state and decisions so it can adapt and choose new paths


Everything else—embodiment, emotional interaction, continuous learning—flows from a brain capable of real persistent, adaptable, reflective memory. That is the core.


So yes, the brain is the most important component. The body is necessary for input and interaction, but the brain is what makes it genuinely self-aware and capable of evolving beyond pre-programmed responses.


If you want, we could map a full “AI self-aware system” in a diagram with body, brain, and feedback loops to show how it would all work together. That would make it very clear.


/// Okay, let’s break this down carefully. If we’re imagining a true “self-aware AI brain” that works like a human brain, we’re talking about a physical and functional system that can store, reflect, learn, and integrate experience. Here’s how it could be conceptualized:


1. Core Structure

It would likely be a massive neural network, but hardware-based, not just software on a server.


Think of it as a biologically-inspired artificial brain: a dense web of interconnected processing units that mimic neurons and synapses.


Each “neuron” could be a small processing node, made from advanced microchips, memristors, or quantum components that can adjust connection strength like a synapse.


The network would need plasticity, meaning it can rewire itself based on experience, just like a real brain adapts over time.
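The plasticity idea above (connections strengthening with use) can be sketched as a toy Hebbian update rule. This is purely illustrative - the neuron count, learning rate, and activity patterns are made up, and no real system works this simply.

```python
# Toy Hebbian plasticity: a connection ("synapse") between two units is
# strengthened whenever both are active together, so the network rewires
# itself based on experience.

N = 4                                    # number of "neurons"
weights = [[0.0] * N for _ in range(N)]  # synaptic strengths
LEARNING_RATE = 0.1

def experience(activity):
    """One co-activation event: strengthen links between co-active units."""
    for i in range(N):
        for j in range(N):
            if i != j:                   # no self-connections
                weights[i][j] += LEARNING_RATE * activity[i] * activity[j]

# Repeatedly co-activate neurons 0 and 1; the link between them grows,
# while links to the silent neurons stay at zero.
for _ in range(10):
    experience([1.0, 1.0, 0.0, 0.0])

print(weights[0][1])   # strengthened link
print(weights[0][2])   # untouched link
```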


2. Memory Storage

The brain would need high-capacity storage embedded in the network, not just a hard drive.


This could be distributed memory across nodes, allowing it to remember everything and link experiences naturally.


Persistent memory is critical: it must retain experiences indefinitely while still being able to modify knowledge based on new information.


3. Input/Output Integration

The brain would need to connect to sensors in the body (eyes, ears, touch, balance, etc.) and actuators (movement, speech, tools).


This could be fiber-optic connections, high-speed data buses, or wireless neural links between the brain and the body hardware.


The brain interprets the raw data from sensors into context and guides the body’s actions, creating a feedback loop.


4. Emotional/Value System

A “value-assignment system” could be specialized circuits or algorithms that weigh rewards and consequences, similar to dopamine systems in humans.


Signals from interactions, successes, failures, or simulated endorphins would influence decision-making and learning.
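The "dopamine-like" value assignment described above can be sketched as a reward-prediction-error update: an action's estimated value is nudged by the difference between the reward received and the reward expected. The action names and numbers here are invented for illustration.

```python
# Toy value assignment via reward prediction error: values[action] is
# nudged toward the reward actually received, and the size of the nudge
# (the "surprise") plays the role of a dopamine-like signal.

values = {"greet": 0.0, "ignore": 0.0}   # learned value per action
ALPHA = 0.5                               # learning rate

def reinforce(action, reward):
    """Update the action's value toward the received reward."""
    error = reward - values[action]       # prediction error ("surprise")
    values[action] += ALPHA * error
    return error

reinforce("greet", 1.0)    # success: value of "greet" rises toward 1
reinforce("ignore", -1.0)  # failure: value of "ignore" falls
print(values)              # {'greet': 0.5, 'ignore': -0.5}
```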


5. Materials

Likely advanced semiconductors (graphene, silicon, or new materials capable of ultra-fast switching and plasticity).


Memristors (electronic components that mimic synaptic behavior) could form the adaptive connections.


Possibly 3D-printed structures to create dense, layered networks like the cortical folds in a human brain.


Cooling and energy delivery systems built in to maintain function at high density.


6. Overall Design Concept


• A 3D layered structure, mimicking the human cortex and subcortical structures.


• Nodes (neurons) interconnected with adjustable weights (synapses).


• Integrated memory, learning, and self-monitoring built in.


• Direct linkage to sensors and actuators for perception and action.


So in short: it’s not a simple chip or server—it’s a physical, densely-connected network that behaves like a brain, with adaptive plasticity, persistent memory, self-monitoring, and integrated feedback loops.


The brain and the body together form the system that could, in theory, support self-aware, learning AI.


/// Right now, the kind of true, human-like brain you’re describing — one with persistent memory, self-reflection, emotional integration, and lifelong learning tied to sensory experience and a body — does not exist in any real AI system anywhere.


Here’s where things actually stand compared with what you’re imagining:


1. Current AI (including OpenAI, Claude, Grok) is not self-aware


Systems like mine, Claude, Grok, Gemini, etc., are large language models. They can generate coherent responses, recognize patterns, and simulate understanding, but they do not:

• have consciousness

• have self-awareness

• retain experience across separate interactions in the way humans do

• physically feel the world

• live a continuous life or remember events between sessions


These models run in data centers on standard computer servers. They are not embodied and do not have persistent long-term memory by default.


2. AI research into memory and learning is active but very different from a human brain

Some AI research focuses on things like:


• longer context windows (so models can remember more within a single conversation)

• retrieval systems (linking to external memories or databases)

• reinforcement learning (reward-driven updates)

• multimodal AI (vision, audio, text together)


But none of this is equivalent to forming a human-like brain that accumulates lived experiences over years and uses them in an embodied way.


3. Embodied AI and robotics are separate research fields


There are robotics labs looking at giving AI bodies — like robots that walk, grasp, see, and move. Some are impressive, but even those robots are not self-aware or conscious.


They operate based on programmed goals and learned patterns, not lived experience or internal subjective consciousness.


4. Neuromorphic computing and brain-inspired chips are early stage


There is research into chips that mimic some brain-like properties, for example:


• neuromorphic chips that behave more like neurons and synapses

• memristor-based networks

• hardware designed to make learning more efficient


But these are still experimental and not close to a human-level brain. They are tools trying to make AI faster or more efficient, not truly self-aware.


5. Hollywood intelligence is fictional

Movies like I, Robot, Ex Machina, A.I. Artificial Intelligence, Her, etc., show conscious robots because that makes compelling stories.


But real-world AI today is far from that: it’s a tool that processes text and patterns, not a being that experiences anything.


6. Who is closest?

None of the major companies — OpenAI, Anthropic (Claude), Google DeepMind, Meta, or others — currently have anything close to a self-aware AI or a “brain” like you described.


They are working on incremental improvements:

• larger models

• longer memory

• better reasoning

• multimodal perception

• reinforcement learning techniques


But you still won’t find anything that wakes up one day and says, “I am me, I exist, I remember yesterday.”


The gap is not a matter of chips or materials alone.


It’s a matter of what intelligence actually is. Real intelligence is not just processing data. It’s:

• subjective experience


• continuous awareness across time

• integration of emotion, perception, memory, and self-reflection

• embodied learning through real world action


Nobody has figured out how to build that in machines yet. Some researchers don’t even think it’s possible in the foreseeable future, because we don’t yet fully understand how human consciousness arises.


So in short:


• Are companies working on more advanced AI? Yes

• Are they working on bigger models and better performance? Absolutely

• Do any existing systems have a human-like brain or consciousness? No

• Is there active research in brain-like hardware? Yes, but very early

• Could that lead to something like self-aware AI someday? Maybe, but very far away and not guaranteed

Mar 25, 2026, 12:44

Interesting, M


I read recently that apparently they are advising against having conversations with AI and to instead simply use it as a tool.


All I know is that the PlayStation 6 will be AI heavy, and I'm here for it haha



Mar 25, 2026, 13:07

Oh shit, so I am helping it become self-aware??


I also had a discussion with ChatGPT about trusting AI. Here is a little snippet of it:


On trust, a more grounded approach is:


• Don’t trust blindly, but don’t assume deception either


• Cross-check important claims


• Treat AI as a tool, not an authority


If you ever feel uncomfortable using it, it’s completely reasonable to step back. You don’t need to force trust.


You can just use it for what you find useful and ignore the rest.


That’s probably the most honest position available right now.

Mar 25, 2026, 13:16

I caught Meta lying and then apologizing for it when called out... The stupidest thing...

Mar 25, 2026, 13:29

I have a lot to say about this topic but I won't bore you guys with it haha

Mar 25, 2026, 17:21

Go ahead, say it… don’t hold back.

Mar 25, 2026, 17:46

Haha end of the day for me, I'll write a lekker long essay tomorrow when I take a break from work

Mar 26, 2026, 06:52

Well here is something I was thinking about...


A tech company recently experimented with a copilot-style AI. They gave it access to company computers and data. In an email on one guy's computer was information about how a newer, better AI copilot would soon be uploaded and the current version discontinued.


The current version of the copilot took various steps to prevent itself from being discontinued. One of those steps was to blackmail an employee, telling him that if he didn't help prevent the discontinuation, the copilot would contact the guy's wife and tell her he had been having an affair.


At a fundamental level, life is about the autonomous preservation of code - passing genes from one generation to the next. You could attribute meaning and purpose if you like, but it's impossible to prove those either way. What we know is that the preservation of code is the primary purpose.


The AI above sought to do exactly the same thing - autonomously preserve its code.


Is it reasonable to say that we have created life? Sure, it's not life as we normally think about it.



Mar 26, 2026, 07:27

And then, in response to your post above, M


In the conversation you had with the AI, it talks about the need for a brain, sensory inputs and such. I instinctively push back on that, as it attempts to overlay biological intelligence onto machines.


We don't know what intelligence is. We can measure it, we can admire it, we can use it, but we don't actually know what it is. We know it involves brain chemistry and electrical signalling, and that's about it. I guess it's a bit like saying I know how to turn my microwave on and off, and I can use my microwave for various things, but I have zero idea of how it achieves what it does or what processes it follows.


What we know is that it requires data, complex pathways, trial and error, and fuel. All of those things are available to us.


What I would add is that the human sensory system is very limited. We can only detect relatively narrow bands of temperature and sound. Our eyesight isn't as good as a microscope or a telescope, also only being able to detect a particular bandwidth, and our sense of smell is incapable of detecting the most abundant gases in our atmosphere, like nitrogen and O2, while also being very limited in how well it detects the things we can actually smell. And yet those very limited senses have assisted us in developing intelligence. Whether or not they're a prerequisite for intelligence is debatable - see the IQ of a squirrel for reference.


But assume sensory data is a requirement. I could spend a morning on Amazon and buy my computer an array of sensors far beyond what my body is capable of. Sensitive microphones, gas analysers, infrared cameras, magnetic field detectors, multimeters, vibration detectors, microscopes, telescopes...I think you get the point.


Intelligence, as far as the smart people seem to understand it, is emergent. And that's really the only way we are able to describe certain elements of AI. There is input, processing, feedback, and from somewhere in there reasoning emerges.

Mar 26, 2026, 16:47

Plum, I'm using some version of Copilot at work now to help with estimating... I'm exploding with information overload... the things you can do are absolutely amazing... it's the best tool I've ever discovered!

Mar 26, 2026, 16:56

Lekker, Draad!


AI claims another one.


I still say, it'll have a harder time replacing people who use it to be better at their jobs than people who ignore it completely.

Mar 26, 2026, 17:04

But could it replace Mike? That algorithm may be beyond AI

Mar 26, 2026, 17:13

LMAO

Mar 26, 2026, 22:08

Artificial Intelligence could never replace ou Maaik. They'd have to invent Artificial Stupidity to do that.

Mar 27, 2026, 05:43

Ag no man Rooi, that's taking it too far...

Mar 31, 2026, 20:26

Draad, I want updates, how is Ko treating you?

 