A breakthrough AI-driven vision model developed by the HUN-REN Wigner Research Centre for Physics promises not only to deepen scientific insight into how the brain processes visual information but also to significantly improve the reliability and precision of machine vision systems, the research institute said on Tuesday.
According to the statement, the human brain is a complex network of tightly interconnected regions, linked through two-way pathways whose functions are still poorly understood. When we see something, the brain processes visual data across several levels, from basic shapes to complex concepts. Traditional image-recognition AI, such as algorithms that identify a dog in a smartphone photo, relies on one-way processing: information flows only from lower to higher layers.
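To make that one-way pipeline concrete, here is a minimal sketch of a feedforward image classifier in PyTorch. The architecture, layer sizes, and names are illustrative assumptions for this article, not the researchers' system.

```python
import torch
import torch.nn as nn

# A minimal feedforward ("one-way") image classifier: information flows
# strictly from lower layers (edges, textures) to higher layers (categories).
# Illustrative sketch only; not the Wigner Centre model.
class FeedforwardClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: parts
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # high level: concepts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Strictly bottom-up: no layer ever receives input from a later layer.
        h = self.features(x)
        return self.classifier(h.flatten(1))

# e.g. one 32x32 RGB photo in, class scores out
logits = FeedforwardClassifier()(torch.randn(1, 3, 32, 32))
```

Notice that the forward pass runs once, bottom to top; nothing a later layer computes can ever change an earlier layer's response.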
In contrast, the brain works bidirectionally. Neuronal responses at any stage are shaped not only by earlier processing steps but also by what is expected to happen at the next stage. This means the brain constantly integrates context, evaluating not just what we see, but what that visual input signifies, such as whether a dog is friendly or threatening, approaching or moving away. As a result, neural coding is influenced simultaneously by past and future processing steps.
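One generic way to picture such a bidirectional loop, offered purely as an illustration rather than the published Wigner architecture, is a recurrent block in which the higher stage's current expectation is fed back to modulate the lower stage's response:

```python
import torch
import torch.nn as nn

# A generic sketch of bidirectional processing: on each pass, the higher
# stage's "expectation" is fed back down and combined with the bottom-up
# signal. Illustrates the principle only; not the published model.
class BidirectionalBlock(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.bottom_up = nn.Linear(dim, dim)  # earlier processing step
        self.top_down = nn.Linear(dim, dim)   # expectation from the next stage

    def forward(self, x: torch.Tensor, steps: int = 3) -> torch.Tensor:
        higher = torch.zeros_like(x)  # no expectation yet on the first pass
        for _ in range(steps):
            # The lower stage's response depends on both the input (past
            # steps) and the feedback from the higher stage (future steps).
            lower = torch.relu(self.bottom_up(x) + self.top_down(higher))
            higher = torch.tanh(lower)  # higher stage revises its expectation
        return higher
```

After a few iterations, the lower stage's activity reflects both the raw input and the network's evolving interpretation of it, which is the behaviour the paragraph above describes.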
The new model created by the Wigner Centre researchers replicates this two-way flow of information, producing an AI system that not only sees but also interprets visual data in a brain-like way. This approach allows for a more precise understanding of how the nervous system processes information and could lead to the development of machine vision tools that are more robust and adaptable.
The team, led by Ferenc Csikor, demonstrated that the human nervous system performs far more complex tasks than conventional image-recognition algorithms. Beyond identifying an animal, the brain evaluates intent, context, and potential outcomes. To mirror this flexibility, the researchers argue that AI must move from traditional deep discriminative models toward deep generative ones. These next-generation models, they say, could better resist errors and adversarial attacks, learn from fewer labelled examples, and underpin far more accurate vision systems.
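The discriminative-versus-generative distinction can be stated compactly in code. The following sketch, with assumed shapes and a hypothetical TinyGenerative class, contrasts a model that only maps images to labels with one that also tries to reconstruct, and thereby explain, its input:

```python
import torch
import torch.nn as nn

# Discriminative: maps an image straight to a label, p(label | image).
discriminative = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))

# Generative (illustrative sketch): also learns to reconstruct the image
# from an internal code, so it can measure how well its own interpretation
# explains what it sees.
class TinyGenerative(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 64))
        self.decode = nn.Linear(64, 32 * 32 * 3)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encode(x))        # internal interpretation
        reconstruction = self.decode(z)       # the model's "explanation" of x
        error = (reconstruction - x.flatten(1)).pow(2).mean()
        return z, error                       # large error = input not explained
```

In this framing, a large reconstruction error flags an input the model cannot account for, which is one intuition behind the claim that generative models can better resist adversarial inputs and learn from fewer labelled examples.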
The researchers’ findings were published on Friday in the journal Nature Communications.