Scientists uncover differences in how humans and AI think, with potentially far-reaching consequences
======================================================================================================

AI's weakness at forming creative, intuitive connections calls into question its suitability for widespread use in the tools we rely on.


In a study published in February 2025 in Transactions on Machine Learning Research, researchers underscored the importance of evaluating AI systems not just for their accuracy but also for the robustness of their cognitive capabilities. The study focused on the ability of large language models (LLMs) to form analogies.

Martha Lewis, a co-author of the study, gave an example of how AI struggles with analogical reasoning in letter-string problems. When asked, "if abbcd goes to abcd, what does ijkkl go to?", AI systems tend to get it wrong, while humans correctly answer "ijkl" by removing the repeated letter.
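To make the intended rule concrete, here is a toy Python sketch (illustrative only, not the study's evaluation code) of the transformation humans infer from the example pair:

```python
# "if abbcd goes to abcd, what does ijkkl go to?"
# The rule humans infer: delete the first letter that is immediately repeated.

def remove_first_repeat(s: str) -> str:
    """Drop the first character that is immediately followed by a duplicate."""
    for i in range(len(s) - 1):
        if s[i] == s[i + 1]:
            return s[:i] + s[i + 1:]
    return s  # no repeated letter found; return the string unchanged

assert remove_first_repeat("abbcd") == "abcd"
assert remove_first_repeat("ijkkl") == "ijkl"  # the answer most humans give
```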

Similarly, in digit matrix problems, where the task was to complete a matrix by identifying the missing digit, humans performed better than AI. These findings suggest that AI's weakness in zero-shot learning and analogical reasoning limits its decision-making in real-world contexts: it struggles to generalize from prior knowledge to novel, unseen problems and to draw meaningful parallels between similar but distinct cases.
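For illustration, a hypothetical digit-matrix item of the kind described (not taken from the study's materials) might look like this, where every row follows the same rule and the final entry is missing:

```python
# Hypothetical digit-matrix puzzle: each row counts up by a constant step,
# and the solver must supply the missing final entry.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, None],  # the intended answer is 9
]

row = matrix[2]
answer = row[1] + (row[1] - row[0])  # extend the row's constant step
assert answer == 9
```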

This is particularly significant in the legal domain, where AI is increasingly used for tasks like research, case law analysis, and sentencing recommendations. If AI fails to recognize how legal precedents apply to slightly different cases, its weakness in zero-shot learning and analogical reasoning could affect real-world outcomes.

Moreover, AI models were found to be susceptible to answer-order effects in story-based analogy problems. This indicates that the sequence in which information is presented can influence AI's responses, which could lead to inconsistent results.
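A simple way to picture this kind of robustness check (a hypothetical harness, not the study's code, with `ask_model` standing in for a real model call) is to pose the same question with the answer options permuted and see whether the model's choice stays the same:

```python
from itertools import permutations

def is_order_robust(ask_model, question: str, options: list[str]) -> bool:
    """ask_model(question, options) should return the option text the model picks.

    The model is order-robust on this item only if its pick never changes
    as the presentation order of the options is shuffled.
    """
    picks = {ask_model(question, list(p)) for p in permutations(options)}
    return len(picks) == 1
```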

The study does not document instances of AI 'hallucinating' or fabricating content, but it does highlight that most AI applications rely on the availability of large amounts of training data for pattern identification.

To improve AI generality and adaptability, recent research suggests that systems need symbolic representation, interactive feedback loops, and test-time task augmentation to better handle novel rule sets and compositional tasks. Practitioners in legal AI likewise point to the need to combine language models with symbolic AI components and human expert oversight to compensate for these reasoning deficits in complex domains such as litigation, as sketched below.
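A minimal sketch of that hybrid pattern, under assumed interfaces (`llm_propose`, `symbolic_check`, and `human_review` are placeholders, not a real library API), might route every model output through a rule-based check with a human fallback:

```python
def hybrid_answer(query, llm_propose, symbolic_check, human_review):
    draft = llm_propose(query)           # language model drafts an answer
    if symbolic_check(query, draft):     # symbolic/rule-based verification
        return draft
    return human_review(query, draft)    # human-in-the-loop fallback
```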

In summary, the study shows why AI systems should be evaluated for the robustness of their cognitive capabilities, not just their accuracy. AI's limitations in zero-shot learning and analogical reasoning underscore the need for hybrid approaches, including symbolic reasoning and human-in-the-loop systems, to ensure sound, contextually apt legal judgments.
