A researcher at a children’s hospital in Washington, D.C., presents a picture of a newborn to an algorithm. In a matter of seconds, the software measures the infant’s eye angle, the width of the nose bridge, and the distances between specific facial landmarks, and flags a potential chromosomal abnormality that no one in the room has yet named out loud. The infant is two days old.
The program, known as mGene, was developed at Children’s National by Marius George Linguraru and his colleagues. It currently reports accuracy rates well over 90% for identifying four serious genetic syndromes: Down, DiGeorge, Williams, and Noonan. That it was trained on infants from twenty different nations matters more than it may first appear. As Linguraru has pointed out, most geneticists learn from textbooks that predominantly feature cases from northern Europe. In principle, the algorithm has seen a wider range of faces than the experts who trained it.
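The article does not describe how mGene works internally. As a rough, hypothetical illustration of what “measuring facial landmarks” can mean computationally, the sketch below derives two simple geometric features from 2D landmark coordinates: an inter-landmark distance normalized by inter-pupillary distance (to remove image scale), and an eye angle. All landmark names and coordinates here are invented for illustration and are not drawn from mGene.

```python
import math

def distance(p, q):
    """Euclidean distance between two 2D landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_angle_degrees(inner_corner, outer_corner):
    """Angle of the line through the eye corners, relative to horizontal."""
    dx = outer_corner[0] - inner_corner[0]
    dy = outer_corner[1] - inner_corner[1]
    return math.degrees(math.atan2(dy, dx))

def landmark_features(landmarks):
    """Turn raw landmark coordinates into scale-free features.

    `landmarks` maps hypothetical landmark names to (x, y) pixel
    coordinates; dividing by inter-pupillary distance makes the
    width feature independent of image resolution.
    """
    ipd = distance(landmarks["left_pupil"], landmarks["right_pupil"])
    return {
        "nose_bridge_width_ratio": distance(
            landmarks["nose_bridge_left"], landmarks["nose_bridge_right"]) / ipd,
        "eye_angle_deg": eye_angle_degrees(
            landmarks["left_eye_inner"], landmarks["left_eye_outer"]),
    }

# Invented example coordinates, in pixels:
example = {
    "left_pupil": (120.0, 200.0),
    "right_pupil": (200.0, 200.0),
    "nose_bridge_left": (150.0, 210.0),
    "nose_bridge_right": (170.0, 210.0),
    "left_eye_inner": (140.0, 200.0),
    "left_eye_outer": (100.0, 192.0),
}

feats = landmark_features(example)
```

A real system would extract such features automatically from an image and feed them to a trained classifier; the point of the sketch is only that the measurements named in the article reduce to simple geometry once landmarks are located.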
| Topic | AI in Diagnosing and Aiding Learning Disabilities |
|---|---|
| Key Institution — ADHD Research | Duke Health, Duke Department of Psychiatry & Behavioral Sciences |
| Research Date | April 2026 |
| Key Finding — ADHD | AI can analyze routine electronic health records to estimate ADHD risk years before typical diagnosis |
| Key Finding — Dyslexia | AI-powered eye-tracking identifies dyslexia with up to 90% accuracy |
| Key Finding — ASD/ADHD Motor | AI motor pattern analysis diagnoses ASD or ADHD with 86% accuracy in 15 minutes |
| Key Tool Referenced | DytectiveU — adaptive AI platform for dyslexia intervention |
| Key Tool — Genetic Disorders | mGene — facial landmark AI app developed at Children’s National, Washington D.C. |
| mGene Lead Researcher | Marius George Linguraru, Sheikh Zayed Institute for Pediatric Surgical Innovation |
| Published Review — Neurodevelopmental AI | PMC 2026, lead author Siham Mohamed |
| Conditions Covered | ADHD, Dyslexia, Autism Spectrum Disorder (ASD), Developmental Language Disorder |
| Primary Concern | Algorithmic bias — AI trained predominantly on white, middle-class populations |
| Emerging Framework | “Human-in-the-loop” — AI as decision-support tool, not replacement for specialists |
| PBS Source Author | Jackie Snow, NOVA Next |

This concept, the algorithm seeing what humans miss and doing it faster, is increasingly making its way from research papers into real clinical discussions, particularly in the diagnosis of learning disabilities and neurodevelopmental conditions. According to a study released by Duke Health in April 2026, AI can estimate a child’s likelihood of developing ADHD by evaluating routine electronic health records, years before a traditional diagnosis is usually made. AI-powered eye-tracking systems are also identifying dyslexia with up to 90% accuracy by capturing gaze pauses during reading. Traditionally, these tasks required lengthy assessment periods and specialist referrals, which many families, especially in under-resourced communities, simply never access.
The appeal is strong. In many U.S. states, waiting lists for autism and ADHD evaluations can stretch for months or more than a year. Meanwhile, children who could have benefited from early intervention sit in classrooms where their needs go unmet and their struggles are misread as behavioral issues, a lack of effort, or something in between. Against that backdrop, the question of whether to deploy an AI system that analyzes motor patterns and can flag ASD or ADHD with 86% accuracy in 15 minutes seems less philosophical and more pressing.
Siham Mohamed and colleagues conducted a thorough 2026 review, published in PubMed Central, of AI applied to neurodevelopmental disorders including autism, ADHD, dyslexia, and developmental language disorder. According to the review, machine learning and deep learning techniques are genuinely improving diagnostic accuracy, especially through their capacity to integrate what researchers call multimodal data: behavioral observations, neuroimaging, genetic profiles, and written language samples examined with natural language processing. That combination of data types accomplishes something a single clinician working within a 30-minute appointment is structurally unable to do.
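The review describes multimodal integration at a high level; the stdlib-only sketch below shows the simplest possible version of the idea, under invented assumptions: per-modality feature vectors are concatenated into one input, then scored with a hand-set logistic model. The feature counts, values, and weights are all made up for illustration and do not come from the review.

```python
import math

def fuse_modalities(behavioral, imaging, genetic, language):
    """Concatenate per-modality feature vectors into one input vector.

    Each argument is a list of floats already normalized per modality;
    the fused vector is what a downstream model would consume.
    """
    return behavioral + imaging + genetic + language

def risk_score(features, weights, bias):
    """Logistic-regression-style score in [0, 1] over the fused vector."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy inputs: 2 behavioral, 2 imaging, 1 genetic, 1 language feature.
fused = fuse_modalities([0.8, 0.3], [0.1, 0.5], [1.0], [0.6])
score = risk_score(fused, weights=[0.9, 0.2, -0.4, 0.7, 1.1, 0.5], bias=-1.5)
```

Real systems use far richer fusion strategies (learned embeddings per modality, attention over modalities), but even this toy version makes the structural point: the model sees all the data types at once, which no single clinician in a single appointment can.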
However, the review is cautious about something often overlooked in the excitement: algorithmic bias is not a minor technical detail. It is a central problem. AI systems trained primarily on white, middle-class pediatric populations measurably perform worse on children from other demographic backgrounds. The pattern seen repeatedly in radiology, where models trained on narrow datasets perform poorly when deployed outside their original conditions, applies here too, and the stakes are not abstract. An eight-year-old whose dyslexia goes undiagnosed is not a data point. That is years of a child concluding they are simply not smart enough.
This story has a familiar shape that is difficult to ignore. Despite repeated predictions of obsolescence, radiology has spent the better part of a decade learning that AI’s benchmark performance and its real-world performance are not the same thing, and that human radiologists remain as important and busy as ever. The lesson, learned painstakingly through mammography computer-aided detection programs that ultimately increased biopsies without catching more cancer, is that integrating AI into complex human systems produces complex human outcomes that pure accuracy metrics cannot predict.
The human-in-the-loop appears to be what researchers working on learning disability AI are converging on, albeit reluctantly, given how alluring full automation sounds. AI as a decision-support tool, highlighting patterns and risks that a teacher, pediatrician, or school psychologist can then investigate with the right context and expertise. Not a substitute. An extremely well-informed assistant. Whether insurance, healthcare, and educational systems are built to use that kind of tool effectively, or whether, as has happened before, the technology will arrive before the infrastructure needed to use it responsibly, remains unclear.
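The human-in-the-loop framing has a concrete software shape: the model never emits a diagnosis, only a routing decision for a human. The sketch below shows a minimal version of that triage logic, with thresholds that are invented for illustration and would require clinical validation in any real deployment.

```python
def triage(risk_score, review_threshold=0.5, urgent_threshold=0.85):
    """Route an AI risk score to a human action; never emit a diagnosis.

    The function's outputs are actions for clinicians or educators,
    not clinical conclusions. Thresholds here are illustrative only.
    """
    if risk_score >= urgent_threshold:
        return "prioritize specialist evaluation"
    if risk_score >= review_threshold:
        return "flag for clinician review"
    return "no flag; routine monitoring"

# Example routing for three hypothetical scores:
examples = {s: triage(s) for s in (0.2, 0.6, 0.9)}
```

The design choice worth noticing is what the function cannot do: there is no code path that labels a child. The highest-confidence output is still a request for human attention, which is the whole point of decision support.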
