We have benchmarks for everything in AI. We can tell you how many parameters a model has, how fast it runs inference, how it scores on MMLU and HumanEval, and how much revenue the company behind it generated last quarter. We track these numbers obsessively. But here is a question we are not tracking with nearly the same rigor: what is AI doing to the children growing up inside of it?
We measure what we value. And right now, our measurement systems say we value model capability far more than we value the well-being of the generation that will inherit whatever we build.
That is the gap the AI Impact Index was created to close.
What the AI Impact Index is
The AI Impact Index is a framework for measuring the real-world effects of AI systems on children and young people. Not in the abstract — not as a policy paper or an ethical principles document — but as a structured, repeatable set of indicators that parents, educators, policymakers, and technologists can use to understand what is actually happening.
It was born out of a frustration I've carried for a long time. I have spent my career building AI systems for enterprise clients — banks, governments, healthcare systems, Fortune 500 companies. I understand what responsible AI looks like when the stakes are regulatory compliance and financial risk. But when I look at the AI systems touching my own children's lives — the recommendation algorithms, the adaptive learning platforms, the social media feeds, the generative tools they use for homework — I see nothing close to the same standard of care.
Enterprise AI has governance. Children's AI has terms of service.
The five dimensions
The AI Impact Index evaluates AI systems across five dimensions, each designed to capture a different facet of how AI affects its youngest users. A sketch of how a scorecard over these dimensions might be encoded follows the list.
Cognitive development.
How does the AI system affect a child's ability to think critically, solve problems independently, and develop reasoning skills? Does it augment learning, or does it create dependency? An AI tutor that gives answers is different from one that teaches the child how to find answers. The distinction matters enormously for long-term cognitive development.
Emotional well-being.
What is the emotional impact of interacting with this system? Does it create anxiety, social comparison, addiction loops, or fear of missing out? Or does it foster confidence, curiosity, and self-expression? AI systems designed for engagement often optimize for time-on-screen. Time-on-screen is not the same as time-well-spent — especially for a developing mind.
Social development.
Is the AI system replacing human connection or enriching it? Children who spend their formative years interacting with chatbots and algorithmic feeds develop different social skills than children who spend that time in conversation, collaboration, and conflict resolution with other humans. We need to measure this.
Safety and privacy.
How does the system collect, store, and use children's data? Is the child's identity protected? Can the system expose them to harmful content, predatory interactions, or manipulative commercial practices? Enterprise systems are held to SOC 2, GDPR, HIPAA. What is the standard for a 10-year-old's AI tutor?
Agency and autonomy.
Does the system give the child meaningful choices, or does it make choices for them? A recommendation engine that narrows a teenager's worldview to a single feed is doing something fundamentally different from one that introduces them to ideas they wouldn't have found on their own. Agency — the ability to make informed decisions about one's own attention and learning — is a skill that must be developed. AI systems can support that development or undermine it.
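To make "structured, repeatable set of indicators" concrete, here is a minimal sketch of what a scorecard over these five dimensions could look like in code. It is illustrative only: the 0-to-5 scale, the unweighted average, and the field names are placeholder assumptions for the sketch, not the validated scoring methodology.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the dimension names mirror the five dimensions
# described above, but the 0-5 scale and field names are placeholder
# assumptions, not a published schema.

DIMENSIONS = (
    "cognitive_development",
    "emotional_well_being",
    "social_development",
    "safety_and_privacy",
    "agency_and_autonomy",
)

@dataclass
class ImpactScorecard:
    """One evaluated AI product, scored 0 (harmful) to 5 (supportive) per dimension."""
    product: str
    scores: dict[str, float] = field(default_factory=dict)

    def record(self, dimension: str, score: float) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        if not 0.0 <= score <= 5.0:
            raise ValueError("Scores must fall between 0 and 5")
        self.scores[dimension] = score

    def is_complete(self) -> bool:
        # A scorecard is only meaningful once every dimension has been rated.
        return all(d in self.scores for d in DIMENSIONS)

    def overall(self) -> float:
        # Unweighted mean as a placeholder; a real index would likely weight
        # dimensions differently by age group and product category.
        if not self.is_complete():
            raise ValueError("Scorecard is incomplete")
        return sum(self.scores.values()) / len(DIMENSIONS)
```

The unweighted mean is the simplest possible aggregate. A production index would almost certainly weight dimensions by age group and product category, which is exactly the kind of question the validation work described below is meant to settle.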
Why this matters now
There is a generation of children — roughly ages 5 through 17 right now — for whom AI is not a future technology. It is the present. They are doing their homework with generative AI. They are watching algorithmically curated video from the moment they wake up. They are having conversations with chatbots. They are being assessed by adaptive learning platforms that decide what they learn next.
And we have no systematic way of knowing whether any of this is helping them or hurting them.
We are, effectively, running the largest uncontrolled experiment in the history of childhood development. And we're not even collecting the data.
A thought experiment
If a pharmaceutical company released a drug for children without testing its developmental effects, there would be congressional hearings, FDA interventions, and criminal liability. We hold food, toys, and car seats to higher safety standards than we hold the AI systems that shape how our children think, learn, and see themselves.
What parents should know
I'm not asking parents to become AI researchers. But I am asking them to ask better questions. When your child's school adopts an AI-powered learning platform, ask what data it collects. Ask whether it adapts to your child's learning style or just optimizes for test scores. Ask whether there is a human in the loop.
When your teenager tells you they're using ChatGPT for their essay, don't panic — and don't ignore it. Ask them what role the AI played. Did it help them organize their thoughts, or did it do the thinking for them? There is a meaningful difference between using AI as a thinking partner and using it as a shortcut. Children need to learn that difference, and they need adults who understand it well enough to teach it.
The AI Impact Index is designed to give parents a vocabulary for these conversations. It's not about banning AI from children's lives — that ship has sailed, and it wouldn't be the right answer even if it hadn't. It's about being intentional. It's about demanding the same standards of care for our children's AI experiences that we demand for their food, their education, and their physical safety.
What educators and policymakers should know
For educators: AI is in your classroom whether you adopted it or not. Your students are using it. The question is whether you will shape how they use it or let the platforms decide for you. The AI Impact Index gives educators a framework for evaluating which AI tools belong in a learning environment and which do not — not based on marketing materials, but on measurable impact on the five dimensions that matter for child development.
For policymakers: we need regulation that is specific, measurable, and enforceable. "AI ethics principles" are not enough. We need standards — the kind of standards we apply to every other product that touches children's lives. The AI Impact Index is intended to provide the measurement backbone on which that policy can be built. You can't regulate what you can't measure.
Where this goes next
The AI Impact Foundation — the organization I founded to build and disseminate this work — is developing the first set of public scorecards for commonly used AI systems in education and social media. We are partnering with developmental psychologists, pediatricians, educators, and technologists to validate the framework. And we are building open-source tools that schools can use to evaluate AI products before they adopt them.
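As a purely hypothetical illustration of what such a tool might look like in use, here is the scorecard sketch from earlier applied to an invented product. The product name and the individual scores are made up for the example.

```python
# Continues the ImpactScorecard sketch above. The product and the
# ratings are invented for illustration; they describe no real tool.
card = ImpactScorecard(product="ExampleMathTutor")
card.record("cognitive_development", 4.0)   # teaches method, not just answers
card.record("emotional_well_being", 3.5)    # low-pressure feedback
card.record("social_development", 2.5)      # mostly solo use
card.record("safety_and_privacy", 3.0)      # data retention policy unclear
card.record("agency_and_autonomy", 4.0)     # child chooses topics and pace

print(f"{card.product}: overall {card.overall():.1f} / 5")
# ExampleMathTutor: overall 3.4 / 5
```

Even a toy example like this shows why structure matters: the disagreement moves from "is this app good?" to "why did privacy score a 3?", which is a conversation that can actually be settled with evidence.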
This is not a theoretical exercise. This is the most urgent product problem I know of: building the measurement infrastructure for the most consequential technology our children will ever encounter.
I turn research into products and products into businesses. This time, the product is a standard — and the customer is every parent, teacher, and policymaker who refuses to fly blind on the most important question in AI: what is it doing to our kids?