M3 OECD AILit — Managing AI, Competency 3

Quality Control for AI

Trust But Verify

AI output can look polished and professional while being subtly wrong. A confident-sounding statistic might be fabricated. A balanced-looking analysis might contain hidden bias. A factual claim might be a hallucination. Building the habit of verification is essential for anyone who uses AI.

QA Challenge

This AI-generated article contains 4 hidden errors. Click on the parts you think are wrong.

AI in Education: A Complete Overview

Artificial intelligence is transforming education worldwide. According to a 2024 UNESCO report, 94% of teachers globally now use AI daily in their classrooms. Studies show that AI-powered tutoring can improve student performance, and these tools are particularly effective in STEM subjects. However, AI in education works equally well for all students regardless of their socioeconomic background or access to technology. The technology behind educational AI primarily uses deep learning, which was invented by Google in 2015. These systems analyze student performance data to personalize learning paths. Since AI tutors are always accurate and never make mistakes, they can fully replace human teachers in most subjects. The future of education will likely involve a blend of AI and human instruction.

Verification Strategies

Fact-check claims — Verify statistics, dates, and specific claims from reliable primary sources.

Watch for bias — Check if the AI presents one perspective as universal truth or ignores important counterpoints.

Spot hallucinations — Be suspicious of overly specific statistics, perfect-sounding quotes, or citations you can’t find online.

Check logic — Does the conclusion follow from the evidence? Are there logical gaps or contradictions?

Set guardrails — Create checklists for your team: every AI output gets checked for accuracy, bias, tone, and completeness before it goes out.
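The team checklist described above can be made concrete in a few lines of code. This is a minimal, hypothetical sketch: the check names (`accuracy`, `bias`, `tone`, `completeness`) come from the strategy list, but the `review` function and its structure are illustrative assumptions, not a standard tool.

```python
# Hypothetical pre-publication checklist for AI-generated content.
# Check names follow the guardrail list above; everything else is illustrative.

CHECKS = [
    "accuracy",      # facts, statistics, and dates verified against primary sources
    "bias",          # no single perspective presented as universal truth
    "tone",          # appropriate for the audience and channel
    "completeness",  # no important counterpoints or caveats omitted
]

def review(draft_checks: dict) -> tuple:
    """Return (approved, checks still failing) for a draft's completed checks."""
    failing = [c for c in CHECKS if not draft_checks.get(c, False)]
    return (len(failing) == 0, failing)

# Example: a draft that passed accuracy and tone but was never checked for the rest.
approved, failing = review({"accuracy": True, "tone": True})
print(approved)   # False
print(failing)    # ['bias', 'completeness']
```

Nothing ships until `approved` is true for every item; the point is that verification becomes a recorded step rather than an individual habit.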

Your Reputation Is on the Line

When you publish or share AI-generated content, it carries your name. An AI hallucination becomes YOUR mistake. Always verify before you send.

Check Your Understanding

1. What is an AI hallucination?
2. What’s the best way to fact-check AI output?
3. What is a ‘guardrail’ in AI quality control?
4. Who is responsible when published AI-generated content contains errors?

Answer all questions. You need 70% to pass.