OECD AILit — Engaging with AI, Competency 3
Can We Trust AI?
AI Isn't Perfect
AI can be impressively capable, but it also makes mistakes, hallucinates facts, and can be used to create convincing fake content. Understanding AI's limitations helps you become a smarter user.
Spot the AI-Generated Content
In each round, you'll see two descriptions: one is AI-generated and one is real. Can you tell which is fake?
When to Trust AI
Trust more when: the task is well-defined, there's lots of good data, and you can verify the output.
Trust less when: the stakes are high (medical, legal), the AI is making claims about facts you can't verify, or the content seems too perfect.
Always verify: AI-generated text can sound confident while being completely wrong. This is called “hallucination.”
Critical Thinking is Your Superpower
The best approach is to treat AI as a helpful tool, not an authority. Always cross-check important claims against reliable, independent sources.
Check Your Understanding
1. What is an AI “hallucination”?
2. When should you be most careful with AI outputs?
3. What is deepfake technology?
4. What is the best way to use AI responsibly?
Answer all questions. You need 70% to pass.