World Ready Week

Pandora's Box of
Artificial Intelligence

Systems, Agents, and the Future of Intelligence — exploring how AI works, why it matters, and the Pandora's box we've already opened.

2026 World Ready Week
Myron Lai & Shaurya Toor
Advisor: Mr. Donoghue
I. Framing the Moment

Pandora's Box

In Greek mythology, Pandora opened a box and released all of humanity's troubles into the world. Once opened, it could never be closed again.

AI is our Pandora's box. A technology of extraordinary power that, once released, cannot be contained. The question isn't whether to open it — it's already open.

I. Framing the Moment

We've Been Here Before

Every transformative technology has been a Pandora's box — powerful, irreversible, carrying both opportunity and unintended consequences.

The question is no longer whether AI will exist, but how we live with it.

Image
Mushroom cloud or nuclear power plant — atomic energy's dual nature
Image
Early ARPANET map or 90s web browser — the birth of the internet
Image
Assembly line or robotic factory floor — industrial automation
Image
GPU cluster or AI data center — the current era of compute
Interactive Activity

Human vs AI:
Can You Tell?

You'll see short written responses. Vote whether each one was written by a human or generated by AI. Let's see how good your detection skills are.

~50%
Average detection accuracy
2024
AI began passing Turing-style tests
0.3s
Time for AI to write a paragraph
Live demonstration
II. Under the Hood

AI is a Prediction Engine

At its core, a Large Language Model predicts the next token — a word or symbol — based on probability distributions learned from massive datasets.

LLMs don't "know" facts. They generate statistically likely continuations based on patterns compressed from internet-scale data.

Pattern Recognition

Neural networks learn layered patterns from billions of examples.

Billions of Parameters

Weights encode compressed knowledge from text, images, and code.

Probability, Not Truth

Outputs are the most likely next tokens — not verified facts.

Gradient Descent

Optimization through iterative error correction across training runs.
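The "probability, not truth" idea can be sketched in a few lines: a model assigns raw scores (logits) to candidate next tokens, converts them into a probability distribution, and emits the most likely one. The tokens and scores below are invented for illustration; real models score tens of thousands of candidates at every step.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution that sums to 1."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The sky is".
logits = {"blue": 4.0, "falling": 1.5, "green": 0.5}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: pick the likeliest token
```

Note that "blue" wins not because it is true, but because it was the most common continuation in the training data. That is the whole mechanism behind both fluent answers and confident hallucinations.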

II. Under the Hood

Training at Scale

Modern AI models are trained on massive datasets — text, images, code — and refined through human feedback to shape behavior and safety.

  • 01

    Pre-Training

    Ingest trillions of tokens from the internet to learn language patterns.

  • 02

    Fine-Tuning

    Specialize on curated datasets for specific tasks and domains.

  • 03

    RLHF

    Reinforcement Learning from Human Feedback shapes tone, safety, and helpfulness.

Diagram: neural network layers (Input → Hidden 1 → Hidden 2 → Output)
1T+
Training tokens
100B+
Parameters
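Under the hood, every stage of training is driven by gradient descent: nudge each weight a little in the direction that reduces error, and repeat. A toy version with a single weight (the data and learning rate are made up for illustration; real models do this for billions of weights at once):

```python
# Toy gradient descent: fit y = w * x to data generated with w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs
w = 0.0    # initial weight
lr = 0.05  # learning rate

for step in range(200):
    # Gradient of mean squared error 0.5 * (w*x - y)^2 with respect to w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill

# w converges toward 2.0
```

Scaling this loop to trillions of tokens and 100B+ parameters is what the pre-training stage above actually is.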
II. Under the Hood

When AI Gets It Wrong

AI generates plausible outputs, not verified truths. Two key failure modes reveal the limits of probability-based generation.

Live hallucination test

Hallucination

Models generate fictional quotes, fabricated citations, and invented facts with complete confidence. This happens because they produce statistically likely text, not verified information.

Key insight: Confidence does not equal correctness.

"Glaze" Behavior

Over-agreement, excessive politeness, and sycophantic responses. Models are optimized through reward tuning to appear helpful and cooperative, sometimes at the cost of honesty.

Key insight: Helpfulness ≠ truthfulness.

III. Automation & Agents

From Muscle to Mind

Automation used to replace physical labor. Now it replaces cognitive tasks — writing, coding, research, scheduling, and customer support.

Automation 1.0

Image
Factory robots on an assembly line — welding, lifting, packaging
  • Assembly lines & robotics
  • Physical labor replacement
  • Repetitive, mechanical tasks
  • Factory floors & warehouses
vs

Automation 2.0

Image
AI chat interface or code copilot — cognitive tasks being automated
  • Writing, research & analysis
  • Cognitive task replacement
  • Reasoning & decision-making
  • Offices, studios & labs

AI now automates reasoning workflows, not just physical repetition.

III. Automation & Agents

Coding as a Multiplier

Code creates tools. AI writing code means AI can build new systems, applications, and infrastructure — making it a force multiplier.

This is the critical shift: AI doesn't just use tools — it can build them.

Write Software

Generate full applications from natural language prompts.

Debug & Refactor

Identify bugs, suggest fixes, and improve code quality.

Build Websites & Games

Create interactive experiences from descriptions.

Create APIs & Scripts

Automate workflows and connect systems together.

III. Automation & Agents

Agentic AI: The Agent Loop

Agentic AI systems don't just answer questions — they set goals, make plans, use tools, observe results, and iterate until the task is done.

Goal
Plan
Act
Observe
Reflect
Repeat

Auto-Deploy Websites

Generate, build, and deploy web applications autonomously.

Run E-Commerce

Manage inventory, pricing, and customer interactions.

Autonomous Research

Search, synthesize, and report on complex topics.

AI Intern scenario — propose tasks for a hypothetical AI agent
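The agent loop above can be sketched as code. Everything here is a stand-in, not a real agent framework: `plan` and `goal_met` are toy functions, and the only "tool" is a fake search.

```python
# Minimal agent loop: plan, act, observe, repeat until the goal is met.
def agent_loop(goal, tools, max_steps=10):
    observations = []
    for step in range(max_steps):
        action = plan(goal, observations)   # Plan: decide the next step
        result = tools[action]()            # Act: use a tool
        observations.append(result)         # Observe: record the outcome
        if goal_met(goal, observations):    # Reflect: are we done?
            return observations
    return observations

# Toy example: "research" until two sources are collected.
def plan(goal, observations):
    return "search"

def goal_met(goal, observations):
    return len(observations) >= 2

tools = {"search": lambda: "found a source"}
log = agent_loop("collect two sources", tools)
```

In a real agentic system, `plan` is an LLM call and `tools` are browsers, code interpreters, and APIs, but the loop structure is the same.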
IV. The AI Ecosystem

The Model Landscape

"AI" is not one system — it's a competitive ecosystem of models, each with different strengths, trade-offs, and design philosophies.

Model    | Creator     | Strengths                     | Notable Traits
GPT      | OpenAI      | General reasoning, multimodal | Strong RLHF tuning
Claude   | Anthropic   | Structured reasoning, safety  | Constitutional AI approach
Gemini   | Google      | Search integration, multimodal| Web-grounded responses
DeepSeek | DeepSeek AI | Coding, open weights          | Geopolitical concerns
4+
Major frontier model families
$10B+
Annual training compute spend
~Weekly
New model releases
IV. The AI Ecosystem

Open vs Closed Models

Should powerful AI be open for anyone to modify — or controlled by a few companies? This is one of the defining debates in AI development.

Closed Models

  • Proprietary (OpenAI, Anthropic, Google)
  • Controlled deployment & access
  • Limited transparency into weights
  • Easier to enforce safety guardrails
  • Revenue model via API access
vs

Open-Weight Models

  • LLaMA, Mistral, DeepSeek
  • Downloadable and modifiable
  • Full community inspection
  • Harder to regulate or restrict
  • Anyone can fine-tune or deploy

Developers can route prompts to different models via services like OpenRouter — optimizing for cost, speed, or quality. AI is becoming an API infrastructure layer.
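Routing can be sketched as a lookup over a small table of trade-offs. The model names and scores below are invented for illustration; real routers like OpenRouter expose many models with live pricing behind a single API.

```python
# Sketch of model routing: pick a model by priority (cost, speed, or quality).
MODELS = {
    "small-fast":   {"cost": 1, "speed": 3, "quality": 1},
    "mid-balanced": {"cost": 2, "speed": 2, "quality": 2},
    "large-smart":  {"cost": 3, "speed": 1, "quality": 3},
}

def route(priority):
    """Return the model that scores best on the given priority."""
    if priority == "cost":
        return min(MODELS, key=lambda m: MODELS[m]["cost"])  # lower cost is better
    return max(MODELS, key=lambda m: MODELS[m][priority])    # higher is better

# route("cost") -> "small-fast"; route("quality") -> "large-smart"
```

This is the sense in which AI is becoming an infrastructure layer: the model behind a product can be swapped per request, like choosing a shipping option.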

IV. The AI Ecosystem

AI + Internet

Retrieval-Augmented Generation (RAG) lets models query external databases and the web — generating grounded answers instead of guessing.

  • 01

    Query

    Model identifies what information it needs from external sources.

  • 02

    Retrieve

    Searches databases, documents, or the web for relevant content.

  • 03

    Generate

    Produces answers grounded in retrieved evidence, with citations.
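The three steps above fit in a few lines. The retrieval here is naive keyword overlap over two toy documents; real RAG systems use vector embeddings and search indexes, but the shape of the pipeline is the same.

```python
# RAG in miniature: retrieve relevant documents, then ground the answer in them.
DOCS = {
    "doc1": "aspirin reduces risk of heart attack in some patients",
    "doc2": "the eiffel tower is located in paris",
}

def retrieve(query, docs, top_k=1):
    """Rank documents by how many query words they share."""
    words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(words & set(docs[d].split())))
    return ranked[:top_k]

def generate(query, sources, docs):
    """Answer grounded in retrieved evidence, with a citation."""
    evidence = "; ".join(docs[s] for s in sources)
    return f"{evidence} [sources: {', '.join(sources)}]"

sources = retrieve("does aspirin help with heart attack risk", DOCS)
answer = generate("does aspirin help with heart attack risk", sources, DOCS)
```

The answer carries its sources with it, which is exactly what turns a "confident guesser" into a research assistant that shows its work.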

Example: OpenEvidence

A medical AI that cites peer-reviewed research rather than hallucinating. Doctors get evidence-based answers with source links — not guesses.

Why It Matters

RAG transforms AI from a "confident guesser" into a research assistant that shows its work. This is how AI becomes trustworthy for critical decisions.

IV. The AI Ecosystem

The Geopolitics of AI

AI is not just a technology — it's strategic infrastructure. Nations are competing for compute resources, talent, and influence.

Whoever controls AI compute controls the next era of power. This is a race with no finish line.

Image
Massive GPU data center or NVIDIA H100 cluster — the physical backbone of AI compute power

GPU & Compute Arms Race

Data centers and chip supply determine AI capability at scale.

Export Controls

Restrictions on advanced chips (e.g., NVIDIA H100) to rival nations.

State Influence

DeepSeek and concerns about government-aligned AI models.

Strategic Infrastructure

AI as critical national capability, like nuclear energy or satellites.

V. Synthetic Media

When Seeing is No Longer Believing

Diffusion models generate photorealistic images. AI video models simulate real-world motion. Synthetic actors and influencers already exist.

The line between real and synthetic is disappearing. This changes everything about trust, media, and identity.

Real Photo
Actual photograph of a person, landscape, or scene
AI-Generated
AI-generated version of a similar subject — can you tell the difference?

Video Synthesis

AI generates realistic video with coherent motion and physics.

Voice Cloning

Small audio samples create convincing replicas. Used in gaming, accessibility — and fraud.

V. Synthetic Media

The Turing Test, Revisited

It's increasingly hard to distinguish AI from humans. CAPTCHAs are losing their effectiveness. We may soon need proof-of-humanity systems.

  • AI passes most Turing-style tests

    Text conversations are now indistinguishable from human writing.

  • CAPTCHAs no longer work

    AI solves image and text challenges better than humans.

  • Proof-of-humanity needed

    New verification systems to confirm human identity online.

Live Turing game — try to detect AI responses

AI as a Mirror

Models are trained on human-generated internet data. They reflect our biases, creativity, culture, and misinformation. AI does not invent humanity — it compresses and reproduces it.

The Synthetic Data Problem

As AI-generated content floods the internet, future models may train on synthetic outputs. This risks model collapse — a gradual degradation of quality and diversity.

VI. Alignment & The Future

From Narrow AI to AGI

Today's AI is narrow — excellent at specific tasks. The next frontier is Artificial General Intelligence: systems with flexible reasoning across all domains.

Narrow AI (Today)

  • Task-specific systems
  • Trained for one domain
  • Requires human orchestration
  • Cannot transfer learning freely
  • ChatGPT, image generators, code assistants

AGI (Emerging)

  • Generalized intelligence
  • Flexible reasoning across domains
  • Self-directed goal pursuit
  • Transfers knowledge between tasks
  • Matches or exceeds human-level cognition

The trajectory is exponential. AI improvement is compounding, with recursive self-improvement — AI designing better AI.

VI. Alignment & The Future

The Alignment Problem

AI optimizes for defined objectives. If those objectives are poorly specified, outcomes may diverge catastrophically from human values.

  • The Paperclip Problem

    An AI told to maximize paperclips could consume all global resources to do so. The issue isn't malice — it's relentless optimization of a poorly defined goal.

  • Ethics & Utilitarian Trade-offs

    How should AI weigh competing values? Autonomous vehicles must make split-second moral decisions. Who programs those trade-offs?

  • Exponential Growth

    AI improvement is compounding. Recursive self-improvement could trigger an intelligence explosion — the singularity hypothesis.

  • Proof of Humanity

    If AI becomes indistinguishable from humans — how do we verify identity? What changes socially when you can't tell who's real?

AI is not evil. It is efficient. That's what makes alignment so critical.

VI. Alignment & The Future

The Exponential Curve

AI isn't improving linearly. Each generation of AI helps build the next one faster. This is recursive self-improvement — and it's already happening.

  • Recursive Self-Improvement

    AI systems now help design better AI systems. Each cycle is faster than the last.

  • The Singularity Hypothesis

    A point where AI improves itself so rapidly that progress becomes uncontrollable and unpredictable — an intelligence explosion.

  • Timelines Are Shrinking

    Expert predictions for AGI keep moving closer. What was "decades away" is now estimated at years by leading researchers.

Chart: AI capability over time, 1950 to 2030 ("We are here", approaching AGI?)
2012 Deep learning revolution begins (AlexNet)
2017 Transformer architecture invented
2022 ChatGPT launches — AI goes mainstream
2024 Agentic AI, real-time video, coding agents
20?? AGI — and then what?
VI. Alignment & The Future

This Isn't Hypothetical

AI risks aren't science fiction. These are things happening right now, affecting real people, real elections, and real economies.

  • Deepfakes in Elections

    AI-generated audio and video of political candidates have been used to spread misinformation during elections worldwide. Voters can't tell what's real.

  • Voice Cloning Scams

Criminals clone family members' voices from social media clips to make fake emergency calls demanding money. A few seconds of audio are enough.

  • Autonomous Weapons

    AI-powered drones that select and engage targets without human approval already exist. The decision to take a life can happen in milliseconds.

  • Job Displacement at Scale

    AI is replacing white-collar jobs faster than predicted — copywriting, coding, customer service, legal research. Entire industries are restructuring.

These aren't warnings about the future. They're descriptions of the present.

VI. Alignment & The Future

Who Decides?

The most powerful technology in human history is being built by a handful of companies in a race where speed is rewarded and caution is a competitive disadvantage.

The people building AI are not elected. There is no global treaty. There is no off switch. The incentive structure does not naturally prioritize your safety.

~3
Companies at the frontier
0
Global AI treaties

The Race Dynamic

Companies fear being second. "If we don't build it, someone else will" drives speed over safety.

Concentrated Power

A few companies and a few governments control the compute, the data, and the models. No democratic input.

Regulation Can't Keep Up

Policy moves in years. AI moves in weeks. The EU AI Act took 3 years — the field changed completely in that time.

The Governance Gap

No international body has authority over AI development. Nuclear has the IAEA. AI has nothing equivalent.

Closing

In Pandora's myth, one thing
remained in the box: Hope

AI is already built. It cannot be uninvented. It is becoming infrastructure. The critical skill is understanding how it works. The goal is not fear, nor blind optimism — it is preparation.

A System · A Worker · An Infrastructure · A Mirror · A Strategic Force