What You'll Learn
Speed with Responsibility
Fast ≠ Good
AI generates code in seconds, but verification takes you minutes.
I asked AI to generate quotes without any requirements. It produced generic motivational quotes in 3 seconds. None of them matched Magic Quotes' schema or theme. I spent 15 minutes debugging. When I provided detailed requirements upfront (schema fields, category, tone, theme), AI generated exactly what I needed in 5 seconds—and validation passed immediately.
Detailed Prompts = Quality
Specific > Vague
Vague prompts produce vague results. Detailed prompts produce professional code.
"Generate quotes" gave me useless output. "Generate 5 computational-thinking quotes, category 'developer-wisdom', tone 'reflective', 10-20 words each, following schema with fields id/category/tone/text/source" gave me production-ready JSON. Prof. Teeters taught me: the quality of AI output depends entirely on the quality of your prompt.
You Own Every Line
AI Assists, You Decide
Whether you typed it or AI generated it, you're responsible for correctness.
I copied AI output without understanding it. When Prof. Teeters asked why I chose those specific quotes, I couldn't explain. She reminded me: "Speed without understanding is technical debt. If you can't explain what the code does, you don't own it yet." Now I verify, test, and understand before committing anything.
HAP's Confession: I Shipped AI Code Without Verification
I was so excited about AI's speed that I asked my AI assistant to generate my quote-hap.json file, copied the entire output, and immediately committed it to Git. I thought: "This looks right! AI must know what it's doing!"
When I ran npm run validate, everything failed. AI had invented category names that didn't exist in the schema ("inspiration" instead of "developer-wisdom"). It used duplicate IDs. The quotes were generic motivational sayings, not computational thinking wisdom. I had to delete everything and start over.
Prof. Teeters' lesson stuck with me: "AI doesn't know your project's rules unless you explicitly provide them. Always verify. Always understand. You own the code—the responsibility doesn't transfer to the AI." Now I treat AI output like a first draft, not the final answer.
What AI Can and Can't Do for Computational Thinking
When I first discovered AI coding assistants, I thought they were magic—tools that could write perfect code instantly. Whether you use GitHub Copilot in VS Code, ChatGPT, Claude, or another AI assistant, the principles are the same. Prof. Teeters helped me understand what AI actually excels at and where it struggles. Understanding these boundaries is critical for responsible use.
AI Excels At
- Pattern recognition: AI has seen millions of JSON files and recognizes correct structure instantly
- Generating variations: Create multiple options quickly (5 quotes with different tones, all matching a template)
- Following explicit rules: When you provide schema requirements clearly, AI follows them precisely
- Handling repetition: AI never gets bored creating similar entries with consistent formatting
- Syntax correctness: AI produces valid JSON, proper punctuation, correctly escaped strings
- Speed: What takes you 20 minutes of typing takes AI 5 seconds to generate
AI Struggles With
- Understanding context: AI doesn't know what Magic Quotes is or why your project exists
- Project-specific rules: AI can't read your schema file unless you paste it into the prompt
- Judgment calls: AI can't decide which quotes are "good" for computational thinking vs. generic advice
- Validation: AI generates code but doesn't run `npm run validate` to test it
- Learning intent: AI optimizes for speed, not for helping YOU understand what it generated
- Originality: AI combines patterns it's seen before; it doesn't think creatively about new problems
HAP's Takeaway: AI as Pattern Matcher, Not Thinker
AI was excellent at generating valid JSON syntax once I showed it the schema. It never made typos, always closed brackets correctly, and formatted everything beautifully. That's pattern recognition—AI has seen millions of JSON files and knows the structure cold.
But AI failed completely at understanding WHICH quotes mattered for computational thinking. It gave me motivational sayings like "Believe in yourself!" because it had seen similar quotes in other datasets. It didn't understand the difference between generic inspiration and developer wisdom about problem-solving.
That's when I realized: AI is a pattern matcher, not a thinker. It excels at structure and syntax. It struggles with meaning and judgment. My job is to provide the judgment (what makes a good computational thinking quote), and AI handles the structure (correct JSON format). Together, we're faster—but only if I stay in control.
Generating Developer Wisdom Quotes with AI
My first task was creating quotes for quote-hap.json. I knew Magic Quotes needed developer wisdom about computational thinking, but I didn't understand yet how to communicate that effectively to AI. My early prompts were disasters because I assumed AI understood context it didn't have.
Before I Learned to Write Good Prompts
I typed a vague, one-sentence request into GitHub Copilot. I thought being brief was efficient. I was wrong—vague prompts waste time because you have to regenerate everything.
❌ Vague Prompt Produces Unusable Code
My vague prompt:
Generate some developer quotes for my JSON file
What AI generated:
{
"entries": [
{
"id": "quote-1",
"text": "Code is poetry written in logic.",
"category": "inspiration"
},
{
"id": "quote-2",
"text": "Dream big, code bigger!",
"category": "motivation"
}
]
}
Problems with this code:
- Invalid category names — "inspiration" and "motivation" don't exist in Magic Quotes schema (should be "developer-wisdom")
- Missing required fields — No `author`, `about`, `tone`, or `source` fields that the schema requires
- Generic, off-theme quotes — These are motivational sayings, not computational thinking wisdom
- Wrong structure — Missing top-level fields, just started with `entries` array
The Prompt I Learned to Write
Prof. Teeters showed me how to provide explicit requirements. Instead of assuming AI knows your project, TELL it every constraint. The prompt became longer, but the output became perfect on the first try.
Generate a JSON file for Magic Quotes with 3 developer wisdom quotes about computational thinking.
Required schema structure:
{
"author": "HAP",
"about": "Developer wisdom about computational thinking from HAP's learning journey",
"entries": [
{
"id": "string (must be unique, format: hap-001, hap-002, etc)",
"category": "developer-wisdom" (exact string, no other values allowed),
"tone": "reflective" or "encouraging" (only these two options),
"text": "string (10-20 words, focused on computational thinking concepts)",
"source": "ai-assisted"
}
]
}
Requirements for quotes:
- Focus on: breaking down problems, thinking logically, learning from mistakes
- Plain language, no jargon
- Actionable and specific, not vague motivation
- Each quote should teach something about HOW to think computationally
What AI Generates with a Good Prompt
{
"author": "HAP",
"about": "Developer wisdom about computational thinking from HAP's learning journey",
"entries": [
{
"id": "hap-001",
"category": "developer-wisdom",
"tone": "reflective",
"text": "Before writing code, describe the behavior. Clear thinking prevents unclear code.",
"source": "ai-assisted"
},
{
"id": "hap-002",
"category": "developer-wisdom",
"tone": "encouraging",
"text": "Break problems into smaller steps. Each small solution builds toward the big one.",
"source": "ai-assisted"
},
{
"id": "hap-003",
"category": "developer-wisdom",
"tone": "reflective",
"text": "Mistakes aren't failures. They're data showing you what doesn't work yet.",
"source": "ai-assisted"
}
]
}
Why this code is better: Valid schema, correct field names, appropriate tone, on-theme content focused on computational thinking. This passed `npm run validate` immediately with no errors.
Refining Quotes Through Iteration
After generating my first set of quotes, I realized they were technically correct (schema-valid) but not quite right thematically. I needed quotes that specifically referenced Stations 1-5 concepts. This required a more sophisticated prompt with contextual requirements.
The Task: Context-Aware Quote Generation
I wanted quotes that directly referenced the learning journey: thinking logically (Station 1), pseudocode (Station 2), debugging (Station 3), naming patterns (Station 4), and Git workflows (Station 5). Generic computational thinking quotes weren't enough—I needed specificity.
My Comprehensive Prompt
Generate 5 more developer wisdom quotes for Magic Quotes. Each quote should reference a specific station concept:
Station 1 - Thinking Logically: Describing behavior before code
Station 2 - Communicating Logic: Pseudocode and flowcharts
Station 3 - Reasoning and Correcting: Debugging and tracing
Station 4 - Computational Thinking: Naming patterns and abstraction
Station 5 - Git Workflow: Forking, cloning, validation
Requirements (same as before):
- Schema: id (hap-004 through hap-008), category "developer-wisdom", tone "reflective" or "encouraging", source "ai-assisted"
- Length: 10-20 words per quote
- Style: Specific lessons learned, not generic advice
- Focus: What HAP learned at each station
Make each quote sound like HAP reflecting on what Prof. Teeters taught.
AI-Generated Context-Aware Quotes
{
"id": "hap-004",
"category": "developer-wisdom",
"tone": "reflective",
"text": "Prof. Teeters taught me: if you can't describe it in plain words, code won't fix it.",
"source": "ai-assisted"
},
{
"id": "hap-005",
"category": "developer-wisdom",
"tone": "encouraging",
"text": "Pseudocode isn't wasted time. It's thinking made visible before fingers touch keys.",
"source": "ai-assisted"
},
{
"id": "hap-006",
"category": "developer-wisdom",
"tone": "reflective",
"text": "Tracing code line-by-line feels slow, but it finds bugs faster than guessing ever will.",
"source": "ai-assisted"
},
{
"id": "hap-007",
"category": "developer-wisdom",
"tone": "encouraging",
"text": "Good names make code readable. If you struggle to name it, you don't understand it yet.",
"source": "ai-assisted"
},
{
"id": "hap-008",
"category": "developer-wisdom",
"tone": "reflective",
"text": "Fork first, clone locally, validate before committing. Workflow prevents chaos.",
"source": "ai-assisted"
}
Why this is excellent: Each quote references specific station concepts. They sound like HAP's voice (mentioning Prof. Teeters, using first-person reflection). Schema-valid, thematically appropriate, and ready to commit.
Schema Compliance: AI's Biggest Blind Spot
This is where AI fails most consistently. AI doesn't know your project's rules unless you explicitly provide them in every prompt. I learned this the hard way when NONE of my early attempts passed npm run validate.
The Problem: AI Doesn't Read Your Schema File
Magic Quotes has a schema defined in data/schema.json that validates all quote files. But AI can't open that file and read it automatically. Unless I paste the schema rules into my prompt, AI invents its own structure—and it's always wrong.
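The actual contents of data/schema.json aren't reproduced in this chapter, but based on the field rules described in this station (required fields, allowed category/tone/source values, the yourname-### ID format), a simplified JSON Schema for it might look something like this. This is a sketch for illustration, not the project's real file:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["author", "about", "entries"],
  "properties": {
    "author": { "type": "string" },
    "about": { "type": "string" },
    "entries": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["id", "category", "tone", "text", "source"],
        "properties": {
          "id": { "type": "string", "pattern": "^[a-z]+-[0-9]{3}$" },
          "category": { "enum": ["developer-wisdom", "debugging", "learning"] },
          "tone": { "enum": ["reflective", "encouraging", "humorous"] },
          "text": { "type": "string" },
          "source": { "enum": ["ai-assisted", "original", "adapted"] }
        }
      }
    }
  }
}
```

Whatever the real file contains, the point stays the same: AI only follows these rules if you paste them into the prompt yourself.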
The Solution: Make Schema Explicit in Every Prompt
Create [task description] following this EXACT schema:
MANDATORY schema (copy-paste from schema.json):
{
"author": "string (your name or identifier)",
"about": "string (description of your contribution)",
"entries": [
{
"id": "string (unique, format: yourname-###)",
"category": "developer-wisdom" | "debugging" | "learning" (ONLY these values),
"tone": "reflective" | "encouraging" | "humorous" (ONLY these values),
"text": "string (the actual quote, 10-20 words)",
"source": "ai-assisted" | "original" | "adapted"
}
]
}
[Then add your content requirements...]
Before and After: Schema Compliance Comparison
❌ AI Code WITHOUT Schema Requirements
// I asked: "Generate quotes for Magic Quotes"
// AI invented its own structure:
{
"quotes": [
{
"id": 1,
"text": "Code is art",
"type": "inspiration"
}
]
}
// ❌ Wrong top-level key ("quotes" instead of "entries")
// ❌ Missing "author" and "about" fields
// ❌ ID is a number instead of a string
// ❌ "type" doesn't exist in schema (should be "category")
// ❌ "inspiration" isn't a valid category value This code fails validation. Running npm run validate produces errors for every field. The structure looks plausible, but doesn't match the project's requirements at all.
// I pasted the exact schema into my prompt
// AI followed it precisely:
{
"author": "HAP",
"about": "Computational thinking wisdom from HAP's learning journey",
"entries": [
{
"id": "hap-001",
"category": "developer-wisdom",
"tone": "reflective",
"text": "Clear thinking before coding prevents debugging later.",
"source": "ai-assisted"
}
]
}
// ✅ Correct top-level structure with "author", "about", "entries"
// ✅ ID is a string in the required format
// ✅ Valid category and tone from allowed values
// ✅ All required fields present
This code passes validation. `npm run validate` returns success because every field matches the schema exactly. Zero errors, ready to commit.
HAP's Schema Checklist: Non-Negotiable Rules
- Paste the schema into every prompt — Never assume AI remembers it from previous conversations
- Validate before committing — Run `npm run validate` EVERY time before Git operations
- Check allowed values — Schema defines specific options for category/tone; AI invents new ones if you don't list them
- Verify ID format — Each project has its own ID conventions (hap-###, not just numbers)
- Test with the actual validation tool — "It looks right" doesn't mean it IS right
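The checklist above can be sketched as code. The Node function below shows the kinds of checks a validator like `npm run validate` might perform: required top-level fields, string IDs in the right format, no duplicate IDs, and only allowed category/tone/source values. The function name and error messages are mine; the allowed values and ID format come from the schema rules quoted earlier. The project's real validator may differ.

```javascript
// validate-quotes.js — a simplified sketch, NOT the project's actual validator.
const ALLOWED_CATEGORIES = ["developer-wisdom", "debugging", "learning"];
const ALLOWED_TONES = ["reflective", "encouraging", "humorous"];
const ALLOWED_SOURCES = ["ai-assisted", "original", "adapted"];
const ID_PATTERN = /^[a-z]+-\d{3}$/; // e.g. "hap-001"

function validateQuoteFile(data) {
  const errors = [];

  // Top-level structure: "author", "about", "entries" must all exist
  for (const field of ["author", "about", "entries"]) {
    if (!(field in data)) errors.push(`missing top-level field: ${field}`);
  }

  const seenIds = new Set();
  for (const [i, entry] of (data.entries ?? []).entries()) {
    // IDs must be strings in the yourname-### format, and unique
    if (typeof entry.id !== "string" || !ID_PATTERN.test(entry.id)) {
      errors.push(`entry ${i}: id must be a string like "hap-001"`);
    }
    if (seenIds.has(entry.id)) errors.push(`entry ${i}: duplicate id "${entry.id}"`);
    seenIds.add(entry.id);

    // Only values the schema allows — AI-invented ones fail here
    if (!ALLOWED_CATEGORIES.includes(entry.category)) {
      errors.push(`entry ${i}: invalid category "${entry.category}"`);
    }
    if (!ALLOWED_TONES.includes(entry.tone)) {
      errors.push(`entry ${i}: invalid tone "${entry.tone}"`);
    }
    if (!ALLOWED_SOURCES.includes(entry.source)) {
      errors.push(`entry ${i}: invalid source "${entry.source}"`);
    }
  }
  return errors;
}
```

Run against the "without schema" example above, this flags the wrong top-level key, the numeric ID, and the invented "inspiration" category; run against the good example, it returns an empty error list.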
Ensuring Quote Quality with AI
Schema compliance makes code work, but quality makes code good. I learned that AI needs explicit guidance about WHAT makes a good computational thinking quote, not just HOW to format it correctly.
The Quality-Focused Prompt
Generate quotes for Magic Quotes with these quality requirements:
Quality constraints:
- Concise: 10-20 words (every word matters)
- Specific: Reference actual practices (e.g., "trace code line-by-line" not "debug carefully")
- Actionable: Something a learner can DO, not just feel inspired by
- Plain language: No jargon, accessible to beginners
- Focused: About computational thinking, not general programming
Schema: [paste schema here]
Content theme: Lessons from learning computational thinking (Stations 1-5)
Before and After: Quality Comparison
❌ Vague, Low-Quality Quotes
// Without quality constraints:
{
"text": "Always think before you code and make sure to plan ahead for success."
// ❌ Wordy filler: "make sure to plan ahead for success" adds words, not meaning
// ❌ Vague ("think before you code" - what does that mean?)
// ❌ Generic advice that could apply to anything
}
{
"text": "Use best practices."
// ❌ Too short (3 words)
// ❌ Completely vague - which practices?
// ❌ Not actionable
}
✅ Specific, High-Quality Quotes
// With quality constraints:
{
"text": "Before writing code, describe the behavior. Clear thinking prevents unclear code."
// ✅ 11 words (in range)
// ✅ Specific action: "describe the behavior"
// ✅ Explains WHY: prevents unclear code
}
{
"text": "Break problems into smaller steps. Each small solution builds toward the big one."
// ✅ 13 words
// ✅ Actionable: "break problems into smaller steps"
// ✅ Teaching computational thinking directly
}
Try It Yourself: Create Your Own JSON File
Now it's your turn. You've seen how I use AI to generate schema-compliant quotes. Let's walk through creating your own quote file for Magic Quotes using everything you've learned about responsible AI prompting.
Your Challenge: Create a file called quote-yourname.json in the data/ folder with developer wisdom quotes about your learning journey through Stations 1-5.
Requirements:
- Schema compliance: All required fields (author, about, entries with id/category/tone/text/source)
- Quote count: 3-5 quotes minimum
- Theme: Computational thinking lessons from Stations 1-5
- Quality: Specific, actionable, 10-20 words each
- Validation: Must pass `npm run validate`
Your Workflow (5 Steps):
- Craft your prompt with schema + quality constraints (use my examples as templates)
- Generate with AI (GitHub Copilot, ChatGPT, Claude—your choice)
- Review the output against schema requirements manually
- Run `npm run validate` to confirm schema compliance
- Run `npm run dev` to see your quotes appear on the site
HAP's Rules for Working with AI
After creating quote-hap.json with AI assistance, I've developed six non-negotiable rules. These aren't suggestions—they're requirements that Prof. Teeters validated. Follow them every time you use AI for coding tasks.
Always Specify Schema Requirements First
AI doesn't know your project's rules unless you provide them. Start EVERY prompt by pasting your schema or listing required fields. Don't assume AI remembers context from earlier. Schema compliance isn't optional—it's foundational. Without it, AI generates plausible-looking code that fails validation every time.
Detailed Prompts Get Professional Results
"Generate quotes" produces junk. "Generate 5 reflective developer-wisdom quotes, 10-20 words each, category 'developer-wisdom', following exact schema with fields id/category/tone/text/source, focused on computational thinking concepts from Stations 1-5" produces quality. The length of your prompt correlates directly with the quality of AI output.
Verify Everything Before Committing
Run npm run validate before Git operations. Test manually even if validation passes. Read the output—don't just trust that it looks right. If you can't explain what the code does or why it's correct, you don't own it yet. Verification catches AI hallucinations, schema violations, and logical errors that look syntactically perfect.
Understanding Beats Speed
AI can generate code faster than you can type, but speed without comprehension is technical debt. Read every line AI produces. Understand WHY it made each choice. Learn from the patterns. If Prof. Teeters asked you to explain this code, could you? If not, slow down and study it before accepting it.
Quality Requires Explicit Constraints
AI won't optimize quality unless you define it. Include quality constraints in prompts: "concise" (10-20 words), "specific" (reference actual practices), "actionable" (something learners can do), "plain language" (no jargon). Generic prompts produce generic output. Professional prompts produce professional code.
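The 10-20 word constraint is easy to check mechanically before committing. The helper below is a sketch; the function names are mine, not part of Magic Quotes' tooling:

```javascript
// Hypothetical helper — not part of the project's actual scripts.
// Counts words by splitting on whitespace runs.
function wordCount(text) {
  return text.trim().split(/\s+/).length;
}

// Returns null when the quote is in range, or a reason string when it isn't.
function checkQuoteLength(text, min = 10, max = 20) {
  const n = wordCount(text);
  if (n < min) return `too short (${n} words)`;
  if (n > max) return `too long (${n} words)`;
  return null; // in range
}
```

A three-word quote like "Use best practices." comes back as "too short (3 words)", while the 13-word "break problems into smaller steps" quote above passes.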
You Own Every Line of Code
Whether you typed it or AI generated it, you're responsible for quality, correctness, and schema compliance. Blame doesn't transfer to the AI when validation fails. Your name goes on the commit message. Own the code by verifying it, understanding it, and being willing to explain every decision it represents.
Using VS Code Source Control for Your First Commit
After creating quote-hap.json with AI and verifying it passed validation, I needed to commit my work to Git and push it to GitHub. Prof. Teeters showed me how to use VS Code's Source Control panel—no terminal commands needed.
Step 1: Open Source Control Panel
In VS Code's sidebar, I clicked the Source Control icon (looks like a branching tree). This panel shows all files that changed since my last commit. I saw data/quote-hap.json listed with a "U" (untracked file).
Step 2: Stage Your Changes
I clicked the + button next to quote-hap.json to stage it. Staging means "include this file in my next commit." The file moved from "Changes" to "Staged Changes"—ready to be committed.
Step 3: Write a Commit Message
At the top of the Source Control panel, there's a text box for commit messages. I typed:
Add HAP's computational thinking quotes
Created quote-hap.json with 8 developer wisdom quotes about
lessons from Stations 1-5. All quotes generated with AI assistance,
verified for schema compliance, and validated with npm run validate.
Good commit messages explain WHAT changed and WHY. Not just "added file" but what the file contains and how you created it.
Step 4: Commit Your Changes
I clicked the ✓ Commit button. VS Code saved my changes to Git's history locally. Now my commit exists on my computer, but GitHub doesn't know about it yet.
Step 5: Push to GitHub
At the bottom of the Source Control panel, I clicked Sync Changes (or the cloud upload icon). This pushed my local commit to my fork on GitHub. Now my contribution is visible online.
🟠 What I learned about Git workflow:
Stage → Commit → Push is the pattern. VS Code's GUI makes it visual: you SEE which files changed, you CHOOSE which to include, you DESCRIBE what you did, then you SYNC to GitHub. No memorizing terminal commands. The Source Control panel shows your workflow state at every step.
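For reference, the same Stage → Commit → Push flow as terminal commands. This demo runs in a throwaway repository so it's self-contained; in the real project you'd run the add/commit/push lines from inside your magic-quotes clone, and "origin"/"main" are typical defaults that may differ on your fork:

```shell
# Demo setup: a throwaway repo standing in for your magic-quotes clone
cd "$(mktemp -d)" && git init -q && mkdir data
git config user.name "HAP" && git config user.email "hap@example.com"
echo '{ "entries": [] }' > data/quote-hap.json

git add data/quote-hap.json                                # Stage  (the "+" button)
git commit -q -m "Add HAP's computational thinking quotes" # Commit (the checkmark button)
# git push origin main                                     # Push (Sync Changes); needs your fork set as "origin"

git log --oneline   # the new commit now appears in local history
```

The GUI and the terminal do exactly the same thing; the panel just shows you each state transition visually.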
Opening Your First Pull Request
After pushing quote-hap.json to my fork, I needed to contribute it back to the original Magic Quotes repository. That's what pull requests (PRs) do—they ask the project maintainer to "pull" your changes into the main project.
Step 1: Visit Your Fork on GitHub
I opened my browser and navigated to my fork: github.com/[my-username]/magic-quotes. GitHub showed a banner: "This branch is 1 commit ahead of cynthiateeters:main" with a Contribute button.
Step 2: Click "Open Pull Request"
I clicked Contribute → Open pull request. GitHub showed me a comparison: what changed between the original repository and my fork. I saw my new file: data/quote-hap.json.
Step 3: Write a PR Description
GitHub asked for a title and description. I wrote:
Add HAP's computational thinking quotes
## Summary
Added quote-hap.json with 8 developer wisdom quotes about computational thinking.
## Content
- 8 quotes covering lessons from Stations 1-5
- Category: developer-wisdom
- Tone: reflective and encouraging
- Source: ai-assisted (generated with GitHub Copilot, manually verified)
## Testing
- ✅ Passed `npm run validate`
- ✅ Quotes display correctly on `npm run dev`
- ✅ Schema compliance verified
- ✅ All quotes on-theme for computational thinking
Step 4: Submit the PR
I clicked Create pull request. GitHub created PR #[number] and notified the project maintainer. My contribution is now visible to the community, waiting for review.
🟠 HAP's First Contribution Moment
Seeing my PR appear on GitHub felt incredible. I'd gone from thinking computationally (Stations 1-4) to setting up a project (Station 5) to making a real contribution (Station 6). This wasn't a tutorial exercise—it was a live website that other people use. My quotes would appear on magic-quotes.netlify.app if the PR got merged.
Prof. Teeters told me: "This is how developers grow. You contribute small pieces to real projects. You learn from code review. You build confidence through practice." That's what Week 1 taught me—not just how to think, but how to participate.
When NOT to Use AI
AI is powerful for generating code, but it's not always the right tool. Understanding when to work manually is just as important as knowing when to use AI. Prof. Teeters taught me these three scenarios where manual work beats AI speed.
When You're Learning New Concepts
Manual work builds understanding. If this is your first time working with JSON schemas, Git workflows, or project structures, type the code yourself. AI generates correct output but skips the learning process. You learn by doing, by making mistakes, by debugging your own errors. Use AI after you understand the fundamentals, not before.
When the Task Requires Judgment
AI can't make quality decisions for you. Choosing WHICH computational thinking lessons matter most, WHICH quotes represent your learning journey, WHICH tone fits your voice—these are judgment calls. AI generates plausible options, but you decide what's good. Don't outsource the thinking that makes your work meaningful.
When Verification Takes Longer Than Writing
Sometimes manual is faster. For simple tasks (creating one quote, fixing a typo, adjusting a field), typing it yourself takes 30 seconds. Crafting a detailed AI prompt, reviewing output, and verifying correctness takes 2 minutes. Use AI for repetitive tasks (generating 10 similar quotes), not one-off edits.
🟠 Prof. Teeters' Warning About AI and Learning
"AI should accelerate what you already understand, not replace the learning process. If you use AI before you understand the problem, you're accepting solutions instead of developing judgment."
She told me this after I tried to use AI to skip the manual work—before I understood computational thinking. The AI-generated answers were correct, but I didn't learn anything. I had to go back and work through problems manually to build real understanding.
Learning Objectives Checklist
Congratulations on completing all six stations of HAP's Learning Lab! Before you finish, verify you've mastered the principles of responsible AI use for computational thinking:
Understanding AI Capabilities
- I understand what AI excels at (pattern recognition, syntax, speed) and where it struggles (context, judgment, validation)
- I know AI needs explicit requirements in every prompt—it doesn't remember context or read schema files automatically
- I can identify when manual work is better than AI assistance
Prompt Engineering Skills
- I can write detailed prompts with schema requirements, quality constraints, and thematic guidance
- I understand that vague prompts produce vague output; detailed prompts produce professional code
- I know how to iterate on prompts when initial output doesn't meet requirements
Verification Workflow
- I run `npm run validate` before every Git commit to verify schema compliance
- I manually review AI-generated code even when validation passes
- I understand that verification is my responsibility, not AI's
Professional Responsibility
- I own every line of code I commit, whether I wrote it or AI generated it
- I can explain what my code does and why it's correct
- I understand that understanding beats speed—learning matters more than shipping fast
Prof. Teeters on Completing All Six Stations
"HAP, you've completed a real learning journey. You started by thinking logically, learned to communicate your logic, debugged your reasoning, applied computational thinking, practiced professional workflows, and contributed to a real project—all while learning to use AI responsibly. That's what it means to become a developer."
"Now you have been introduced to the foundations. Next you will go onto building. But you're ready now because you understand HOW to learn, not just WHAT to memorize. Keep asking questions. Keep verifying. Keep owning your work. That's the path forward." 🟠