Looking Back at How I Learned
Six stations. Six different DOM concepts. Dozens of mistakes. And somewhere along the way, I stopped being someone who accepted whatever the AI generated and started being someone who owned the code.
When I look back at my journey through Stations 1 through 6, the technical skills matter — I can find elements, read their content, change their styles, and build new nodes from scratch. But the most important things I learned weren't DOM methods.
They were about how I learned. Every station involved AI in some way. I asked it questions. I trusted its answers. I got burned by its answers. I figured out when to push back, when to lean in, and when to bring my own defensive instincts to the table.
This is about the framework I built — sometimes painfully — for learning WITH AI.
When the AI Uses Outdated Code
Back in Station 2, I asked the AI to help me access an element on my Robot ID Card. It gave me
getElementById — which works, but it's the old way. I almost moved on. Then I connected
the MDN MCP server and asked six specific questions about how querySelector compared to
getElementById.
What I found changed everything. The MDN documentation showed me that
querySelector uses the same CSS selector syntax I was already learning, that it's more
flexible, and that it's the modern standard. The AI had given me working code, but it wasn't the
best code.
That experience taught me a pattern I've used ever since:
The Pattern: Notice, Stop, Interrogate, Understand, Codify
I noticed something unfamiliar. I stopped instead of accepting it. I interrogated the source documentation. I understood the difference. And I codified what I learned into a rule I could follow going forward.
When the AI generates code with an unfamiliar method, or someone flags something, don't accept it and don't swap it. Stop and understand WHY.
When the AI Explains Solid Fundamentals
Station 3 was different. I needed help understanding CSS selectors — how classes, IDs, and element selectors work together when targeting DOM elements. I asked the AI, and it walked me through the syntax clearly and accurately.
This time, I didn't push back. I didn't connect to MDN to double-check every detail. And that was the right call. CSS selector syntax is stable, well-documented, fundamental knowledge. The AI wasn't guessing or using outdated patterns — it was explaining something that hasn't changed in years and won't change tomorrow.
Pushing back isn't the default. It's the response to a specific signal — an unfamiliar method, a suspicious pattern, a claim that doesn't match what I've seen before. When none of those signals are present, pushing back wastes time and builds a habit of distrust that makes learning harder.
The Lesson
Not every AI answer needs pushback. When the knowledge is stable and there are no red flags — lean in.
When the AI Does Not Warn You
This one still bothers me. Across Stations 3 and 4, I was updating content on my Robot ID Card. The AI
generated code that used innerHTML to insert content into the page. It worked perfectly.
The AI never mentioned any problems. No warnings, no alternatives, no caveats.
Then I checked MDN. Right there in the documentation was a security warning about
cross-site scripting (XSS) attacks. If user input ever made it into an innerHTML call, an
attacker could inject malicious scripts into my page. The AI had given me code with a known security
vulnerability and said nothing.
That's when Grace Hopper's words hit me hardest:
Grace Hopper
"The AI did not warn you about innerHTML. MDN did. That is why you go to the
source. An AI assistant generates plausible code. A documentation source tells you what that
code actually does — including what can go wrong."
The defensive instinct — thinking about what could go wrong — that's something I have to bring myself. The AI optimizes for "does it work?" I have to also ask "could it break? Could it be exploited? What am I not seeing?"
Two MCP Servers, Two Jobs
One of the most practical things I learned was that different questions need different sources. I use two MCP servers, and they each have a specific job:
MDN MCP Server: mcp.mdn.mozilla.net
Use for: Web platform questions — HTML elements, CSS properties,
DOM methods, JavaScript APIs. This is where I found the innerHTML security
warning. This is where I compared querySelector to getElementById.
Anything that's part of the browser itself lives here.
Context7
Use for: Library and framework questions — Vite configuration, Astro components, React hooks, framework-specific APIs. When the AI gave me Vite 5 instructions but I was using Vite 7, Context7 had the versioned docs that showed me the correct syntax.
The Rule
Web platform question? MDN MCP. Library or framework question? Context7. Knowing which source to reach for is half the battle. The other half is actually reaching for it instead of trusting the AI's first answer.
The Framework I Built
Across six stations, a pattern kept emerging. Every time I handled an AI interaction well, I followed the same steps. Every time I got burned, I'd skipped one of them. Here's the framework:
NOTICE → STOP → INTERROGATE → UNDERSTAND → CODIFY
- NOTICE — Something is unfamiliar, flagged, or feels off. A method I haven't seen. A pattern that doesn't match what I've learned. A missing warning.
- STOP — Don't accept it. Don't swap it out for something else. Don't move on.
- INTERROGATE — Go to the source. Connect the MCP server. Read the actual documentation. Ask specific questions.
- UNDERSTAND — Know why the code works the way it does. Know what the tradeoffs are. Know what could go wrong.
- CODIFY — Turn what I learned into a rule I can follow next time without having to rediscover it.
The meta-lesson is about when each response is appropriate:
Push Back
When the AI uses unfamiliar methods, when someone flags a problem, when the code doesn't match what documentation says. This is the full framework: NOTICE through CODIFY.
Lean In
When the knowledge is stable and fundamental, when there are no red flags, when the AI is explaining well-established concepts. Trust — but stay alert for signals.
Think Defensively
Always. Even when the code works. Especially when the AI doesn't mention any problems. Ask: "What could go wrong? What isn't the AI telling me? What would MDN say about this?"
HAP's Confession
The innerText Incident
The AI told me to use innerText to read element content, and I trusted it without
checking. It worked in my tests, so I moved on. Later I discovered that innerText
triggers a reflow — the browser has to recalculate the layout to figure out what's visually
rendered. For my little Robot ID Card, it didn't matter. But if I'd built that habit into a larger
project with hundreds of elements, the performance cost would have added up. MDN would have told
me about textContent as the better default.
Quick Reference
Push Back When Flagged
When someone flags an issue with AI-generated code, or when the AI uses a method I haven't seen before, I don't accept it and I don't swap it. I stop, connect the MCP server, read the documentation, and understand why before I decide what to do.
Lean In on Stable Knowledge
Not everything needs pushback. When the AI explains fundamental, stable concepts — CSS selector syntax, basic JavaScript operators, HTML element structure — and there are no red flags, I lean in and learn. Unnecessary skepticism slows me down and builds distrust.
Bring Your Own Defensive Instinct
The AI won't warn me about security vulnerabilities, performance costs, or edge cases unless I ask. The defensive instinct — "what could go wrong?" — is something I have to bring myself. Always check the documentation, especially when the code works without apparent issues.
MDN for Web Platform, Context7 for Libraries
Web platform questions — HTML, CSS, DOM, JavaScript — go to MDN MCP. Library and framework questions — Vite, Astro, React — go to Context7. Knowing which source to reach for saves time and gets me accurate, versioned answers.
Ask Specific Questions
"Explain the DOM" gets a useless wall of text. "How does querySelector find an element by its class name?" gets a focused, useful answer. The quality of what I get from AI is directly proportional to the specificity of what I ask.
Lab Complete
Six Stations Complete
Six stations ago, I thought the page was done once the HTML loaded. Now I know the page is
never done — it's a living document that JavaScript can find, read, change, and build upon.
I learned to access elements with querySelector, read their content with
textContent, change their styles and attributes, and create entirely new nodes
from scratch.
But the bigger lesson was about the process. I started as someone who accepted whatever the AI generated. Now I push back when something looks wrong, lean in when the knowledge is solid, and bring my own defensive instincts to every interaction. I own my code — every line of it, whether I wrote it or the AI did.
That's growth. 🟠