One
prompt. One HTML file. A study app that might just save the day.
My wife is studying for a professional business exam — one of those
courses where passing isn’t enough. You have to hit a specific
percentage to progress, and she’s come up short twice. Not because she
doesn’t understand the material — she does. The problem is the exam uses
different language from the mock papers she’s been studying. She knows
the concepts, but when the question is phrased a different way, she
second-guesses herself.
I decided to try something. I’d been using Claude (Anthropic’s AI)
for various projects, and I wanted to see if it could help build
something genuinely useful for her. What happened next was more useful
than I expected.
Where it started: boring data
The course provides practice mock exams as web pages — I’d saved them
locally so they’d be easy to work with. Two full mocks, 60 questions
each. Not the most exciting format to study from. The first thing I
needed was to get the content out of those files in a readable form.
I asked Claude to extract all the questions and answers, and to flag
which answers were correct. The scoring was embedded in the HTML: 1/1
meant the selected answer was right, 0/1 meant it was wrong, though
that alone doesn't reveal the correct answer, only that it wasn't the
one my wife chose.
Claude read through both files, parsed the HTML, and produced clean
markdown documents — all questions listed, correct answers flagged, and
a summary table at the end showing exactly which ones she’d got wrong
and what the right answers were.
That part took a few back-and-forth messages to get right. HTML
parsing has edge cases. But it got there.
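For anyone curious what that extraction step looks like in code: here's a rough sketch of pulling question text and the embedded "1/1" / "0/1" score markers out of a saved mock page. The markup here (the `question` div class, the score living inside the block) is invented for illustration; real pages would need their own selectors, and nested markup would need a proper parser rather than regexes.

```javascript
// Hedged sketch: extract each question block and its "n/1" score marker
// from a saved results page. The "question" class and overall structure
// are assumptions, not the course's real markup.
function extractResults(html) {
  const results = [];
  // Non-greedy match per block; fine for flat markup, breaks on nested divs.
  const blockRe = /<div class="question">([\s\S]*?)<\/div>/g;
  let m;
  while ((m = blockRe.exec(html)) !== null) {
    const block = m[1];
    // Strip remaining tags to get readable question text.
    const text = block.replace(/<[^>]+>/g, " ").trim();
    // "1/1" means answered correctly, "0/1" means answered wrongly.
    const score = block.match(/([01])\/1/);
    results.push({
      text,
      correct: score ? score[1] === "1" : null,
    });
  }
  return results;
}
```

From there, producing the markdown summary is just a matter of iterating over the results and flagging the `correct: false` entries.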
Then I wrote one prompt
Once the data was extracted, I explained the actual problem:
“My wife hasn’t hit the required percentage to progress — twice now.
She feels the questions in the exam don’t use the same language as the
mocks. Can you perhaps think of ways that we can help her test her
understanding to pass the exam. The exam is a 60 question multiple
choice. If you can create an interactive monolithic document to test her
knowledge that would be great. It would be useful if we can generate
more random mock tests in future too.”
One paragraph. That’s it.
Claude came back with a plan and then built it: a single
self-contained HTML file with everything baked in. No server required,
no installation, no dependencies. You open it in a browser and it works
— on a laptop, on a phone, on a tablet.
What it actually built
The app has four modes:
Mock Exam mode — 60 questions, randomised each time,
with a timer running. No feedback until the end. This mimics actual exam
conditions so she isn’t training herself to expect instant answers.
Practice mode — choose a specific topic, work
through questions at your own pace, and get an explanation after each
answer. Wrong answers don’t just say “incorrect” — they explain
why.
Weak Areas mode — this one is particularly clever.
It weights questions towards the topics where my wife scored poorly
across both mocks: economies of scale, inflation types, macro policy,
types of business integration, and the business cycle. She can drill
into the exact things she’s struggling with.
Glossary flashcards — I’d also supplied a business
environment glossary (58 terms). Claude turned these into flip cards,
shuffled each time. Front shows the term, flip it to see the
definition.
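The post doesn't show the app's internals, but the two behaviours described above (a fresh order every run, and weak topics surfacing more often) are straightforward to sketch. The topic names, accuracy figures, and function names below are all invented for illustration, not taken from the generated file.

```javascript
// Fisher–Yates shuffle: a fresh, unbiased ordering each run.
function shuffle(arr) {
  const a = arr.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// One plausible "Weak Areas" weighting: lower past accuracy on a
// question's topic means a higher chance of being selected.
function pickWeighted(questions, accuracyByTopic) {
  const weights = questions.map(q => 1 - (accuracyByTopic[q.topic] ?? 0.5));
  const total = weights.reduce((sum, w) => sum + w, 0);
  let r = Math.random() * total;
  for (let i = 0; i < questions.length; i++) {
    r -= weights[i];
    if (r <= 0) return questions[i];
  }
  return questions[questions.length - 1];
}
```

A topic she'd scored 40% on would get weight 0.6 against 0.1 for a topic at 90%, so the weak topic comes up six times as often.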
Here’s the part that matters most: every single question in the app
was deliberately rephrased. Claude didn’t just copy the questions from
the mocks. It rewrote them to test the same knowledge using different
language. So instead of “which of the following indicates economies of
scale?” you might get “what does it mean when average total costs fall
as output rises?” — same concept, different words.
That’s the whole point. If my wife can answer the question regardless
of how it’s phrased, she actually understands it.
Why this matters beyond my wife's exam
I’ve been using AI tools for a few years now. I work in web hosting
and spend a lot of time with code, servers, and technical problems. But
this was different — this was an AI solving a personal, human problem in
a practical, immediately useful way.
The thing that stuck with me: the actual “magic” happened in one
prompt. All the back-and-forth before that was just getting the data
into a usable format. The moment I described what the real problem
was — not “build me a quiz app” but “she understands the material
but gets thrown by different phrasing” — Claude built something that
addressed the root cause.
That’s the skill with these tools. Not just knowing how to ask, but
knowing what you’re actually trying to solve.
The file works offline. It shuffles differently every time. It covers
120+ questions. It explains wrong answers. It runs on any device. And it
took one paragraph to describe what I needed.
I’m not saying AI replaces good teaching or genuine hard work. She
still has to sit there and actually study. But having a tool that
generates a fresh 60-question mock whenever she wants one, tuned to the
areas she struggles with, and using different language every time —
that’s something that genuinely didn’t exist for her before last
week.
She’s got another shot at the exam coming up. I think she’ll pass
this time.
If you want to try something similar for your own study
materials, the approach is straightforward: get your questions into any
readable format, feed them to Claude with a clear description of the
problem you’re trying to solve, and ask for a single-file HTML app. The
“monolithic document” framing is the key — no servers, no frameworks, no
setup. Just one file that opens anywhere.