From user story to Cypress test in 60 seconds
A QA engineer typically spends 2–4 hours writing a Cypress test. With a well-built AI workflow it takes 60–90 seconds, not because AI is a faster tester, but because it removes the repetitive boilerplate work.
This article is a hands-on guide. We use Claude Code, but the principle works with Cursor or Copilot Chat too.
Prerequisites
- Existing Cypress + TypeScript project
- Page Object pattern (Claude follows it)
- CLAUDE.md with project conventions (see the previous article)
- A clearly written user story
Step 1: Prepare the user story
A bad user story = a bad test. This is the format that works:
AS a registered user I WANT to add a product to my favorites SO THAT I can find it on my next visit.

Acceptance:
- Clicking the heart icon on the product detail page adds the product to favorites
- Returning to /favorites shows the added product
- Clicking the heart again removes the product
- A logged-out user gets a "please sign in" modal
With concrete acceptance criteria, AI can write four test cases instead of one generic one.
Step 2: Prompt
> Write a Cypress test for this user story.
>
> [paste user story here]
>
> Context:
> - The test goes in cypress/e2e/favorites/
> - Use the existing productPage and favoritesPage (if they exist; otherwise generate them in cypress/support/pages/)
> - Also cover the logged-out user scenario (expect a modal with data-qa="auth-required-modal")
> - Use the custom command cy.loginAs('standard') for login
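For the story above, the generated spec typically looks like the sketch below. All page object methods here (openProduct, toggleFavorite, shouldContainProduct, shouldNotContainProduct, heartIcon) are illustrative names, not a guaranteed API; verify them against your own page objects before trusting the output.

```typescript
// favorites.spec.ts -- illustrative sketch, one it() per acceptance criterion
import { productPage } from '../../support/pages/productPage';
import { favoritesPage } from '../../support/pages/favoritesPage';

describe('Favorites', () => {
  beforeEach(() => {
    cy.loginAs('standard');
    productPage.openProduct('demo-product'); // hypothetical helper
  });

  it('adds the product to favorites on heart click', () => {
    productPage.toggleFavorite();
    productPage.heartIcon().should('have.class', 'active'); // adjust to your DOM
  });

  it('shows the added product on /favorites', () => {
    productPage.toggleFavorite();
    favoritesPage.visit();
    favoritesPage.shouldContainProduct('demo-product');
  });

  it('removes the product on a second heart click', () => {
    productPage.toggleFavorite();
    productPage.toggleFavorite();
    favoritesPage.visit();
    favoritesPage.shouldNotContainProduct('demo-product');
  });

  it('shows an auth modal for a logged-out user', () => {
    cy.clearCookies();
    productPage.openProduct('demo-product');
    productPage.toggleFavorite();
    cy.get('[data-qa="auth-required-modal"]').should('be.visible');
  });
});
```

Note how each acceptance criterion maps to exactly one it() block; that mapping is what you will verify in the next step.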
Step 3: Validate the output (60 seconds)
AI buys you speed, not a license for blind trust. A junior reviewer can run these three checks in a minute:
- Run it: npx cypress run --spec 'cypress/e2e/favorites/*.spec.ts'. If it fails on a selector, the AI hallucinated it.
- Acceptance coverage: open the spec and count. Are there 4 it() blocks for the 4 acceptance criteria? If not, ask again.
- Negative cases: does it test the logged-out scenario? Does it test the second click (removal)?
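The acceptance-coverage check can even be scripted. A minimal sketch in plain TypeScript; the spec is inlined as a string here, and in practice you would read the file from disk. The regex is a heuristic, not a parser:

```typescript
// Count it() blocks in a spec's source -- a quick proxy for acceptance coverage.
function countItBlocks(specSource: string): number {
  // matches `it(` and `it.only(` but not `fit(` or words merely containing "it"
  const matches = specSource.match(/\bit(?:\.only)?\(/g);
  return matches ? matches.length : 0;
}

const spec = `
describe('Favorites', () => {
  it('adds the product to favorites', () => {});
  it('shows the added product on /favorites', () => {});
  it('removes the product on a second click', () => {});
  it('shows an auth modal when logged out', () => {});
});
`;

console.log(countItBlocks(spec)); // → 4, matching the 4 acceptance criteria
```

If the count is lower than the number of acceptance criteria, that is your cue to ask the AI again.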
Common AI mistakes when generating tests
- Invented selectors: the AI sees data-qa="login-btn" in a previous test and assumes the product page follows the same pattern. Validation means actually opening the page and looking at the DOM.
- Missing beforeEach(): the AI sometimes omits cleanup, which leads to flaky tests under parallel runs.
- Hardcoded cy.wait(2000): a classic. Even with a good CLAUDE.md it sometimes slips in. Stop and fix it.
- Happy path only: if the acceptance criteria include a negative branch, the AI sometimes omits it. Ask for it explicitly.
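The hardcoded-wait mistake is easy to catch mechanically. A minimal lint-style sketch (again a regex heuristic, not a full parser) that flags numeric waits while allowing alias waits:

```typescript
// Flags hardcoded waits like cy.wait(2000) while allowing
// alias waits like cy.wait('@getFavorites').
function hasHardcodedWait(specSource: string): boolean {
  return /cy\.wait\(\s*\d/.test(specSource);
}

console.log(hasHardcodedWait("cy.wait(2000)"));            // true  -> stop and fix
console.log(hasHardcodedWait("cy.wait('@getFavorites')")); // false -> fine
```

A check like this can run in a pre-commit hook or CI step, so the mistake never reaches review in the first place.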
A meta-prompt that rules out 80% of issues
Paste this at the start of every generation prompt:
Rules:
- Before writing the test, open the relevant page objects and verify the selectors
- No cy.wait(ms). Use cy.intercept() or .should() instead
- One it() block per acceptance criterion
- Include negative scenarios where they logically apply
- Use existing factories and fixtures instead of inline data
- If you can't tell something from the code, ask instead of hallucinating
The difference in output quality is noticeable.
When AI isn’t enough
If the user story describes a flow with critical business logic (payments, regulation, GDPR), don't make AI the author of the test; use it as a helper. Write the skeleton yourself and let AI fill in the Page Object calls and assertions.
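In practice that skeleton can be as small as this sketch. Everything here is illustrative: the flow, the TODO markers, and helper names like cy.loginAs() are placeholders for your own conventions, and the AI only fills in the marked bodies:

```typescript
// checkout.spec.ts -- human-written skeleton for a critical payment flow;
// the TODO(AI) bodies are the only parts you delegate.
describe('Checkout payment', () => {
  beforeEach(() => {
    cy.loginAs('standard');
    // TODO(AI): seed the cart via an existing factory and visit the checkout page
  });

  it('charges the card exactly once on double submit', () => {
    // TODO(AI): page object calls + assertion that the payment request fires once
  });

  it('shows a recoverable error when the gateway times out', () => {
    // TODO(AI): cy.intercept() the gateway with a delay, assert the retry UI
  });
});
```

You keep ownership of which scenarios exist and what they assert; the AI only saves you the typing inside each block.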
Want the same approach at your company? Get in touch and we'll set up a 30-minute discovery call.