Vendor systems

Testing software from a vendor? Why trusting them isn't enough

If your company buys software from another vendor — whether it is a CRM, billing, approval system, reservation portal or internal integration — you have one often-overlooked problem: their QA reports are not independent. The team that develops the software also tests it, and quality is judged by people who are financially dependent on the outcome.

In regulated industries (banks, insurers, public sector) auditors have long named this a conflict of interest. In the commercial sphere it is still trusted too much. And when an incident happens, the buyer pays — of course — not the vendor.

Why vendor QA reports aren't enough

  • Time competition — the vendor has a fixed project budget. Every QA hour is an hour not spent on 'new features'. When things slip, QA is the easiest thing to cut.
  • Home-grown knowledge — the person who codes also tests. They know the weak spots and subconsciously avoid them. A fresh external pair of eyes catches in 5 minutes a bug the internal team has missed for 3 months.
  • Conflict of reporting — if internal QA finds a critical bug a week before release, who tells the client 'release is delayed'? The pressure to soften severity is real.
  • Edge scenarios — the vendor tests the happy path. You need to test your specific workflow: your integration, your data, your role matrix. Nobody else tests that.

What an 'independent test layer' is

Instead of relying only on the vendor's UAT reports, you build your own test layer on top of their system. It:

  • Runs automatically before every deployment (not once per release)
  • Tests your use case, not generic functionality
  • Reports results to you, not to the vendor
  • Is your asset — the vendor has no access
  • In disputes it gives you auditable evidence

Real example — application approval in energy

A vendor built an energy company a portal for approving grid connection applications. After 18 months in production it turned out that 10% of connection applications were getting lost when transitioning between statuses. The vendor had not seen it — their test data did not contain this edge combination.

When we built an independent test layer:

  • Cypress + Playwright scenarios covering 40+ combinations of customer type, location, capacity
  • Test data in Excel — the business team adds scenarios without IT
  • Every night at 2:00 the whole suite runs against the staging environment
  • The delta between expected and actual behaviour is reported directly to our client, not to the vendor
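The Excel sheet from the example above can be exported to CSV and fed straight into the suite as parametrized scenarios. A minimal Python sketch of the data-loading side — the column names and values here are illustrative, not the client's real schema:

```python
import csv
import io

# Illustrative CSV export of the business team's scenario sheet;
# the real columns are project-specific (customer type, location, capacity...).
SCENARIOS_CSV = """customer_type,location,capacity_kw,expected_status
household,urban,10,approved
business,rural,250,manual_review
household,rural,5,approved
"""

def load_scenarios(text):
    """Parse scenario rows; each row drives one UI/API test run."""
    return list(csv.DictReader(io.StringIO(text)))

if __name__ == "__main__":
    for scenario in load_scenarios(SCENARIOS_CSV):
        # In the real suite each row would parametrize a Playwright/Cypress
        # scenario; here we only show the data shape the tests consume.
        print(scenario["customer_type"], "->", scenario["expected_status"])
```

The point of the CSV/Excel indirection is that the business team can add a 41st combination without touching test code — only the data file changes.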

In the first month the layer caught 7 regressions, of which 3 were critical and unknown to the vendor. The project also became leverage in contract negotiations — the client had auditable evidence that shifted their position in the next SLA round.

Technical approach

The independent layer requires neither DB access nor vendor cooperation. Typical stack:

  • UI tests (Playwright / Cypress) — simulate real users against the vendor's UI
  • API tests (Postman / REST Assured) — if the vendor exposes an API, we test the contract directly
  • Integration tests — test that your system talks to the vendor's correctly (what if their API returns null instead of an empty array?)
  • Load tests (Locust / JMeter) — peak-traffic scenarios (e.g. the billing run on day 1 of the month)
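The "null instead of an empty array" case is exactly what an explicit contract check catches. A hedged Python sketch — the payload shape and field names are made up for illustration, not the vendor's actual API:

```python
def check_applications_contract(payload):
    """Verify the response shape our integration depends on.

    We test behaviour, not the vendor's implementation: the function
    returns a list of human-readable violations (empty list = contract holds).
    """
    violations = []
    apps = payload.get("applications")
    # A vendor may return null here instead of []; downstream code that
    # assumes an iterable would crash, so we flag it before production does.
    if apps is None:
        violations.append("'applications' is null, expected an array (possibly empty)")
    elif not isinstance(apps, list):
        violations.append(
            f"'applications' has type {type(apps).__name__}, expected list"
        )
    else:
        for i, app in enumerate(apps):
            if "status" not in app:
                violations.append(f"applications[{i}] is missing 'status'")
    return violations

if __name__ == "__main__":
    print(check_applications_contract({"applications": []}))    # contract holds
    print(check_applications_contract({"applications": None}))  # the null regression
```

In practice a check like this runs after every vendor deployment, against the real endpoint response, and any non-empty violation list fails the pipeline.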

Everything runs in your CI/CD pipeline. Nothing is deployed to the vendor's infrastructure. If the vendor changes the UI or API, your layer catches it as a regression and notifies you.
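The nightly 2:00 run can be scheduled entirely in your own pipeline. A sketch as a GitHub Actions workflow — the repository layout, job name and secret name are assumptions, not a prescribed setup:

```yaml
# .github/workflows/nightly-vendor-tests.yml
name: nightly-vendor-tests
on:
  schedule:
    - cron: "0 2 * * *"    # nightly run against the vendor's staging
  workflow_dispatch:        # allow a manual run before any vendor deployment
jobs:
  ui-and-api:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      # BASE_URL points at the vendor's staging environment; results land
      # in your own reporting channel — the vendor never sees this repo.
      - run: npx playwright test
        env:
          BASE_URL: ${{ secrets.VENDOR_STAGING_URL }}
```

The same workflow can be triggered manually before a vendor deployment, which is how "runs before every deployment" is achieved without any access to the vendor's infrastructure.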

What we AREN'T — and why we'll flag it

Some things independent testing objectively cannot do and clients occasionally ask us about:

  • Vendor's internal code — we have no access and do not want it. We test behaviour, not implementation.
  • Data integrity without DB access — if the vendor does not export state to us, we cannot verify data is consistent 'inside'. We can verify only what is visible via UI/API.
  • 100% coverage guarantee — even with an independent layer, bugs can appear in scenarios nobody documented. But we reduce the quantity by an order of magnitude.

How much it costs

For a typical system (15–30 critical flows): €3,000–6,000 one-off setup, then maintenance only if you want it. Compare with the cost of a single production incident — often tens or hundreds of thousands — and the ROI is clear at the first caught bug.

By sector we do this most often for:

  • Energy (approval workflows on top of vendor software)
  • Public sector (portals on top of outsourced systems)
  • Banking (testing integrations with 3rd-party payment / scoring services)
  • Telecom (billing flows on top of the vendor engine)

Need to vet a vendor from a budget angle? Concrete prices for Cypress/Playwright framework, CI/CD, and other components are in How much does test automation cost in 2026.


If you don't want to speculate, write to us — we'll show you a concrete budget and timeline for your project. No hourly rates. No retainer. No sales rituals.