December 10, 2025
Migrations are high-stakes moments for web performance. Whether you're moving to a new front-end framework, switching backends, or undertaking a full platform overhaul, the risk of performance regression is significant - and the cost of getting it wrong can be enormous.
This guide covers what metrics to collect before, during, and after a migration, how to establish meaningful baselines, and how to protect your business from performance degradation.
Every millisecond of additional page load time costs you money. Amazon famously found that every 100ms of added latency cost it roughly 1% in sales. During a migration, it's easy to focus on feature parity, timelines, and functionality while overlooking the one thing that affects every single user interaction: speed.
We've seen migrations that looked successful on paper - all features working, all tests passing - only to discover weeks later that conversion rates had dropped, bounce rates had climbed, and the "successful" migration was quietly bleeding revenue.
Before touching any code, you need a comprehensive performance baseline. This isn't just about having numbers to compare against - it's about understanding why your current site performs the way it does.
Start with what Google already collects via the Chrome User Experience Report (CrUX):
CrUX data gives you the 75th percentile of real user experiences, which is what Google uses for search ranking signals. However, CrUX alone isn't granular enough for migration work.
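If you want to pull that field data programmatically for your baseline, the CrUX API exposes the same p75 metrics, split by form factor. A minimal sketch - the origin, API key handling, and metric selection are illustrative, not prescriptive:

```typescript
// Minimal sketch: query the CrUX API for p75 field data on a given origin.
// CRUX_API_KEY and the origin below are placeholders - substitute your own values.
const CRUX_API_KEY = process.env.CRUX_API_KEY;

async function fetchCruxBaseline(origin: string, formFactor: "PHONE" | "DESKTOP") {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        origin,
        formFactor,
        metrics: [
          "largest_contentful_paint",
          "interaction_to_next_paint",
          "cumulative_layout_shift",
        ],
      }),
    }
  );
  const { record } = await response.json();

  // Each metric exposes its 75th percentile - the value used as a ranking signal.
  for (const [name, data] of Object.entries<any>(record.metrics)) {
    console.log(`${name} (p75, ${formFactor}):`, data.percentiles.p75);
  }
}

fetchCruxBaseline("https://www.example.com", "PHONE");
```

Running it once for PHONE and once for DESKTOP gives you a dated, repeatable snapshot to compare against after launch.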
For a front-end migration (especially on existing backend APIs), collect these additional metrics:
Dig deeper into what's actually being delivered to users:
Performance migrations can have devastating SEO consequences if not monitored properly:
A common migration failure is omitting critical metadata that was previously handled by the old system. Audit this thoroughly before launch.
This is where many migrations fall apart. The development team tests on fast MacBooks with fibre connections, everything looks great, and then real users on mid-range Android phones with spotty 4G connections have a completely different experience.
Before you go live, you need to systematically test across the range of devices and network conditions your actual users experience.
Your development machine is not representative of your users. A typical developer setup might have an 8-core CPU with high single-thread performance, 16-32GB RAM, SSD storage, Gigabit ethernet or fast WiFi, and the latest browser version.
Meanwhile, a significant portion of your users are on budget or mid-range smartphones (2-4 year old devices), 4G connections with variable latency, shared network bandwidth, older browser versions, and limited device memory.
A page that renders in 1 second on your machine might take 8+ seconds on a real user's device. If you're not testing for this, you're flying blind.
Before you can test the new platform properly, you need to know what conditions to test against. Pull this data from your analytics:
Our RUM platform provides this breakdown automatically, showing you exactly what devices and network conditions your real users experience - segmented by Core Web Vitals performance. This tells you not just what devices people use, but which ones are struggling.
If you don't have RUM data, Google Analytics provides device categories and some network data. CrUX data also includes form factor breakdowns.
At an absolute minimum, every page should be tested with Chrome DevTools throttling enabled. This isn't perfect emulation, but it catches the most egregious issues.
To enable throttling: Open DevTools (F12 or Cmd+Option+I), go to the Performance tab or Network tab, and click the throttling dropdown (shows "No throttling" by default).
Recommended test profiles (these mirror the go-live checklist later in this guide):
- Fast 4G network + 4x CPU slowdown - a typical mid-range mobile experience
- Slow 4G network + 6x CPU slowdown - a worst-case mobile experience
- Fast 4G network, no CPU throttling - desktop users on constrained connections
Important: CPU throttling and network throttling are configured in different places in DevTools. For realistic mobile testing, you need both: CPU throttling in Performance tab > Gear icon > CPU throttling, and Network throttling in Network tab > Throttling dropdown.
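If you'd rather script these runs than click through DevTools for every page, the same network and CPU throttling can be applied with Puppeteer. A rough sketch - the throughput and latency numbers are assumptions to tune against your own analytics:

```typescript
// Sketch: apply both network and CPU throttling in Puppeteer, mirroring the
// DevTools settings described above. Values approximate a "Fast 4G" profile.
import puppeteer from "puppeteer";

async function auditThrottled(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Network throttling: values are bytes per second and milliseconds.
  await page.emulateNetworkConditions({
    download: (4 * 1024 * 1024) / 8, // ~4 Mbps down
    upload: (1 * 1024 * 1024) / 8,   // ~1 Mbps up
    latency: 150,                    // ~150 ms round-trip
  });

  // CPU throttling: 4x slowdown, roughly a mid-range phone.
  await page.emulateCPUThrottling(4);

  await page.goto(url, { waitUntil: "networkidle0" });

  // Pull basic navigation timings for comparison against the baseline run.
  const timing = await page.evaluate(() =>
    JSON.stringify(performance.getEntriesByType("navigation")[0])
  );
  console.log(timing);

  await browser.close();
}

auditThrottled("https://staging.example.com/product/example");
```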
For each critical page type (homepage, product page, checkout, etc.), test with throttling enabled and record: LCP time, time to interactive, INP/responsiveness, visual stability, and JavaScript errors.
Compare these results against your baseline measurements from the current production site under the same throttled conditions. If the new platform is significantly slower under throttling, you have a problem that won't show up in your fast-connection testing.
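If you want numbers logged as you test rather than read off the Performance panel, a small PerformanceObserver snippet can record LCP and CLS on the page. This is a rough sketch, not a full RUM setup; for INP and a more complete picture, the open-source web-vitals library is the usual tool:

```typescript
// Sketch: log LCP candidates and a running CLS score while manually testing a
// page under throttled conditions.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log("LCP candidate (ms):", last.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    // Ignore shifts caused by recent user input, per the CLS definition.
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log("CLS so far:", clsScore.toFixed(3));
}).observe({ type: "layout-shift", buffered: true });
```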
DevTools throttling is a useful approximation, but it has limitations. For critical migrations, consider real device testing (keep a few older Android devices for manual testing) and cloud device labs like BrowserStack or LambdaTest.
This is a silent killer of migration testing. Your staging environment might look identical to production, but subtle infrastructure differences can mask serious performance problems that only appear after go-live.
Common staging vs production mismatches:
What to do about it: Document the known differences between staging and production infrastructure. For critical performance testing, consider testing against a production replica with realistic data volumes. If possible, run performance tests against production during low-traffic periods. Add latency simulation to staging database connections. Test with production-like data volumes.
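As one illustration of the latency-simulation idea, a crude but effective approach is to inject an artificial delay in front of staging API responses. A minimal Express sketch - the 80ms figure, route, and environment variable are placeholders; measure your real production round-trips and use those:

```typescript
// Sketch: add artificial latency to a staging API so it behaves more like
// production, where database and network round-trips add real delay.
import express from "express";

const app = express();
const SIMULATED_LATENCY_MS = Number(process.env.SIMULATED_LATENCY_MS ?? 80);

// Delay every request by a fixed amount before the real handler runs.
app.use((_req, _res, next) => {
  setTimeout(next, SIMULATED_LATENCY_MS);
});

app.get("/api/products", (_req, res) => {
  res.json({ products: [] }); // placeholder handler
});

app.listen(3000);
```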
The goal is to eliminate surprises. Every difference between staging and production is a potential source of performance issues that won't be caught until real users are affected.
Before go-live, ensure you've tested each critical page under the following conditions:
- Desktop, no throttling (developer baseline)
- Desktop, slow network (Fast 4G)
- Mobile emulation, no throttling
- Mobile emulation, Fast 4G network + 4x CPU slowdown
- Mobile emulation, Slow 4G network + 6x CPU slowdown
- Real device, if available
Document the results and compare against the same tests run on the current production site. Any significant regression should be investigated and resolved before launch.
Throttled testing tends to surface the same culprits:
- Massive JavaScript bundles (acceptable on fast connections, crippling on slow ones)
- Unoptimized images (large hero images that block LCP)
- Render-blocking resources (CSS or JS that delays first paint)
- Excessive API calls (multiple sequential requests that compound latency)
- Heavy third-party scripts (analytics and marketing tags that compete for bandwidth)
- Missing loading states (UI that appears broken while waiting for data)
These issues are often invisible during normal development but become painfully obvious under throttled conditions.
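Many of these have straightforward fixes. As one example, a heavy dependency can be split out behind a dynamic import with an explicit loading state, so slow connections see feedback instead of a blank screen. A sketch - the "./charting-lib" module and render function are purely illustrative:

```typescript
// Sketch: defer a heavy dependency with a dynamic import and show a loading
// state while the chunk downloads on slow connections.
async function renderDashboard(container: HTMLElement) {
  container.textContent = "Loading dashboard…"; // visible while the chunk downloads

  try {
    // The charting bundle is only fetched when this view is actually opened.
    const { renderCharts } = await import("./charting-lib");
    renderCharts(container);
  } catch {
    container.textContent = "Dashboard failed to load. Please retry.";
  }
}
```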
If possible, run the old and new systems in parallel (A/B or canary deployment) and compare real user metrics between cohorts, conversion rates by platform version, error rates and JavaScript exceptions, and API response times.
Synthetic tests are useful, but nothing beats real user data. Our RUM platform is specifically built for this - focusing on Core Web Vitals and third-party script impact, which are the two areas most likely to regress during migrations. You'll see exactly how your new platform performs across different devices, connection speeds, and geographies, with granular visibility into which third-party scripts are affecting LCP, INP, and CLS.
This is particularly valuable during staged rollouts, where you can compare real user metrics between old and new versions in real-time.
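One lightweight way to get that cohort-level comparison is to tag every Core Web Vitals beacon with the platform version. A sketch using the open-source web-vitals library - the endpoint, version label, and payload shape are placeholders for whatever your RUM or analytics pipeline expects:

```typescript
// Sketch: report Core Web Vitals tagged with the platform version, so a staged
// rollout can compare old and new cohorts side by side.
import { onLCP, onINP, onCLS, type Metric } from "web-vitals";

const PLATFORM_VERSION = "new-frontend"; // or "legacy" on the old stack

function report(metric: Metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    platform_version: PLATFORM_VERSION,
  });
  // sendBeacon survives page unloads better than fetch for final metrics.
  navigator.sendBeacon("/rum", body);
}

onLCP(report);
onINP(report);
onCLS(report);
```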
Run consistent synthetic tests against both versions from the same locations: WebPageTest for detailed waterfall analysis, Lighthouse CI for automated regression detection, and custom scripts for critical user journeys.
Set explicit thresholds that will block deployment if exceeded: JavaScript bundle size limits, LCP targets by page type, and INP thresholds based on current baseline.
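As a sketch of what such budgets can look like in practice, here is an illustrative Lighthouse CI configuration. The URLs and numeric limits are placeholders, and Total Blocking Time stands in for INP because lab tools can't measure real interactions:

```typescript
// lighthouserc.js - illustrative Lighthouse CI configuration; adjust URLs,
// run counts, and limits to your own baseline.
module.exports = {
  ci: {
    collect: {
      url: [
        "https://staging.example.com/",
        "https://staging.example.com/product/example",
      ],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the build if lab LCP exceeds the page-type target.
        "largest-contentful-paint": ["error", { maxNumericValue: 2500 }],
        // Guard visual stability.
        "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
        // Lab proxy for responsiveness (INP itself needs field data).
        "total-blocking-time": ["error", { maxNumericValue: 300 }],
        // Rough JavaScript weight budget.
        "resource-summary:script:size": ["warn", { maxNumericValue: 350000 }],
      },
    },
    upload: { target: "temporary-public-storage" },
  },
};
```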
Immediately after launch, compare real user metrics against the baseline, monitor error rates and console errors, check Search Console for crawl errors or indexing issues, and validate that third-party integrations are firing correctly.
Over the following weeks, CrUX data will begin reflecting the new experience and search rankings may fluctuate (monitor closely); compare conversion metrics against the pre-migration baseline and gather qualitative feedback from users and internal teams.
Longer term, establish new baselines for continuous monitoring, set up alerts for performance regressions, schedule regular audits of third-party script impact, and document lessons learned for future migrations.
For a front-end tech stack migration on existing backend APIs, here's the complete collection list:
Migrations fail when performance is treated as an afterthought. The teams that succeed are those that treat speed as a first-class requirement from day one - with clear baselines, explicit budgets, and contractual protections when vendors are involved.
If a product seems slow during demos, it will almost certainly be worse in production. If performance metrics aren't in your migration success criteria, add them now. And if you're working with external vendors on any part of the migration, ensure your contracts include measurable performance SLAs with remediation timelines and termination rights.
The cost of getting this wrong isn't just slower pages: it's lost revenue, damaged SEO rankings, frustrated users, and expensive remediation projects down the line.
Need help establishing performance baselines or monitoring your migration? Get in touch - we'd love to help.