
The Ultimate Guide to Managing Web Performance During Migrations

December 10, 2025


Migrations are high-stakes moments for web performance. Whether you're moving to a new front-end framework, switching backends, or undertaking a full platform overhaul, the risk of performance regression is significant - and the cost of getting it wrong can be enormous.

This guide covers what metrics to collect before, during, and after a migration, how to establish meaningful baselines, and how to protect your business from performance degradation.

Why Performance During Migrations Matters

Every second of additional page load time costs you money. Amazon famously found that every 100ms of added latency cost roughly 1% in sales - around 10% per second. During a migration, it's easy to focus on feature parity, timelines, and functionality while overlooking the one thing that affects every single user interaction: speed.

We've seen migrations that looked successful on paper - all features working, all tests passing - only to discover weeks later that conversion rates had dropped, bounce rates had climbed, and the "successful" migration was quietly bleeding revenue.

Pre-Migration: Establishing Your Baseline

Before touching any code, you need a comprehensive performance baseline. This isn't just about having numbers to compare against - it's about understanding why your current site performs the way it does.

Core Web Vitals (Field Data)

Start with what Google already collects via the Chrome User Experience Report (CrUX):

  • LCP (Largest Contentful Paint) - How quickly your main content loads
  • INP (Interaction to Next Paint) - How responsive your site feels to user input
  • CLS (Cumulative Layout Shift) - Visual stability during load

CrUX data gives you the 75th percentile of real user experiences, which is what Google uses for search ranking signals. However, CrUX alone isn't granular enough for migration work.
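
As a starting point, the CrUX API can be queried directly to capture that 75th-percentile baseline. The sketch below is illustrative: the API key, target origin, and the wrapper function names are placeholders you would supply, but the response shape (`record.metrics.<metric>.percentiles.p75`) matches the public CrUX API.

```javascript
// Pure helper: extract p75 Core Web Vitals from a CrUX API response body.
function extractP75(cruxResponse) {
  const metrics = cruxResponse.record.metrics;
  const p75 = (name) => metrics[name] && metrics[name].percentiles.p75;
  return {
    lcp: p75('largest_contentful_paint'),   // milliseconds
    inp: p75('interaction_to_next_paint'),  // milliseconds
    cls: p75('cumulative_layout_shift'),    // unitless (a string in the API)
  };
}

// Hypothetical usage - requires a real API key, not run here:
async function fetchCruxBaseline(origin, apiKey) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin }),
    }
  );
  return extractP75(await res.json());
}
```

Run this weekly in the months before migration so you have a trend line, not a single snapshot.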

Extended Performance Metrics

For a front-end migration (especially on existing backend APIs), collect these additional metrics:

  • TTFB (Time to First Byte) - Critical for understanding server/API response times. If you're keeping the same backend, TTFB should remain stable. Any regression here signals a problem with how the new front-end is requesting data.
  • FCP (First Contentful Paint) - The moment something first appears on screen. This is particularly important for perceived performance and is often where front-end framework choices show their impact.
  • DCL (DOMContentLoaded) - When the initial HTML document has been completely loaded and parsed. Useful for understanding how your JavaScript execution is affecting the critical path.
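
These three timings can all be read from the Navigation and Paint Timing APIs in the browser. A minimal sketch, with the calculation kept in a pure helper so it works on any navigation-timing-shaped object:

```javascript
// Compute TTFB, FCP and DCL from timing entries.
function timingSummary(nav, fcpEntry) {
  return {
    // TTFB: navigation start to first response byte
    ttfb: nav.responseStart - nav.startTime,
    // DCL: DOMContentLoaded relative to navigation start
    dcl: nav.domContentLoadedEventEnd - nav.startTime,
    // FCP: startTime of the 'first-contentful-paint' paint entry, if present
    fcp: fcpEntry ? fcpEntry.startTime : null,
  };
}

// Browser-only collection (guarded so the helper stays testable elsewhere):
if (typeof window !== 'undefined') {
  const [nav] = performance.getEntriesByType('navigation');
  const [fcp] = performance.getEntriesByName('first-contentful-paint');
  console.log(timingSummary(nav, fcp));
}
```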

Asset-Level Metrics

Dig deeper into what's actually being delivered to users:

  • HTML document size - Bloated HTML (often from server-side rendering or excessive inline data) can significantly impact TTFB and parsing time.
  • CSS payload - Total size and, critically, how much is render-blocking. A new framework might ship significantly more CSS than your current solution.
  • First-party JavaScript - Bundle sizes for your own code. This is where framework migrations often cause the biggest regressions.
  • Initial image payload - What images load before user interaction? Are they appropriately sized and formatted?
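
The asset-level breakdown above can be pulled from the Resource Timing API. The grouping rule below (by file extension) is a simplification - real asset URLs won't always carry an extension - but it's enough for a before/after comparison:

```javascript
// Total transferred bytes per asset type from Resource Timing entries.
function bytesByType(entries) {
  const typeOf = (url) => {
    if (/\.css(\?|$)/.test(url)) return 'css';
    if (/\.m?js(\?|$)/.test(url)) return 'js';
    if (/\.(png|jpe?g|gif|webp|avif|svg)(\?|$)/.test(url)) return 'image';
    return 'other';
  };
  const totals = { css: 0, js: 0, image: 0, other: 0 };
  for (const e of entries) {
    totals[typeOf(e.name)] += e.transferSize || 0;
  }
  return totals;
}

// Browser-only usage:
if (typeof window !== 'undefined') {
  console.log(bytesByType(performance.getEntriesByType('resource')));
}
```

Note that `transferSize` is zero for cross-origin resources unless they send a `Timing-Allow-Origin` header, so third-party payloads may be undercounted.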

Interaction & Animation Performance

  • Long Animation Frames (LoAF) - Identify JavaScript that blocks the main thread for extended periods. Pay special attention to libraries being replaced (e.g., moving from Slick carousel to CSS scroll-snap) to ensure the new solution actually performs better.
  • Third-party script impact - Document the current impact of analytics, marketing tags, chat widgets, etc. These need to be re-integrated in the new platform, and it's easy for them to become more impactful during migration.
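
Long Animation Frames can be observed directly; `long-animation-frame` entries expose both `duration` and `blockingDuration`. A minimal sketch - the summary shape is our own choice, and script attribution (also available on these entries) is omitted for brevity:

```javascript
// Summarise frames that blocked the main thread.
function summariseLoAF(frames) {
  const blocking = frames.filter((f) => f.blockingDuration > 0);
  return {
    count: blocking.length,
    totalBlockingMs: blocking.reduce((sum, f) => sum + f.blockingDuration, 0),
    worstFrameMs: blocking.reduce((max, f) => Math.max(max, f.duration), 0),
  };
}

// Browser-only observer:
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    console.log(summariseLoAF(list.getEntries()));
  }).observe({ type: 'long-animation-frame', buffered: true });
}
```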

SEO Baseline

Performance migrations can have devastating SEO consequences if not monitored properly:

  • Indexed page count - How many pages does Google currently have indexed?
  • Ranking positions - Track your key terms before migration
  • Core Web Vitals in Search Console - Your current pass/fail status by page type
  • Meta data completeness - Title tags, descriptions, canonical tags, structured data

A common migration failure is omitting critical meta data that was previously handled by the old system. Audit this thoroughly before launch.

Before Go-Live: Testing Across Devices and Network Conditions

This is where many migrations fall apart. The development team tests on fast MacBooks with fibre connections, everything looks great, and then real users on mid-range Android phones with spotty 4G connections have a completely different experience.

Before you go live, you need to systematically test across the range of devices and network conditions your actual users experience.

Why This Matters

Your development machine is not representative of your users. A typical developer setup might have an 8-core CPU with high single-thread performance, 16-32GB of RAM, SSD storage, Gigabit Ethernet or fast WiFi, and the latest browser version.

Meanwhile, a significant portion of your users are on budget or mid-range smartphones (2-4 year old devices), 4G connections with variable latency, shared network bandwidth, older browser versions, and limited device memory.

A page that renders in 1 second on your machine might take 8+ seconds on a real user's device. If you're not testing for this, you're flying blind.

Establishing a Device/Network Baseline

Before you can test the new platform properly, you need to know what conditions to test against. Pull this data from your analytics:

  • Device breakdown: What percentage of users are on mobile vs desktop vs tablet? What are the most common device models? What's the distribution of device age/capability?
  • Network conditions: What's the effective connection type distribution (4G, 3G, slow 2G)? What are the typical RTT (round-trip time) values? What are the typical download speeds?
  • Geographic distribution: Where are your users located? Are there regions with notably worse connectivity?

Our RUM platform provides this breakdown automatically, showing you exactly what devices and network conditions your real users experience - segmented by Core Web Vitals performance. This tells you not just what devices people use, but which ones are struggling.

If you don't have RUM data, Google Analytics provides device categories and some network data. CrUX data also includes form factor breakdowns.

Chrome DevTools Throttling (Minimum Viable Testing)

At an absolute minimum, every page should be tested with Chrome DevTools throttling enabled. This isn't perfect emulation, but it catches the most egregious issues.

To enable throttling: Open DevTools (F12 or Cmd+Option+I), go to the Performance tab or Network tab, and click the throttling dropdown (shows "No throttling" by default).

Recommended test profiles:

  • Fast 4G: 4x CPU slowdown, 9 Mbps down, 1.5 Mbps up, 60ms RTT - Baseline mobile test
  • Slow 4G: 6x CPU slowdown, 1.6 Mbps down, 750 Kbps up, 150ms RTT - Stress test for poor connections
  • Low-end mobile: 6x CPU slowdown, Fast 4G network - Budget Android device simulation

Important: CPU throttling and network throttling are configured in different places in DevTools. For realistic mobile testing, you need both: CPU throttling in Performance tab > Gear icon > CPU throttling, and Network throttling in Network tab > Throttling dropdown.

What to Test

For each critical page type (homepage, product page, checkout, etc.), test with throttling enabled and record: LCP time, time to interactive, INP/responsiveness, visual stability, and JavaScript errors.

Compare these results against your baseline measurements from the current production site under the same throttled conditions. If the new platform is significantly slower under throttling, you have a problem that won't show up in your fast-connection testing.
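
That before/after comparison is easy to automate. A minimal sketch - the metric names and the 10% tolerance are illustrative choices, not a standard:

```javascript
// Flag metrics where the new platform regresses past a tolerance.
// Assumes higher values are worse (true for the timing metrics here).
function findRegressions(baseline, candidate, tolerance = 0.10) {
  const regressions = [];
  for (const [metric, baseValue] of Object.entries(baseline)) {
    const newValue = candidate[metric];
    if (newValue > baseValue * (1 + tolerance)) {
      regressions.push({ metric, baseValue, newValue });
    }
  }
  return regressions;
}
```

For example, comparing `{ lcp: 2500, inp: 200 }` against `{ lcp: 3200, inp: 210 }` flags LCP (a 28% regression) but not INP, which is within tolerance.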

Beyond DevTools: Real Device Testing

DevTools throttling is a useful approximation, but it has limitations. For critical migrations, consider real device testing (keep a few older Android devices for manual testing) and cloud device labs like BrowserStack or LambdaTest.

Make Sure Staging Actually Represents Production

This is a silent killer of migration testing. Your staging environment might look identical to production, but subtle infrastructure differences can mask serious performance problems that only appear after go-live.

Common staging vs production mismatches:

  • Database proximity: On production, your app server might be in a different data centre or region from your database, adding 20-50ms to every query. On staging, they're often on the same server or in the same rack.
  • Data volume: Staging databases typically have a fraction of production data. A product listing query that returns in 50ms against 10,000 products might take 500ms against 2 million products.
  • Caching state: Production has warm caches from real traffic. Staging caches are often cold or sparsely populated.
  • CDN configuration: Staging might bypass the CDN entirely, or use a different CDN configuration.
  • Third-party services: Some third-party scripts behave differently in staging environments.
  • Traffic load: Staging has no concurrent users competing for resources.

What to do about it: Document the known differences between staging and production infrastructure. For critical performance testing, consider testing against a production replica with realistic data volumes. If possible, run performance tests against production during low-traffic periods. Add latency simulation to staging database connections. Test with production-like data volumes.

The goal is to eliminate surprises. Every difference between staging and production is a potential source of performance issues that won't be caught until real users are affected.

Creating a Pre-Launch Checklist

Before go-live, ensure you've tested each critical page: Desktop with no throttling (developer baseline), Desktop with slow network (Fast 4G), Mobile emulation with no throttling, Mobile emulation with Fast 4G network + 4x CPU slowdown, Mobile emulation with Slow 4G network + 6x CPU slowdown, and Real device if available.

Document the results and compare against the same tests run on the current production site. Any significant regression should be investigated and resolved before launch.

Common Issues Revealed by Throttled Testing

Massive JavaScript bundles (acceptable on fast connections, crippling on slow ones), unoptimized images (large hero images that block LCP), render-blocking resources (CSS or JS that delays first paint), excessive API calls (multiple sequential requests that compound latency), heavy third-party scripts (analytics and marketing tags that compete for bandwidth), and missing loading states (UI that appears broken while waiting for data).

These issues are often invisible during normal development but become painfully obvious under throttled conditions.

During Migration: What to Monitor

Staged Rollout Monitoring

If possible, run the old and new systems in parallel (A/B or canary deployment) and compare: Real user metrics between cohorts, conversion rates by platform version, error rates and JavaScript exceptions, and API response times.

Real User Monitoring (RUM)

Synthetic tests are useful, but nothing beats real user data. Our RUM platform is specifically built for this - focusing on Core Web Vitals and third-party script impact, which are the two areas most likely to regress during migrations. You'll see exactly how your new platform performs across different devices, connection speeds, and geographies, with granular visibility into which third-party scripts are affecting LCP, INP, and CLS.

This is particularly valuable during staged rollouts, where you can compare real user metrics between old and new versions in real-time.

Synthetic Testing

Run consistent synthetic tests against both versions from the same locations: WebPageTest for detailed waterfall analysis, Lighthouse CI for automated regression detection, and custom scripts for critical user journeys.

Performance Budgets

Set explicit thresholds that will block deployment if exceeded: JavaScript bundle size limits, LCP targets by page type, and INP thresholds based on current baseline.
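
In CI, a budget check can be as simple as comparing measured values against the thresholds and failing the build on any breach. A sketch - the budget names and numbers below are illustrative, and in practice the measured values would come from a Lighthouse run or a bundle-size report:

```javascript
// Minimal budget gate: returns pass/fail plus human-readable failures.
function checkBudgets(budgets, measured) {
  const failures = Object.entries(budgets)
    .filter(([name, limit]) => measured[name] > limit)
    .map(([name, limit]) => `${name}: ${measured[name]} exceeds budget ${limit}`);
  return { passed: failures.length === 0, failures };
}

// Example: one budget (JS bundle size) exceeded.
const result = checkBudgets(
  { jsBundleKb: 300, lcpMs: 2500, inpMs: 200 },
  { jsBundleKb: 340, lcpMs: 2400, inpMs: 180 }
);
if (!result.passed) {
  console.error(result.failures.join('\n'));
  // process.exit(1); // in CI, fail the pipeline here
}
```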

Post-Migration: Validating Success

Immediate (First 48 Hours)

Compare real user metrics against baseline, monitor error rates and console errors, check Search Console for crawl errors or indexing issues, and validate third-party integrations are firing correctly.

Short-term (First 2-4 Weeks)

CrUX data will begin reflecting the new experience, search rankings may fluctuate (monitor closely), compare conversion metrics against pre-migration baseline, and gather qualitative feedback from users and internal teams.

Ongoing

Establish new baselines for continuous monitoring, set up alerts for performance regressions, schedule regular audits of third-party script impact, and document lessons learned for future migrations.

The Metrics Checklist

For a front-end tech stack migration on existing backend APIs, here's the complete collection list:

  • Core Web Vitals (Field): LCP (p75 from CrUX), INP (p75 from CrUX), CLS (p75 from CrUX)
  • Extended Timing Metrics: TTFB, FCP, DCL (DOMContentLoaded)
  • Asset Analysis: HTML document size, Total CSS size, Render-blocking CSS size, First-party JS bundle size, Third-party JS size, Initial image payload
  • Runtime Performance: Long Animation Frames data, Main thread blocking time, Library-specific performance (carousels, modals, etc.)
  • SEO Health: Indexed page count, Key term rankings, Meta data audit (titles, descriptions, canonicals), Structured data validation

Final Thoughts

Migrations fail when performance is treated as an afterthought. The teams that succeed are those that treat speed as a first-class requirement from day one - with clear baselines, explicit budgets, and contractual protections when vendors are involved.

If a product seems slow during demos, it will almost certainly be worse in production. If performance metrics aren't in your migration success criteria, add them now. And if you're working with external vendors on any part of the migration, ensure your contracts include measurable performance SLAs with remediation timelines and termination rights.

The cost of getting this wrong isn't just slower pages: it's lost revenue, damaged SEO rankings, frustrated users, and expensive remediation projects down the line.

Need help establishing performance baselines or monitoring your migration? Get in touch - we'd love to help.
