Why Most Go-Live Analysis Fails

Here's how go-live analysis typically gets done: someone puts a date in a spreadsheet, an analyst manually filters data before and after that date, and a report gets sent to leadership comparing two averages. Done.

The problem is that this approach is fragile in three ways. First, the "official" go-live date is almost never when the new process actually took hold in the data — there's usually a ramp-up period, parallel runs, or delayed adoption. Second, manual date filtering means the analysis can't be replicated when the data updates. Third, when you have dozens of departments, product lines, or sub-processes, manually assigning go-live dates doesn't scale.

"The go-live date is when someone flipped a switch. The inflection point is when behavior actually changed. These are almost never the same day."

The approach in this article solves all three problems. We let the data tell us when something changed — by detecting the inflection point in transaction volume automatically — and then we build the comparison on top of that detected date.

The Dataset

We're working with a sales dataset: daily transactions across multiple departments and sub-departments, spanning a period before and after a new process was rolled out. The exact domain doesn't matter — this works equally well for lab turnaround times, order processing, patient volumes, or inventory movement.

The data structure looks like this:

sample_data.csv
date,department,sub_department,transaction_id,value
2024-01-03,Sales,Inside Sales,TXN-0001,142.50
2024-01-03,Sales,Inside Sales,TXN-0002,89.00
2024-01-03,Operations,Fulfillment,TXN-0003,215.75
2024-01-04,Sales,Field Sales,TXN-0004,320.00
...

That's it. No manual "pre" or "post" flags. No go-live date column. We're going to derive that ourselves.
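Loading that structure into pandas is one line; the only thing worth doing up front is parsing the date column, since every rolling-window step later depends on it. A minimal sketch (the inline CSV stands in for the `sample_data.csv` file above):

```python
import io
import pandas as pd

csv = """date,department,sub_department,transaction_id,value
2024-01-03,Sales,Inside Sales,TXN-0001,142.50
2024-01-03,Sales,Inside Sales,TXN-0002,89.00
"""

# In practice: data = pd.read_csv('sample_data.csv', parse_dates=['date'])
data = pd.read_csv(io.StringIO(csv), parse_dates=['date'])
print(data.dtypes['date'])  # datetime64[ns]
```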

Step 1 — Detect the Go-Live Date Automatically

The core insight: when a new process rolls out, transaction volume for the affected departments typically shows a structural shift. We find that shift using a simple but effective method — we look for the largest change in rolling 7-day average volume across the time series.

  1. Compute rolling 7-day average volume by sub-department

    Smooth out day-of-week effects and one-off spikes before looking for structural changes.

  2. Calculate the day-over-day change in that rolling average

    The go-live date candidate is the point of maximum positive shift in rolling volume.

  3. Apply a minimum volume threshold

    Ignore periods with fewer than 10 transactions — these are noise, not signals.

  4. Set a 7-day buffer on each side

    Exclude the transition week from both pre and post windows to avoid contamination.

Python · Go-Live Detection
def detect_golive_date(data, dept, sub_dept, min_volume=10):
    """
    Detect the go-live date by finding the largest structural
    shift in rolling 7-day transaction volume.
    """
    subset = data[
        (data['department'] == dept) &
        (data['sub_department'] == sub_dept)
    ].copy()

    # Daily volume
    daily = subset.groupby('date').size().reset_index(name='volume')
    daily = daily.sort_values('date').reset_index(drop=True)
    daily['rolling_avg'] = daily['volume'].rolling(7, min_periods=3).mean()
    daily['rolling_change'] = daily['rolling_avg'].diff()

    # Filter to periods with meaningful volume and a defined change
    # (the first rolling value has no diff, so drop NaNs before idxmax)
    candidates = daily[daily['rolling_avg'] >= min_volume]
    candidates = candidates.dropna(subset=['rolling_change'])

    if len(candidates) == 0:
        return None

    # The go-live date is the largest positive shift
    golive_idx = candidates['rolling_change'].idxmax()
    golive_date = daily.loc[golive_idx, 'date']

    return golive_date
💡 Why rolling 7-day? Most business data has strong day-of-week patterns. A Monday might naturally have 3× the volume of a Sunday. Rolling averages smooth this out so we're detecting real trend shifts, not just "it's Monday."

Step 2 — Build the Pre/Post Windows

Once we have a detected go-live date for each sub-department, we assign records to Pre or Post windows. The key rule: we exclude a buffer period around the go-live date because that transition window is messy — systems are in flux, staff are learning, and data quality is often inconsistent.

Python · Period Assignment
import pandas as pd

def assign_period(row, golive_map, buffer_days=7):
    key = (row['department'], row['sub_department'])
    golive = golive_map.get(key)

    if golive is None:
        return 'Unknown'

    buffer = pd.Timedelta(days=buffer_days)
    record_date = row['date']

    if record_date < (golive - buffer):
        return 'Pre-GoLive'
    elif record_date > (golive + buffer):
        return 'Post-GoLive'
    else:
        return 'Transition'  # excluded from analysis
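Wiring this up is a one-line `apply` over the transaction frame. A sketch with a toy three-row frame and a hypothetical go-live of 2024-02-01 (the function is repeated here only so the snippet runs standalone):

```python
import pandas as pd

def assign_period(row, golive_map, buffer_days=7):
    # Same logic as above, repeated so this snippet is self-contained
    golive = golive_map.get((row['department'], row['sub_department']))
    if golive is None:
        return 'Unknown'
    buffer = pd.Timedelta(days=buffer_days)
    if row['date'] < (golive - buffer):
        return 'Pre-GoLive'
    elif row['date'] > (golive + buffer):
        return 'Post-GoLive'
    return 'Transition'

df = pd.DataFrame({
    'date': pd.to_datetime(['2024-01-10', '2024-02-03', '2024-02-20']),
    'department': ['Sales'] * 3,
    'sub_department': ['Inside Sales'] * 3,
})
golive_map = {('Sales', 'Inside Sales'): pd.Timestamp('2024-02-01')}

df['period'] = df.apply(assign_period, axis=1, golive_map=golive_map)
print(df['period'].tolist())  # ['Pre-GoLive', 'Transition', 'Post-GoLive']
```

Note how the record seven days either side of go-live lands in `Transition` and drops out of the comparison.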

Step 3 — Measure the Impact

With Pre and Post windows cleanly assigned, the comparison metrics write themselves. For each department and sub-department, we compute mean value, volume, and the pre/post delta — and we only report groups with enough records on both sides to be statistically meaningful.
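A minimal version of that comparison, assuming a frame carrying the `period` column from Step 2 (the data and column names here are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'sub_department': ['Inside Sales'] * 6,
    'period': ['Pre-GoLive'] * 3 + ['Post-GoLive'] * 3,
    'value': [100.0, 110.0, 90.0, 120.0, 130.0, 110.0],
})

# Mean value and record count per group, Transition/Unknown excluded
summary = (
    df[df['period'].isin(['Pre-GoLive', 'Post-GoLive'])]
    .groupby(['sub_department', 'period'])['value']
    .agg(mean_value='mean', volume='count')
    .unstack('period')
)

# Relative change in mean transaction value, post vs. pre
delta = (
    summary[('mean_value', 'Post-GoLive')] - summary[('mean_value', 'Pre-GoLive')]
) / summary[('mean_value', 'Pre-GoLive')]
print(delta)  # Inside Sales: +0.20, i.e. a 20% lift
```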

Example Output — Avg Transaction Value by Sub-Department

[Chart: Pre Go-Live vs. Post Go-Live average transaction value for Inside Sales, Field Sales, E-Commerce, Fulfillment, and Returns]

Headline metrics from the sample report:

  Avg Processing Time: −18% (improved)
  Volume Handled: +31% (increased)
  Mean Cycle Time: 4.2 days faster

Step 4 — Generate the HTML Report

The final output is a self-contained interactive HTML file you can email to stakeholders. No Power BI license required, no Tableau Server, no sharing permissions to configure. Just a file.

The report is structured hierarchically: Department → Sub-Department → Metric trend. Each level is collapsible, and every chart includes a statistical control chart overlay (mean ± 2σ) so you can see not just whether the average changed but whether the process stabilized after go-live.
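The control-chart overlay is nothing exotic: a center line at the mean and limits at ±2σ, typically computed from the pre-period so you can see whether post-go-live points settle inside the old process limits. A sketch of the arithmetic (the series is illustrative):

```python
import pandas as pd

# Pre-period daily metric values (illustrative)
daily = pd.Series([98.0, 102.0, 101.0, 99.0, 100.0, 97.0, 103.0])

center = daily.mean()
sigma = daily.std(ddof=1)  # sample standard deviation
upper, lower = center + 2 * sigma, center - 2 * sigma
print(round(center, 1), round(lower, 1), round(upper, 1))
```

Points outside `[lower, upper]` after go-live suggest the process hasn't stabilized yet, even if the average moved in the right direction.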

The Go-Live line is drawn automatically. The vertical dashed line on every trend chart is placed at the detected go-live date — not a hardcoded value. If you update the data and re-run the notebook, it recalculates.

What You Get

This analysis comes in three tiers, depending on how deep you want to go.

Free

$0

The Google Sheets template with pre-populated fake data so you can see the structure and paste in your own. Perfect for understanding the approach before building anything.

Get the Template

Live Demo

See It First

View a fully interactive sample report generated from realistic fake data — exactly what you'll get when you run the notebook on your own data.

View Sample Report →

Done For You

$5k

I build it for you — connected to your actual data source, customized for your departments and metrics, with a walkthrough so your team can maintain it going forward.

Book a Free 30-Min Call →
⚠️ Not sure which tier is right for you? Start with the free template. If you find yourself wanting to automate the go-live detection and build the HTML output, the notebook pays for itself the first time you use it. If you'd rather not deal with any of it — that's what the $5k option is for.

Wrapping Up

Go-live analysis doesn't have to be a one-time PowerPoint someone puts together the week after a launch. Built right, it's a living analysis — re-run it monthly, hand it to your team, let it update automatically as new data comes in.

The key is removing the human bottlenecks: no manually assigned dates, no copy-pasting into Excel, no waiting on your BI team's sprint cycle. The notebook does the analysis; you do the thinking.

Grab the free template below and see if the structure fits your data. If it does, the notebook will get you to a shareable report in under an hour.

Free Template

Get the Google Sheets template free.

Drop your email and I'll send you straight to the template. No waiting, no spam — just the file.