ArcOS
Web · CMS · Analytics · Content

A Content Platform Built to Be Understood

How a content-heavy web product gained full editorial control through a headless CMS and used detailed behavioural analytics to make every decision with evidence.

August 2025

The client had a web product full of content their team couldn't edit without a developer, and user behaviour data they couldn't act on because they couldn't see it. Fixing both problems changed how the entire organisation made decisions.

The situation

The client ran a content-driven web platform — a product where the value was largely in what was on the page: articles, guides, landing pages, promotional content, and structured product information. The engineering team had built it well. The content, however, was locked inside the codebase.

Updating a headline, restructuring a landing page, or adding a new content section meant a developer, a pull request, a review, and a deployment. For a team whose competitive advantage was the speed at which they could respond to market changes, this was a significant constraint.

The second problem was less visible but equally damaging. The team had basic traffic analytics — page views, sessions, bounce rate — but no understanding of what users actually did on the page. Which sections did they read? Where did they stop scrolling? What did they click before they converted, and what did they click before they left? The data existed in aggregate but carried no signal.

The brief was to solve both. Give the editorial team full control of the content without touching the codebase. And give the product team the visibility they needed to understand user behaviour in enough detail to improve it.


Decoupling content from code

The first step was separating the content layer from the application layer.

The existing architecture had content embedded in the frontend — hardcoded strings, static assets, and page structures defined in component files. This was not unusual for a product that had grown organically, but it meant content and code were entangled in a way that made both harder to change.

The new architecture introduced a headless CMS as the content layer. All editable content — page copy, media, structured data, navigation, and configuration — was moved into the CMS. The frontend became a consumer of that content rather than a container for it.

The CMS was modelled carefully. Content types were designed to reflect the actual editorial workflow, not the technical structure of the frontend. An editor creating a new article thought about sections, media, metadata, and audience — not components or props. The content model was the language of the editorial team, and the technical implementation translated that language into the rendered product.
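To make this concrete, here is a minimal sketch of what such a content model might look like, expressed in the editorial team's vocabulary rather than in frontend components. The type names and fields are illustrative, not the client's actual schema.

```typescript
// Hypothetical content model: an editor thinks in sections, media,
// metadata, and audience — never in components or props.
type Section =
  | { kind: "richText"; body: string }
  | { kind: "media"; url: string; alt: string }
  | { kind: "callout"; heading: string; body: string };

interface Article {
  slug: string;
  title: string;
  audience: "prospect" | "customer" | "internal";
  sections: Section[];
  publishAt?: string; // ISO timestamp: scheduled publishing, no deployment
}

// The frontend translates editorial language into a render plan;
// this stub just lists the section kinds in order.
function renderPlan(article: Article): string[] {
  return article.sections.map((s) => s.kind);
}
```

The point of the sketch is the direction of translation: the technical layer adapts to the editorial model, not the other way around.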

What the editorial team could now do

After the migration, the editorial team could:

  • Publish new pages and articles without any engineering involvement
  • Restructure landing pages by reordering, adding, or removing content sections
  • Schedule content to go live at a specific time without a deployment
  • Maintain multiple drafts and preview them in context before publishing
  • Run localised content for different markets from a single interface

In the first month after launch, the editorial team published more new content than they had in the previous quarter. Not because they were working harder — because they no longer had to wait.


Understanding what users actually do

The analytics integration was designed from the ground up to answer specific questions the team had, rather than to collect as much data as possible.

Before any tracking was implemented, the product team ran a structured session to identify the decisions they needed to make. Which parts of the page were worth investing in? Where was the funnel losing users? What content correlated with conversion, and what content correlated with drop-off?

These questions shaped the instrumentation. Every event tracked had a named owner and a specific question it was designed to answer. Generic "track everything" instrumentation produces noise. Purposeful instrumentation produces signal.
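One way to enforce that discipline is to declare every event alongside its owner and the question it answers, and reject anything that lacks either. A minimal sketch, with illustrative names:

```typescript
// Hypothetical event registry: instrumentation is only valid if someone
// owns it and it informs a specific decision.
interface EventSpec {
  name: string;
  owner: string;        // the team that acts on this data
  question: string;     // the decision the event informs
  properties: string[]; // payload fields
}

const registry: EventSpec[] = [
  {
    name: "scroll_milestone",
    owner: "content-team",
    question: "Where do readers stop on long-form pages?",
    properties: ["pageType", "milestone", "device"],
  },
];

// Reject instrumentation nobody owns or nobody can act on.
function validateSpec(spec: EventSpec): boolean {
  return spec.owner.length > 0 && spec.question.length > 0;
}
```

A check like this can run in CI, so unowned "track everything" events never reach production.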

Behavioural events

Tracking was layered across several levels:

Scroll depth was measured on every content page. Not as a single percentage, but as milestone events — 25%, 50%, 75%, 100% of page depth — so the team could see where readers were stopping and how this varied by content type, traffic source, and device.
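The milestone logic can be sketched as a pure function: given the current scroll fraction and the milestones already fired, return only the newly crossed ones, so each fires exactly once per page view. Thresholds match the text; the wiring is an assumption.

```typescript
// Milestone-based scroll tracking: emit 25/50/75/100 events once each.
const MILESTONES = [25, 50, 75, 100];

function newMilestones(fraction: number, alreadyFired: Set<number>): number[] {
  const reached = Math.round(fraction * 100);
  const fired: number[] = [];
  for (const m of MILESTONES) {
    if (reached >= m && !alreadyFired.has(m)) {
      alreadyFired.add(m); // never emit the same milestone twice
      fired.push(m);
    }
  }
  return fired;
}
```

In the browser this would be driven by a throttled scroll listener that computes `fraction` from `scrollY` and the document height, sending one analytics event per returned milestone.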

Section engagement tracked which content sections users interacted with — clicks, hovers on interactive elements, time in viewport for each section. This gave the editorial team something they had never had before: evidence about which content was being read and which was being skipped.
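Time-in-viewport per section reduces to simple bookkeeping over enter/exit events. In the browser an `IntersectionObserver` would supply those events; the accumulator below shows the logic in isolation, with illustrative names.

```typescript
// Per-section dwell-time accounting fed by visibility changes.
interface SectionTimer {
  enteredAt: number | null; // timestamp when the section became visible
  totalMs: number;          // accumulated time in viewport
}

function onVisibilityChange(
  timers: Map<string, SectionTimer>,
  sectionId: string,
  visible: boolean,
  now: number
): void {
  const t = timers.get(sectionId) ?? { enteredAt: null, totalMs: 0 };
  if (visible && t.enteredAt === null) {
    t.enteredAt = now;                // section scrolled into view
  } else if (!visible && t.enteredAt !== null) {
    t.totalMs += now - t.enteredAt;   // accumulate dwell time on exit
    t.enteredAt = null;
  }
  timers.set(sectionId, t);
}
```

Summed dwell times per section, broken down by content type, are what gave the editorial team evidence of what was read versus skipped.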

Funnel events tracked the specific sequence of actions users took before converting or leaving. The team discovered that users who interacted with a particular type of content early in a session were significantly more likely to convert. This finding changed how that content was placed — it moved from a secondary section to a primary one. Conversion improved.

Rage clicks and dead clicks identified places where users were clicking on elements that weren't interactive, or repeatedly clicking in frustration. Several of these indicated genuine UX problems — affordances that looked clickable but weren't, or interactive elements that weren't responding fast enough on mobile. Each finding led to a fix.
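Rage-click detection is typically a sliding-window check: several clicks on the same target within a short interval. The thresholds below are illustrative, not the client's actual values; dead-click detection additionally requires checking the clicked element against the DOM for interactivity, which is omitted here.

```typescript
// Rage-click heuristic: N+ clicks on one target within a short window.
interface Click {
  target: string; // e.g. a CSS selector for the clicked element
  at: number;     // ms timestamp
}

function isRageClick(clicks: Click[], windowMs = 1000, threshold = 3): boolean {
  if (clicks.length < threshold) return false;
  const recent = clicks.slice(-threshold);
  const sameTarget = recent.every((c) => c.target === recent[0].target);
  const withinWindow = recent[recent.length - 1].at - recent[0].at <= windowMs;
  return sameTarget && withinWindow;
}
```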

From data to decisions

The analytics data was surfaced in a dashboard the product team used in their weekly review. Every content change was accompanied by a before/after comparison. Experiments were structured with a clear hypothesis, a defined measurement period, and an explicit success criterion before they launched.

Over six months, the team made seventeen significant content and UX changes driven by analytics evidence. Fourteen of them improved the measured metric. The three that didn't were reverted quickly, because the measurement infrastructure made it easy to see when something wasn't working.


The technical foundation

The architecture that made this possible had a few key properties.

The frontend was statically generated where possible, with selective dynamic rendering where necessary. This meant fast load times by default, which mattered both for user experience and for the accuracy of the analytics — slow pages produce distorted behaviour data as users leave before content loads.

Content was fetched at build time for stable pages and at request time for personalised or frequently updated content. The CMS published a webhook on content changes, triggering a targeted rebuild of affected pages rather than a full site rebuild. Editorial changes were live in under two minutes.
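The targeted-rebuild step amounts to mapping a webhook payload to the set of affected pages. A minimal sketch, assuming the CMS sends the changed entry's type and slug; the dependency map and path scheme are hypothetical:

```typescript
// Webhook payload from the CMS on a content change (assumed shape).
interface WebhookPayload {
  contentType: string;
  slug: string;
}

// Pages beyond the entry's own that embed this content type
// (e.g. listing pages showing article teasers).
const dependents: Record<string, string[]> = {
  article: ["/articles", "/"],
  landingPage: [],
};

function pagesToRebuild(p: WebhookPayload): string[] {
  const own =
    p.contentType === "article" ? `/articles/${p.slug}` : `/pages/${p.slug}`;
  return [own, ...(dependents[p.contentType] ?? [])];
}
```

A webhook handler would pass this list to the build system's selective-regeneration API, which is what keeps editorial changes live in under two minutes rather than waiting on a full site rebuild.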

The analytics layer was fully owned by the client. All behavioural data was collected, stored, and processed on infrastructure the client controlled. No third-party service had access to raw user behaviour data. This was a deliberate decision that simplified compliance, gave the team full access to their own data, and removed a dependency on external pricing and terms.


Outcomes

Twelve months after launch:

  • Editorial publishing frequency increased fourfold. The content team shipped content at the pace the business needed without engineering involvement.
  • Average page performance improved significantly. Static generation and disciplined asset management reduced median load times, which correlated with improved engagement metrics.
  • The product team made every significant UX decision backed by data. The phrase "I think users prefer..." disappeared from product discussions. It was replaced by measurement.
  • Three major content restructures were validated before full rollout by testing with a portion of traffic. The ability to test, measure, and commit — or revert — made the team more confident and more willing to experiment.
  • Zero content-related production incidents. The CMS preview environment caught every editorial error before it went live. Structured content types prevented malformed data from reaching the frontend.

What this kind of project requires

CMS integrations and analytics implementations both have a reputation for being straightforward projects that turn out not to be. The reason is usually the same: the technical work is the easy part. The hard part is the content modelling and the measurement strategy.

A content model that doesn't reflect how the editorial team thinks creates friction that never goes away. Instrumentation that collects data nobody knows how to act on creates noise that obscures the signal.

Both problems are solved before any code is written, by understanding the people who will use the system and the decisions they need to make. The technology is the implementation of that understanding — not the starting point.

Let's build something that lasts.

Tell us about your product and we'll be straightforward about what it takes.