Pauline · How-to · 6 min read

The fastest way to fail a Customer Insights project (and the checklist that prevents it)

Download included. Invite marketing directly, build one journey with real data, and use shadow sessions. This is the compressed playbook that prevents “technically live, practically dead.”

Please note: this content was accurate at the time of writing. Microsoft may have made changes in the meantime.

The fastest way to fail a Customer Insights project is to treat it like a tool rollout and invite marketing “when it’s ready”.

That approach creates a very predictable outcome: a technically correct setup that doesn’t match how marketing actually works. People then “don’t have time” to use it, create workarounds, or push everything back to agencies and Excel. Adoption becomes a training problem, the project turns into a blame game, and you’re stuck defending a platform no one asked for.

There’s a better way. And it’s not magic. I tried writing a complete checklist for this a while ago. It got ridiculously long. And of course, as every consultant does, I have to say: it depends, because no two Customer Insights projects are the same. So I compressed it on purpose and kept the parts that, in my experience, make the biggest difference.

You can download it here and read on if you want my reasoning behind the steps.

The mindset shift: stop collecting requirements, start creating shared reality

Most projects follow a familiar rhythm: requirements, design, build, test, train, go live.

Marketing automation projects don’t necessarily behave like that because marketing work is not static. It’s campaign cycles, approvals, content bottlenecks, last-minute changes, “we need this tomorrow”, and a hundred tiny decisions that never make it into a requirements document.

So the goal in the first weeks is not a perfect design but to understand:

  • what marketing actually does day to day
  • what data is truly available (and usable)
  • what integrations are real, not assumed
  • what friction points will kill momentum if you ignore them

And one more, because it’s the thing everyone underestimates: how the work moves through the organisation. Who writes content? Who approves it? How long does legal take? What happens when someone is on holiday?

Step 1: Bring marketing in on day one

Day one involvement doesn’t mean a formal requirements workshop where everyone nods politely and nothing changes.

Here’s a practical structure that can work well: start with one working session that focuses on how marketing gets work done, where they lose time, and what “good” looks like under pressure. You’re not capturing every wish. You’re identifying the few campaign types that actually matter, and the constraints you can’t ignore (approval chains, content lead times, compliance, segmentation habits, channel limitations).

A small move that makes a big difference: pick one marketing person or key user as your design partner. Not a stakeholder who attends steering committees. A real user who will build with you, challenge assumptions, and tell you when something is confusing.

Step 2: Build one small journey first with real content and real data

This is the point where many projects wobble, because everyone wants to build the “proper setup” first: data model, all integrations, a full segment strategy, a library of assets, and full-on campaigns.

In practice, you learn more by doing the opposite: Build one small journey end-to-end as early as possible. Not twenty segments. Not a “template framework.” One journey that uses real content and real data and can actually run.

You’re looking for a journey that is simple enough to finish quickly, but realistic enough to surface real problems. Something like a form submission follow-up, an event registration sequence, or a welcome flow.

When you do this early, the project stops being theoretical. And that’s when the useful problems show up:

  • content approvals take longer than the project plan
  • consent logic is unclear
  • identity matching behaves differently than expected
  • users create segments for everything because triggers feel risky
  • naming becomes chaos on day three
  • “simple changes” require three systems and two teams

These are exactly the problems you want to discover while you still have room to adjust.

Step 2.1: Before you build “segment strategy”, set one rule that prevents chaos

Most teams don’t create too many segments because they love segments. They do it because segments feel like control.

Triggers feel scary (“what if it fires wrong?”), and segments feel safe (“at least I can inspect it”). So people start building segments for everything. And after a few months you get a segment graveyard no one dares to clean up.

So I like to set a simple rule early: Segments are products, not project artifacts. Every segment needs an owner and a reason to exist.
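One lightweight way to make that rule concrete is a small segment register that flags entries with no owner or no stated purpose. This is purely an illustrative sketch, not a Customer Insights feature; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SegmentRecord:
    """One row in a hypothetical segment register."""
    name: str
    owner: str   # who answers for this segment
    reason: str  # why it exists; empty means "just in case"

def orphaned(register: list[SegmentRecord]) -> list[str]:
    """Return names of segments with no owner or no reason to exist."""
    return [s.name for s in register if not s.owner.strip() or not s.reason.strip()]

register = [
    SegmentRecord("seg-newsletter-optin", "Anna", "Feeds the welcome journey"),
    SegmentRecord("seg-campaign-backup", "", ""),  # classic graveyard candidate
]

print(orphaned(register))  # segments to review or retire
```

Even a spreadsheet with these three columns does the job; the point is that a segment without an owner and a reason is, by definition, up for deletion.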

Step 3: Do shadow sessions and turn what you learn into small templates

Once the pilot journey exists, do the thing that changes adoption faster than any training plan: shadow sessions.

Sit with marketing users while they build and run things. “Show me how you do it.”

People over-engineer because they’re unsure. They duplicate assets because they can’t find them. They build segments for every tiny campaign because it feels like the “proper” approach. They avoid certain features because one confusing experience broke trust.

After shadow sessions, don’t respond with a giant governance document. Respond with a handful of working principles and reusable building blocks.

Here are the three “templates” you can create first:

1) A naming cheat sheet that fits on one screen. Journeys, emails, segments, triggers.

2) Two or three journey patterns. Example: welcome, event follow-up, re-engagement. They give people a safe starting point.

3) A “definition of done” checklist for journey activation. The tiny set of checks that prevents panic: consent link present, suppression applied, test contact worked, approval recorded, metric defined.

If you do use standardisation, keep it lightweight and practical: predictable naming, two or three journey patterns that people can copy without thinking too hard.
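To show how lightweight such a cheat sheet can be, here is one hypothetical naming pattern (the prefixes and the pattern are my own assumptions, not an official convention) encoded as a single check:

```python
import re

# Hypothetical convention: <type>-<area>-<short-description>,
# lowercase, hyphen-separated. The type prefixes mirror the asset
# kinds above: journeys, emails, segments, triggers.
NAME_PATTERN = re.compile(r"^(jrn|eml|seg|trg)-[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_name(name: str) -> bool:
    """Check an asset name against the (illustrative) convention."""
    return NAME_PATTERN.fullmatch(name) is not None

print(is_valid_name("jrn-events-welcome"))        # follows the convention
print(is_valid_name("Welcome Journey FINAL v2"))  # the kind of name that becomes chaos on day three
```

Whether people check names with a script or just against a one-page table doesn’t matter; what matters is that the convention is predictable enough to be checked at all.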

Step 4: Enablement that happens during the build (not a training day at the end)

Training fails when it’s detached from real work. So instead of planning “a big training” at the end, you can do micro-enablement:

  • 20 minutes right after a feature is introduced
  • a short recording (“how we do preference checks here”)
  • one practical exercise (“build and activate a tiny journey in DEV”)

The goal is independence: Marketing should be able to build and run the basic stuff without needing a rescue mission.

What to measure, if you want a system people actually use

If you define success as “go-live,” you’ll miss the point. The real question is: can marketing execute on their own?

Look for signs of confidence:

  • users can build a basic journey without constant rescue
  • assets are reused rather than duplicated
  • segments don’t explode endlessly “just in case”
  • people trust the results enough to act on them

Summary

Customer Insights doesn’t fail because of missing features. It fails when it’s treated like a tool rollout: build first, invite marketing later.

I think the fix is simple: co-build from day one, ship one real journey early, learn through shadow sessions, and turn the learnings into a few standards people can actually follow.

I bundled this into a practical checklist you can use immediately: Download the Customer Insights implementation checklist (Technical + People-First).

Do you have questions, ideas or remarks? Feel free to get in touch.
