A conversion rate optimisation (CRO) program might look like an effortless, smooth mechanism. In reality, businesses have found the process challenging, which highlights how much work owners must do to optimise the program: creating customer-centric experiences while maximising revenue. The most common question remains, “What actions can help businesses achieve outstanding results from their experimentation program and make it more effective, data-driven and ROI-focused?”
Here are a few recommendations:
Audit The Current CRO Process And Performance
Businesses have to clearly define the objective behind each individual test and, beyond day-to-day analysis, look at the optimisation program in its entirety. Establish key performance indicators to assess the current CRO process along three dimensions: velocity, quality and business impact.
Velocity: Evaluate how many test ideas get added to the optimisation roadmap every month and how many of those convert to a live test. This gives context to the gap: for instance, whether the team is short of ideas, or whether every request has been accommodated without prioritising the ideas with the highest potential.
Quality: Once hypotheses are selected with due diligence, how many of them reach statistical significance? VWO research shows a 14% win rate across all tests that went live for their clients, which can serve as an external benchmark to compare against and optimise towards. A “win” here is not only a result where the variation beats the control, but any experiment that reaches statistical significance and yields a learning.
Business Impact: Measure the average conversion lift for all statistically significant tests in terms of revenue per visitor and other KPIs. Extrapolate the winners’ impact in revenue terms, but account for factors like seasonality. Ultimately, the goal is to extract the ROI of the optimisation program.
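The three audit metrics above can be sketched as a small calculation over a monthly test log. This is a minimal, hypothetical sketch: the field names (`ideas_added`, `tests_launched`, `significant_results`) and the sample data are illustrative, not from any particular tool.

```python
def audit_metrics(monthly_log):
    """Summarise velocity and quality from a monthly test log."""
    months = len(monthly_log)
    ideas = sum(m["ideas_added"] for m in monthly_log)
    launched = sum(m["tests_launched"] for m in monthly_log)
    significant = sum(m["significant_results"] for m in monthly_log)
    return {
        # Velocity: ideas entering the roadmap vs tests actually going live.
        "ideas_per_month": ideas / months,
        "tests_per_month": launched / months,
        # Quality: share of live tests reaching statistical significance
        # (a "win" meaning a conclusive, learnable result, not just a lift).
        "win_rate": significant / launched if launched else 0.0,
    }

log = [
    {"ideas_added": 12, "tests_launched": 4, "significant_results": 1},
    {"ideas_added": 9,  "tests_launched": 3, "significant_results": 0},
]
print(audit_metrics(log))
```

Comparing `win_rate` against an external benchmark such as VWO’s 14% shows whether idea quality, not just quantity, is improving.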
Ensure KPIs, Strategies And Tactics Are Aligned
Identify business objectives and the metrics that represent them, and compare these to the goals set for the year. Understand business priorities and challenges, including online revenue goals, paid channels, funnels, etc. Meet the sales and marketing teams to discuss the buyer’s journey, personas, and the content strategy across personas and sales stages.
This information helps create a goal tree, which inspires a new tactic to test the moment analytics surfaces an issue, and vice versa. Prioritise KPIs closest to revenue, followed by metrics aligned with cost savings such as CAC and CPL.
Leverage Consumer Psychology And Research On Cognitive Biases
While brainstorming optimisation tactics that could drive the KPIs and strategy defined in the goal tree, a great source of enrichment is to align them with cognitive biases and persuasion principles. This helps you connect with prospects and influence their choices and decision-making, and it provides more context and story behind the final test results.
Being thoughtful about cognitive biases while crafting a new hypothesis gives you an edge in anticipating user responses. This might sound manipulative, but as long as the offering has real value, it is a legitimate tool to start a conversation and move the prospect deeper into the funnel.
Enrich And Validate Ideas Roadmap With Multiple Sources
Experimentation velocity is a top-of-funnel metric focused on quantity. To improve it, you need a sense of how new test ideas are sourced and how the optimisation roadmap is built. Include as many data sources as possible, both to validate hypotheses and to launch new ideas that solve customers’ objections and challenges: issues identified through user testing, mouse tracking, heatmaps, user recordings, online surveys, user feedback, online chat history, and so on.
Create A Prioritisation Framework To Ensure The Best Ideas Bubble Up
Traditional prioritisation approaches like PIE (potential, importance, ease) and ICE (impact, confidence, ease) have been under debate, so it is a good idea to take a hybrid approach and assign each idea a score based on specific, measurable parameters. You can build your own scoring framework: begin with factors that are self-explanatory, then move to more behavioural factors once you reach a certain maturity in understanding cognitive biases, user motivation, and so on.
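One way such a hybrid framework could look in practice is a simple weighted score. The factor names and weights below are an assumption for illustration, not a standard; the point is that self-explanatory factors and behavioural ones can sit side by side with explicit, tunable weights.

```python
# Hypothetical scoring factors; each idea is rated 0-10 on each factor.
WEIGHTS = {
    "traffic_on_page": 0.2,        # self-explanatory factors...
    "ease_of_implementation": 0.2,
    "evidence_strength": 0.3,      # backed by research data?
    "behavioural_leverage": 0.3,   # ...behavioural factors as maturity grows
}

def score_idea(ratings):
    """Weighted score (0-10) for one test idea."""
    return sum(ratings[factor] * weight for factor, weight in WEIGHTS.items())

ideas = {
    "shorter checkout form": {"traffic_on_page": 9, "ease_of_implementation": 6,
                              "evidence_strength": 8, "behavioural_leverage": 7},
    "new hero image":        {"traffic_on_page": 9, "ease_of_implementation": 9,
                              "evidence_strength": 3, "behavioural_leverage": 2},
}
ranked = sorted(ideas, key=lambda name: score_idea(ideas[name]), reverse=True)
print(ranked)
```

The easy-but-weakly-evidenced idea ranks below the well-evidenced one, which is exactly the “best ideas bubble up” behaviour a prioritisation framework should enforce.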
Don’t Stop Testing Just Because You Saw 95% Significance
Be very cautious not to get trapped by false-positive results: there is a lot of noise in the data collected during the initial days of a test. To avoid this, estimate in advance how many visitors each variation will need. A free tool like the Optimizely Sample Size calculator gives an idea of the overall traffic required and the number of days to run the test before declaring a winner or drawing a learning from a statistically significant result.
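The same estimate such calculators produce can be approximated with the standard two-proportion z-test formula. A minimal sketch, assuming a two-sided significance level of 0.05 and 80% power (the usual defaults; the z-scores are hard-coded accordingly):

```python
import math

def sample_size_per_variation(baseline_rate, relative_mde,
                              z_alpha=1.96, z_power=0.84):
    """Visitors needed per variation for a two-proportion z-test.

    baseline_rate: current conversion rate, e.g. 0.05 for 5%.
    relative_mde:  minimum detectable effect, relative, e.g. 0.10 for +10%.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# 5% baseline conversion, aiming to detect a 10% relative lift:
n = sample_size_per_variation(0.05, 0.10)
print(n)  # tens of thousands of visitors per variation
```

Dividing `n` (times the number of variations) by daily traffic gives the minimum run length; stopping before that point is how 95% significance turns into a false positive.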
Experimentation And Personalisation Are BFFs
The most significant piece of the personalisation puzzle is the first-, second- and third-party data about users that can form a meaningful segment. A segment could be based on gender, age, browser, device, campaign source, industry, company size, location, day parting, new versus returning visitors, historical transactions, etc. But the new experience tied to each segment needs to be validated in terms of business outcomes; otherwise it is only as good as an untested hypothesis. The legwork done in building a robust experimentation program, such as goal tree creation, goal mapping, hypothesis prioritisation and sample size calculation, also enriches the personalisation strategy and roadmap for the website.
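A segment is ultimately just a rule over visitor attributes. A small illustrative sketch, in which the segment names, rules and visitor fields are hypothetical rather than tied to any personalisation tool:

```python
# Each segment pairs a name with a predicate over visitor attributes.
SEGMENTS = [
    ("mobile_new_visitor", lambda v: v["device"] == "mobile" and v["visits"] == 1),
    ("returning_buyer",    lambda v: v["visits"] > 1 and v["past_orders"] > 0),
]

def assign_segment(visitor):
    """Return the first matching segment name, or a default bucket."""
    for name, rule in SEGMENTS:
        if rule(visitor):
            return name
    return "default"

print(assign_segment({"device": "mobile", "visits": 1, "past_orders": 0}))
```

Each segment’s tailored experience should then go through the same pipeline as any test idea: prioritised, powered with a sample size calculation, and judged on business outcomes.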
Create A Knowledge Management Repository For All Significant Results
The best part of an optimisation program is that every statistically significant result either delivers a lift or at least a learning; it never truly fails. It is imperative to document these golden nuggets in a standard format so that stakeholders from marketing and product management teams can refer to them anytime and apply them to new sections, similar pages, an application, or a website revamp.
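One possible “standard format” for such a repository is a fixed record schema. The field names below are an assumption for illustration, not an industry standard; the point is that every significant result, win or not, gets captured in the same queryable shape.

```python
from dataclasses import dataclass, asdict

@dataclass
class ExperimentRecord:
    hypothesis: str
    page: str
    segment: str
    lift_pct: float   # observed lift on the primary KPI
    significant: bool
    learning: str     # what the result taught us, win or not

record = ExperimentRecord(
    hypothesis="Shorter checkout form reduces drop-off",
    page="/checkout",
    segment="mobile visitors",
    lift_pct=4.2,
    significant=True,
    learning="Form-field count matters most on small screens",
)
# Persist as dicts so the repository can be exported to JSON/CSV
# for marketing and product management teams.
repository = [asdict(record)]
print(repository[0]["learning"])
```

Because losing and inconclusive tests are recorded too, the repository prevents teams from unknowingly re-running the same experiment on a new section or a revamped site.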
To summarise, the step-by-step structure of the entire experimentation process is: audit the current CRO process and performance; align KPIs, strategies and tactics; leverage consumer psychology and cognitive biases; enrich and validate the ideas roadmap with multiple sources; prioritise so the best ideas bubble up; run tests to an adequate sample size rather than stopping at 95% significance; pair experimentation with personalisation; and document all significant results in a knowledge repository.
Hence, businesses today need to do a lot beyond experimentation and personalisation to achieve actionable results from a CRO program: understanding the challenges they face, transforming ideas with rigour, and ultimately launching the best possible solutions.