Why don't airlines do A/B testing in Revenue Management?
The airline industry’s pricing infrastructure is not set up for granular testing
Why don’t airline Revenue Management teams do A/B testing? These simple experiments offer one option to one set of consumers and a different option to another. Whichever performs better is the one to stick with.
A/B tests are commonly used in many industries to test social media campaigns, product pricing, web design and even voter preferences at the ballot box. Many airlines use A/B tests to design their websites.
They should be a slam-dunk success story for Revenue Management (RM). Yet these experiments are something that this part of the industry never really seems to talk about.
What is A/B testing all about in RM?
The potential for A/B testing in RM should be vast.
Airlines could test different price points, bundles or fare rules in real time. They could compare one inventory management strategy with another. And they could figure out whether or not their discount and promotion strategy really works.
A/B testing will be the key to creating personalised offers in the offer-order framework.
With up to 1% of revenue in the annual RM capex budget at a carrier that takes revenue seriously, we should have seen plenty of “what if?” testing products and consumer-tracking tools come to market. Yet we have not.
Instead, the same old vendors provide the same old “solutions”. They are expensive, take years to implement and do not give airlines room to be nimble.
How is A/B testing in RM different to A/B testing in other parts of the airline?
RM is quite a specific part of the airline business. To avoid confusion, it is distinct from Ancillary Revenue (selling seats, bags, hotel add-ons, etc.) and e-Commerce (managing the airline’s Internet Booking Engine, or IBE for short).
Sometimes these teams work closely with or might in some cases even be part of RM. But whatever the organisational arrangement at a specific airline, for our purposes here they are different.
RM does include pricing travel packages that bundle flights and other services, especially at airlines that are enthusiastic about offer-order.
A/B testing in RM is about core conversion, figuring out what inspires people to accept a price for a set of flights and book a ticket. This must all be done at the price set by the airline based on the demand forecast and pricing strategy in place at the time.
So A/B testing in RM is a test of the airline’s underlying RM system and the accuracy of its demand forecast and willingness-to-pay analysis.
Successful A/B testing for airline RM requires three things.
First is a way to split the traffic of travel shoppers randomly, so that if you are looking to buy a flight from, say, Hong Kong to Singapore, you might get different flight and price options from me, even when we are both looking at the same flights.
If you get the same flight and price options as me but they are presented differently, for example cheapest-first or earliest-first, that is A/B testing in web design, not RM.
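The random split itself is the easy part. Below is a minimal, hypothetical sketch of deterministic bucketing: the session identifier, experiment name and 50/50 split are all illustrative assumptions, not any airline’s actual implementation. Hashing rather than rolling a fresh random number keeps a shopper in the same variant across repeated searches.

```python
import hashlib

def assign_variant(session_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a shopper into variant A or B.

    Hashing the session id together with the experiment name keeps the
    assignment stable across page views without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{session_id}".encode()).hexdigest()
    # Map the first 8 hex digits onto [0, 1)
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"
```

The same shopper searching the same experiment always lands in the same bucket, which matters when a traveller prices the same Hong Kong–Singapore flights several times before booking.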
Second is a consistent measurement of how specific travel buyers navigate the booking and ticketing process.
For example, you might look for your preferred time or cabin first. I might choose to look for a specific flight or flexible conditions first. This sort of buying behaviour needs to be monitored and understood for A/B testing to be effective.
Finally, airlines doing A/B tests in RM need to determine the proportion of buyers who abandon shopping at specific points in the process.
A travel shopper who abandons the process after seeing the price of the outbound flight only may exhibit quite different behaviour from one who prices up the flights and then moves no further. Somebody who enters their name and chooses a seat may be more different still.
A/B testing in RM needs to understand all this complex consumer behaviour.
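As a rough sketch of the measurement side, the snippet below counts where shoppers drop out of a hypothetical booking funnel. The stage names, and the idea of logging each session’s furthest stage reached, are illustrative assumptions rather than any vendor’s data model.

```python
from collections import Counter

# Hypothetical funnel stages for a flight-booking session, in order
FUNNEL = ["search", "outbound_price", "full_itinerary_price",
          "passenger_details", "seat_selection", "payment"]

def abandonment_by_stage(sessions):
    """Given each session's furthest stage reached, return the share of
    shoppers who abandoned at each point in the funnel."""
    last_stage = Counter(sessions)
    total = len(sessions)
    remaining = total
    rates = {}
    for stage in FUNNEL[:-1]:  # reaching "payment" counts as converted
        abandoned = last_stage.get(stage, 0)
        rates[stage] = abandoned / total
        remaining -= abandoned
    rates["converted"] = remaining / total
    return rates
```

Comparing these per-stage rates between variants A and B is what distinguishes a flight-price problem from, say, a seat-map problem.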
But surely airlines know all this already?
They do not. When British Airways sent me an e-mail telling me that I could see everything they had on file about me, I took them up on their offer.
I was shocked. They knew almost nothing about me, apart from my name, e-mail, phone number, birthday and the flights I had taken. Among other things, I was surprised that they did not know:
1. My most frequent searches
2. The holiday and business trips I was planning at the time
3. My favourite seats, the ones I always choose if they are available
4. My food and drink preferences
5. How many bags I check in or the routes where I do and do not check a bag
6. When I do and do not buy excess baggage
British Airways has focused on customer “propensity” analysis, which is supposed to cover this, for as long as I can remember. Yet they do not even capture easy wins like excess baggage.
When it comes to airline systems, British Airways is meant to be up there among the leading carriers globally. Yet their best is clearly far behind what I would expect.
So why are airlines not achieving A/B testing in Revenue Management? There are five explanations.
Explanation 1: there are significant structural challenges to A/B testing
The goal of A/B testing in RM may well be conversion, but measuring that in reality will be tough.
Especially when there are so many points at which a shopper could fail to place an order and leave the buying process, like baggage choice, seat selection and hotel add-ons.
Figuring out when a transaction did not proceed due to the price of underlying flights and when it was due to other issues is always going to be tricky.
There are three structural challenges to A/B testing for RM teams.