Your Amazon listing content is arguably the most important element of your success on Amazon. But optimizing product listings is not easy. Striking the right balance between keyword-rich copy and compelling content that speaks to consumers takes a lot of trial and error. This is where split testing comes in handy.
Fortunately, there are free resources to help make that trial-and-error process more productive. Known on Amazon as "Manage Your Experiments," split testing (also called A/B testing) compares two variations of listing content: a control (A) and a test (B). Half of the page's visitors are served version A and the other half version B, and the performance of the two versions is then compared.
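Conceptually, the split works by deterministically bucketing each visitor so the same person always sees the same version for the life of the experiment. The sketch below illustrates the idea only; it is not Amazon's actual mechanism, and the function name and IDs are hypothetical:

```python
import hashlib

def assign_variant(visitor_id: str, experiment_id: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor and experiment IDs together yields a stable
    ~50/50 split: the same visitor always lands in the same bucket
    for a given experiment.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Over many visitors, roughly half land in each bucket:
assignments = [assign_variant(f"visitor-{i}", "title-test-01") for i in range(10_000)]
share_a = assignments.count("A") / len(assignments)
```

Keying the hash on both the visitor and the experiment keeps assignments stable within one test while still reshuffling visitors across different experiments.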
We explore this tactic in our on-demand webinar, co-hosted with Sellzone. Jennifer Johnston, Digital Marketing Specialist at Kaspien, and Dan Saunders, Marketing Consultant at Sellzone, dive into the best ways to execute split testing and how this process can maximize your profits.
E-commerce competition is steep, but even with new platforms emerging, Amazon was still responsible for 41% of all US e-commerce as of October 2021. Heavy traffic brings more sellers and a greater need to stand out from the crowd. Optimizing listings is crucial for that endeavor, but with the exact details of Amazon's algorithm shrouded in mystery, it can be hard to tell which listing changes will effect the most change. A/B testing allows for quantitative analysis of proposed changes, producing numerical justification for them and increased opportunity for profit.
Amazon offers the ability to test fields such as the product title, main image, and A+ content.
Third-party testing tools also allow you to test additional fields, such as price, bullet points, and the product description.
Not all listings are eligible for split testing. When you click Create a New Experiment within the Manage Your Experiments dashboard, it conveniently lists all products eligible for testing and explains why others are ineligible, often due to low traffic.
In a perfect world of infinite time and resources, one might be tempted to split test every change implemented into every listing. Unfortunately, this is not only impractical, it might also not be as helpful as it sounds.
A/B testing is not designed for testing multiple options at once. The more variables in play, the harder it becomes to tie a result to a specific change. That is why split testing is best for "either/or" situations. You might have many ideas to improve a listing, but it's important to narrow them down to the ideas that have the best chance of doing the most good. If version A represents the listing as it initially appeared and version B has completely different titles, images, and A+ content, then you may have proven the success of one version over the other, but you've done very little to understand the root cause of the results.
This does not mean that you should run a separate test for each word changed, but rather that keyword optimization should be a different split test than an updated product video. Amazon recommends testing changes that are significant in magnitude, not quantity. In addition, only one A/B test can be run on each product at any given time. If you're hoping to test multiple factors, the tests will have to be run consecutively.
Your testing period should range between 4 and 10 weeks. The longer you're able to run a test, the more statistically valid its results will be. Short tests might sound convenient, but outside forces occurring during the testing window can threaten the validity of your results.
Jennifer Johnston explains, “Rogue sellers pricing down, buy box issues, lightning deals, and listing suppressions can derail a short test SO quickly. With a longer test you’re able to get additional data to normalize any impact from those potential factors.”
Other factors that impact short tests more than long ones include seasonality (a short test in December vs. March could show very different results), off-Amazon advertising, and promotions. If you are running any advertising on or off Amazon for products being tested, keep those efforts as consistent as possible throughout the testing period to avoid skewing the results.
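The statistics behind the duration advice can be illustrated with a standard two-proportion z-test (a textbook significance check, not a tool Amazon exposes; the conversion counts below are made up). The same one-point lift in conversion rate that looks like noise in a short, low-traffic test becomes statistically significant once a longer test accumulates enough visitors:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test on conversion counts.

    Returns (z, p_value). A small p-value (commonly < 0.05) suggests
    the difference between variants is unlikely to be random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-CDF tail via the error function (no third-party deps)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Identical lift (5% -> 6% conversion), different test lengths:
_, p_short = two_proportion_z_test(50, 1_000, 60, 1_000)      # short test
_, p_long = two_proportion_z_test(500, 10_000, 600, 10_000)   # long test
```

With these hypothetical numbers, the short test's p-value stays well above 0.05 (inconclusive), while the long test's falls below it, which is exactly why a 4- to 10-week window beats a quick check.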
It is also important to note that an experiment cannot be changed once it has started. If you cancel one, you will have to start a new test from scratch, and the results of the two tests will not be linked.
Amazon is an ever-evolving landscape with new competitors entering the marketplace every day. The key to success is to be flexible and adapt to what works. Split testing with Amazon Experiments is a tool in every seller's toolbox that lets you make statistically significant improvements with a quantifiable impact on the success of your listings. Not sure what to test? Amazon has a great list of experiment ideas to get you started.
If you like what you’re reading, consider subscribing to our weekly blog! Our experts cover the latest profitability news and can help you build a successful strategy.