A day in the life of a Media Buyer
In-app mobile marketing campaign optimization has never been an easy task. It requires a lot of daily manual data analysis to find an optimal configuration for each campaign you are running. At some point it becomes unmanageable, which limits the number of campaigns a media buyer can effectively manage.
At BidMotion, we have developed a technology that not only automates the process by plugging into DSPs, but also helps an MB (media buyer) significantly reduce the time spent on manual data analysis. Using big data techniques and machine learning algorithms, our technology provides a list of recommended actions for each campaign. In this way, we increase the number of campaigns each MB can handle effectively, reduce costs, and even surface optimization points that are difficult for the human eye to spot.
To understand the automated techniques, it is helpful to explore the ways the technology helps the manual process.
How do we do that? Let’s start with some basic advertising technology notions
First, it’s important to clarify some concepts. At BidMotion we don’t work at the impression level (typically banner or video display): we buy our media on ad networks, ad exchanges, and DSPs, which are the platforms connected directly to the RTB space, so we don’t have visibility over individual impressions. However, we track and store all clicks related to our campaigns. Some of those clicks lead to app installs (what we call conversions), and some installs lead further to LTV (lifetime value) events: actions taken by the user within the app, such as registration, level completion, or in-app purchase.
Then, we define CTR (click-through rate) as the percentage of impressions that result in a click, and CR (conversion rate) as the percentage of clicks that result in a conversion (app install). Finally, we define revenue as the amount of money we get paid for promoting the app.
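These two funnel metrics can be sketched in a few lines of Python; the sample numbers below are hypothetical, purely for illustration:

```python
def ctr(impressions: int, clicks: int) -> float:
    """Click-through rate: share of impressions that result in a click."""
    return clicks / impressions if impressions else 0.0

def cr(clicks: int, conversions: int) -> float:
    """Conversion rate: share of clicks that result in an app install."""
    return conversions / clicks if clicks else 0.0

# Hypothetical campaign numbers:
print(f"CTR: {ctr(1_000_000, 5_000):.2%}")  # 0.50%
print(f"CR:  {cr(5_000, 200):.2%}")         # 4.00%
```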
We work with direct advertisers to promote their apps (what we call offers) through performance-based user acquisition, meaning our entire focus is on conversion quality, not quantity. We don’t want just random people to download a given client app; we want “good”, engaged users as defined by each advertiser: for example, people who open the app daily, show real engagement, and make plenty of in-app purchases. Then, depending on predetermined parameters, we are compensated on a CPI (Cost per Install, i.e. conversion) or CPA (Cost per Action, i.e. a specific LTV event) basis.
Now that we have covered these basic notions, let’s get back to the topic. But first, what’s a campaign? It’s very simple. Each offer (app we are promoting) is available for different devices (Android, iPhone, iPad, etc.) and for different countries (United States, United Kingdom, France, etc.). Each Offer – Device – Country combination has a different CPI or CPA, and one or more MBs (media buyers) can each create one or more campaigns over a given Offer – Device – Country combination. So, in the end, aggregating all data (clicks, conversions, LTV events) from all campaigns gives us a set of combinations like the following:
Offer – Device – Country – Campaign – Media Buyer
Remember, our goal is to optimize campaigns. To do that, we work with some extra pieces of information (dimensions) attached to each data point:
- ISP (Internet Service Provider)
- Connection Type (3G, 4G, WIFI, etc.)
- Region (Alabama, California, etc.)
- City (New York City, New Delhi, etc.)
- Device Brand (Samsung, HTC, Apple, etc.)
- Device Model (Galaxy S4, Galaxy S5, iPhone 6, etc.)
- Publisher (platform placing the ad impression)
- Site App (actual app where the ad impression is placed)
- and more
Basically, the daily manual work of each MB consists of analysing the recent data of each campaign to find the combinations of dimension values that optimize a given KPI (CR, revenue, LTV retention, LTV in-app purchases, etc.). This is easier to understand with the following two-dimensional example.
The following table shows the behaviour over the last two days for:
Offer O – Device D – Country C – Campaign C1 – Media Buyer MB
Clicks | Conversions | ISP | Connection Type
10,000 | 200 | ISP1 | 3G
10,000 | 25 | ISP1 | WIFI
10,000 | 100 | ISP2 | 3G
10,000 | 5 | ISP2 | WIFI
After a brief analysis, the MB will understand that, for C1 in the given Offer O – Device D – Country C, 3G performs much better than WIFI in terms of CR, and ISP1 has roughly double the CR of ISP2. In particular, the MB should blacklist the ISP2 + WIFI combination: a quarter of the traffic goes there and it produces almost no installs.
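The same manual analysis can be reproduced in a few lines of code. This is a minimal sketch: the rows come from the table above, while the 0.1% CR blacklist threshold is an assumption chosen for illustration, not BidMotion’s actual rule:

```python
# Rows from the example table: (clicks, conversions, ISP, connection type)
rows = [
    (10_000, 200, "ISP1", "3G"),
    (10_000,  25, "ISP1", "WIFI"),
    (10_000, 100, "ISP2", "3G"),
    (10_000,   5, "ISP2", "WIFI"),
]

BLACKLIST_CR = 0.001  # assumed threshold: below 0.1% CR, blacklist the combo

for clicks, convs, isp, conn in rows:
    cr = convs / clicks
    verdict = "blacklist candidate" if cr < BLACKLIST_CR else "keep"
    print(f"{isp} + {conn}: CR = {cr:.2%} -> {verdict}")
```

Run on the table, only ISP2 + WIFI falls under the threshold, matching the conclusion an MB would reach by eye.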
As you can see, optimising campaigns is pretty intuitive. But when we increase the number of dimensions to analyse (each of which can take many different values), and considering that each MB has a lot of campaigns to optimise, it becomes a really time-consuming and tedious task.
So what if we could create something that automatically crunches all the data (clicks, conversions, LTV events) of past days on a daily basis and automatically identifies what should be optimized and what should be blacklisted for each campaign? This would save a huge amount of work for each MB. And that is precisely where programmatic comes into play. And now it gets interesting…
Step one of programmatic: Clustering
The first thing to do is to separate our universe of data (clicks, conversions, LTV events) into separate micro-universes in which to find the optimisation points, because optimisations for the same offer can vary a lot from one device to another or from one country to another. So, as mentioned, let’s assume we have clustered our data into multiple data buckets as per the following:
Offer – Device – Country – Campaign (implicitly one campaign is managed by one Media Buyer).
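This clustering step can be sketched as a simple group-by on the event stream. The event fields and values below are assumptions for illustration, not the real tracking schema:

```python
from collections import defaultdict

# Hypothetical tracked events (clicks, conversions, LTV events)
events = [
    {"offer": "O1", "device": "Android", "country": "US", "campaign": "C1", "type": "click"},
    {"offer": "O1", "device": "Android", "country": "US", "campaign": "C1", "type": "conversion"},
    {"offer": "O1", "device": "iPhone",  "country": "FR", "campaign": "C2", "type": "click"},
]

# Each bucket is an independent micro-universe, keyed by the combination
buckets = defaultdict(list)
for e in events:
    key = (e["offer"], e["device"], e["country"], e["campaign"])
    buckets[key].append(e)

for key, evts in buckets.items():
    print(key, "->", len(evts), "events")
```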
Step two of programmatic: Insights generation
This step, the most crucial one, is to crunch each data bucket (separately, never mixing data across buckets) in order to intelligently extract the combinations of dimension values that affect the given bucket’s data positively or negatively, so the MB can use this information to refine the current campaign.
We generate different types of insights (identification of good / bad dimensional spots), which are:
- Clicks Patterns – sub-cluster the bucket data, useful to understand the source of the media acquisition
- High CR – defines what optimizes the CR
- Low CR – points out what is especially bad in terms of CR and should be blacklisted
- High Revenue – finds out what increases the revenue of a given campaign, typically by optimizing the number of installs, since most offers run on a CPI basis
- High LTV Revenue – identifies what optimizes in-app purchases, very good to find quality users
- High LTV Retention – explores what optimizes finding users that frequently use the app
In order to generate these insights, three different Machine Learning techniques are used:
- Factor Analysis
- Decision Trees
- Heavy Hitters
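To make the last of these three techniques concrete, here is a sketch of a classic Heavy Hitters algorithm, Misra–Gries, which finds dimension values occurring in more than a 1/k fraction of a stream using only k−1 counters. This is a textbook illustration of the technique, not BidMotion’s actual implementation, and the sample stream is hypothetical:

```python
def misra_gries(stream, k):
    """Misra-Gries heavy hitters: any item with frequency > len(stream)/k
    is guaranteed to survive as a candidate (a second pass would confirm
    exact counts)."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Hypothetical click stream tagged by ISP
stream = ["ISP1"] * 60 + ["ISP2"] * 25 + ["ISP3"] * 10 + ["ISP4"] * 5
print(misra_gries(stream, k=3))  # ISP1 (60/100 > 1/3) is guaranteed to survive
```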
Step three of programmatic: Deduplication and filtering
A potentially unlimited number of insights can be generated for each MB every day. So the next goal is to identify duplicate insights for removal: sometimes the same combination of dimension values optimises both High CR and High Revenue, in which case it’s best to keep just one, the most relevant. On top of that, a fair number of insights are not very relevant because they don’t offer much value in terms of results optimization (a 1% CR improvement, for example), so it’s good to be aggressive and keep only what’s very relevant (a 100% better CR, for example) in order not to flood the MB with minor insight points.
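The deduplication and filtering logic can be sketched as follows. The insight structure, the uplift field, and the 50% minimum-uplift threshold are all assumptions for illustration:

```python
# Hypothetical insights: same dimension combination may appear under
# several insight types, each with a relative uplift (1.0 = 100% better).
insights = [
    {"dims": frozenset({("ISP", "ISP1"), ("conn", "3G")}), "type": "High CR",      "uplift": 1.0},
    {"dims": frozenset({("ISP", "ISP1"), ("conn", "3G")}), "type": "High Revenue", "uplift": 0.8},
    {"dims": frozenset({("city", "Lyon")}),                "type": "High CR",      "uplift": 0.01},
]

# Deduplicate: for each dimension combination, keep only the most relevant
best = {}
for ins in insights:
    cur = best.get(ins["dims"])
    if cur is None or ins["uplift"] > cur["uplift"]:
        best[ins["dims"]] = ins

# Filter aggressively: drop minor insights below the (assumed) threshold
MIN_UPLIFT = 0.5
kept = [i for i in best.values() if i["uplift"] >= MIN_UPLIFT]
print([(i["type"], i["uplift"]) for i in kept])
```

On this sample, the duplicate High Revenue insight and the marginal 1% Lyon insight are dropped, leaving a single highly relevant insight for the MB.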
Step four of programmatic: Sorting and learning from the user
Cool! We have generated a lot of insights, deduplicated them, and removed irrelevancies, keeping only the very good ones. But the issue of quantity remains: even after these reduction processes there can be hundreds of insights generated every day for each MB, since the number of campaigns each person manages can be huge. Consequently, it becomes very important to establish a globally accepted ranking across campaigns (offer – device – country – campaign) in order to present the information in a useful way, drawing attention to what is most relevant.
As a secondary verification point, we collect feedback from the MBs (who are asked to rate the provided insights) in order to understand whether what the technology generates is relevant for them. For example, our algorithms may consider ISP and Connection Type insights the best ones for a given campaign, so they get ranked (and therefore displayed) higher, but a given MB may be especially interested in Region and City insights. If they rate the latter positively and the former negatively, the system understands this preference and self-learns, showing Region and City insights with a better ranking the following day.
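One simple way such feedback-driven re-ranking could work is a per-MB weight on each insight type, nudged up or down by ratings. This is a hedged sketch: the multiplicative update rule, the learning rate, and the score field are all assumptions, not the actual self-learning mechanism:

```python
from collections import defaultdict

# One learned weight per (media buyer, insight type), starting neutral at 1.0
weights = defaultdict(lambda: 1.0)

def rate(mb: str, insight_type: str, positive: bool, lr: float = 0.2) -> None:
    """Nudge the weight up on a positive rating, down on a negative one."""
    weights[(mb, insight_type)] *= (1 + lr) if positive else (1 - lr)

def rank(mb: str, insights: list) -> list:
    """Order insights by algorithmic score scaled by the learned weight."""
    return sorted(insights,
                  key=lambda i: i["score"] * weights[(mb, i["type"])],
                  reverse=True)

# Hypothetical day: the algorithm initially scores ISP insights higher...
insights = [{"type": "ISP", "score": 0.9}, {"type": "Region", "score": 0.7}]
rate("mb1", "Region", positive=True)   # ...but this MB likes Region insights
rate("mb1", "ISP", positive=False)     # ...and downrates ISP insights
print([i["type"] for i in rank("mb1", insights)])  # ['Region', 'ISP']
```

After a single round of ratings, Region insights outrank ISP insights for this MB the next day, while other MBs keep their own independent weights.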
What does this mean for Media Buyers in MarTech?
Developing this system to provide daily insights that optimize mobile marketing campaigns has the potential to disrupt the advertising technology industry. With it, we have greatly reduced the time our MBs need to spend on manual reports for each campaign to understand what works and what doesn’t. Instead of heavy volumes of reports to analyse, our media buyers simply receive a pre-formatted morning list of insights, equivalent to the conclusions of a manual analysis.
Less tedious manual work for MBs means we can increase the number of campaigns the system handles at any given time with no additional resources, making the entire process more efficient and enabling MBs to focus on targeting quality conversions.
Looking for more? Stay tuned for a series taking a deeper dive into programmatic technology!