Deconstructing the App Store Rankings Formula with 5 Mad Science Experiments
After seeing Rand's "Mad Science Experiments in SEO" presented at last year's MozCon, I was
inspired to put on the lab coat and goggles and do a few experiments of my own--not in SEO, but in
SEO's up-and-coming younger sister, ASO (app store optimization).
Working with Apptentive to guide enterprise apps and small startup apps alike to increase their
discoverability in the app stores, I've learned a thing or two about app store optimization and what
goes into an app's ranking. It's been my personal goal for some time now to pull back the curtain on
Google and Apple. Yet the deeper into the rabbit hole I go, the more untested assumptions pile up in
my path.
Hence, I thought it was high time to run some longstanding hypotheses through the gauntlet.
As SEOs, we know how much of an impact a single ranking can have on a SERP. One tiny rank up or
down can make all the difference when it comes to your website's traffic--and revenue.
In the world of apps, ranking is just as important when it comes to standing out in a sea of more
than 1.3 million apps. Apptentive's recent mobile consumer survey shed a little more light on this claim,
revealing that nearly half of all mobile app users identified browsing the app store charts and search
results (the placement on either of which depends on rankings) as a preferred method for finding
new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.
Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have
complex and highly guarded algorithms for determining rankings for both keyword-based app store
searches and composite top charts.
Unlike in SEO, however, very little research has been conducted, and little theory developed, around
what goes into these rankings.
Until now, that is.
Over the course of five experiments analyzing various publicly available data points for a cross-
section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps,
I'll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to
assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few
of the factors commonly thought of as influential to an app's ranking.
But first, a little context
Image credit: Josh Tuininga, Apptentive
Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores
feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically,
be on a fairly level playing field in terms of search volume and competition.
Of these apps, nearly two-thirds have not received a single rating and 99% are considered
unprofitable. These experiments, therefore, single out the rare exceptions to the rule--the top 500
ranked apps in each store.
While neither Apple nor Google has revealed specifics about how they calculate search rankings, it
is generally accepted that both app store algorithms factor in:
Average app store rating
Rating/review volume
Download and install counts
Uninstalls (what retention and churn look like for the app)
App usage statistics (how engaged an app's users are and how frequently they launch the app)
Growth trends weighted toward recency (how daily download counts changed over time and how
today's ratings compare to last week's)
Keyword density of the app's landing page (Ian did a great job covering this factor in a previous Moz
post)
I've simplified this formula to a function highlighting the four elements with sufficient data (or at
least proxy data) for our experimentation:
Ranking = fn(Rating, Rating Count, Installs, Trends)
Of course, right now, this generalized function doesn't say much. Over the next five experiments,
however, we'll revisit this function before ultimately attempting to compare the weights of each of
these four variables on app store rankings.
(For the purpose of brevity, I'll stop here with the assumptions, but I've gone into far greater depth
into how I've reached these conclusions in a 55-page report on app store rankings.)
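Before moving on, here's that generalized function expressed as a Python stub, purely for illustration: the signature mirrors the fn(...) notation above, while the body is exactly the unknown these experiments try to probe.

```python
# Illustrative stub of the generalized ranking function above.
# Apple and Google keep the actual implementation secret; the
# experiments that follow only estimate how much each input matters.
def ranking(rating: float, rating_count: int, installs: int, trend: float) -> float:
    raise NotImplementedError("The app stores' actual formula is unknown")
```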
Now, for the Mad Science.
Experiment #1: App-les to app-les app store ranking volatility
The first, and most straightforward, of the five experiments involves tracking daily movement in app
store rankings across the iOS and Android versions of the same apps to determine any differences in
ranking volatility between the two stores.
I went with a small sample of five apps for this experiment, the only criteria for which were that:
They were all apps I actively use (a criterion for coming up with the five apps but not one that
influences rank in the U.S. app stores)
They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be
stickier at the top--an assumption I'll test in experiment #2)
They had an almost identical version of the app in both Google Play and the App Store, meaning they
should (theoretically) rank similarly
They covered a spectrum of app categories
The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five
apps represent the travel, finance, education, banking, and social networking categories.
Hypothesis
Going into this experiment, I predicted slightly more volatility in Apple App Store rankings, based on
two statistics (both of which will be tested in later experiments).
Results
Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store
rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23
positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard
deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of
Google Play.
Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then
standardized the ranking volatility in terms of percent change to control for the effect of numeric
rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a
24-hour period while Google Play rankings changed by no more than 9%.
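For those who want to replicate this standardization, here's a minimal sketch, assuming a hypothetical pandas DataFrame of daily rank snapshots with columns app, store, date, and rank (the column names are my own, not the original dataset's):

```python
import pandas as pd

def daily_volatility(df: pd.DataFrame) -> pd.DataFrame:
    """Add absolute and percent daily rank changes per app, per store."""
    df = df.sort_values(["store", "app", "date"]).copy()
    # Absolute number of positions moved since the previous day's snapshot
    df["rank_change"] = df.groupby(["store", "app"])["rank"].diff().abs()
    # Percent change controls for the effect of numeric rank on volatility
    prev_rank = df.groupby(["store", "app"])["rank"].shift(1)
    df["pct_change"] = df["rank_change"] / prev_rank
    return df

# Example: compare the volatility spread across the two stores
# daily_volatility(rankings).groupby("store")["rank_change"].std()
```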
Also of note, daily rankings tended to move in the same direction across the two app stores
approximately two-thirds of the time, suggesting that the two stores, and their customers, may have
more in common than we think.
Experiment #2: App store ranking volatility across the top charts
Testing the assumption implicit in standardizing the data in experiment No. 1, this experiment was
designed to see if app store ranking volatility is correlated with an app's current rank. The sample
for this experiment consisted of the top 500 ranked apps in both Google Play and the App Store, with
special attention given to those on both ends of the spectrum (ranks 1-100 and 401-500).
Hypothesis
I anticipated rankings to be more volatile the lower an app is ranked--meaning an app ranked No.
450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis
is based on the assumption that higher-ranked apps have more installs, active users, and ratings,
and that it would therefore take a much larger swing in any of these factors to produce a noticeable
shift in rank.
Results
One look at the chart above shows that apps in both stores have increasingly volatile rankings
(based on how many ranks they moved in the last 24 hours) the lower on the list they're ranked.
This is particularly true when comparing either end of the spectrum--with a seemingly flat
volatility line among Google Play's Top 100 apps and very few blips within the App Store's Top 100.
Compare this section to the lower end, ranks 401-500, where both stores experience much more
turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking
volatility in the Play Store and a 28% correlation in the App Store.
To put this into perspective, the average app in Google Play's 401-500 ranks moved 12.1 ranks in
the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store,
these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as
volatile as the highest ranked apps. (I say slightly as these "lower-ranked" apps are still ranked
higher than 99.96% of all apps.)
The relationship between rank and volatility is fairly consistent across the App Store charts. In
Google Play, however, rank has a much greater impact on volatility at the top of the charts (ranks
1-100 show a 35% correlation) than further down (ranks 401-500 show a 1% correlation).
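As a side note, the correlations quoted throughout this post can be reproduced with a few lines of Python. Here's a minimal sketch using illustrative numbers rather than the experiment's actual data:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative values only--not the experiment's actual data points
ranks = np.array([3, 47, 112, 256, 433, 490])   # app's current rank
moves = np.array([0, 1, 4, 9, 38, 61])          # ranks moved in 24 hours

r, p_value = pearsonr(ranks, moves)
print(f"rank vs. volatility correlation: {r:.0%} (p = {p_value:.3f})")
```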
Experiment #3: App store rankings across the stars
The next experiment looks at the relationship between rank and star ratings to determine any trends
that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.
Hypothesis
Ranking = fn(Rating, Rating Count, Installs, Trends)
As discussed in the introduction, this experiment relates directly to one of the factors commonly
accepted as influential to app store rankings: average rating.
Going into the experiment, I hypothesized that higher ranks generally correspond to higher ratings,
cementing the role of star ratings in the ranking algorithm.
As far as volatility goes, I did not anticipate average rating to play a role in app store ranking
volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice
versa. Instead, I believed volatility to be tied to rating volume (as we'll explore in our last
experiment).
Results
The chart above plots the top 100 ranked apps in either store against their average rating (both historic
and current, for App Store apps). If it looks a little chaotic, that's just one indicator of the complexity of
the ranking algorithms in Google Play and the App Store.
If our hypothesis was correct, we'd see a downward trend in ratings. We'd expect to see the No. 1
ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is
this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the
chart.
A closer examination, in tandem with what we already know about the app stores, reveals two other
interesting points:
The average star rating of the top 100 apps is significantly higher than that of the average app.
Across the top charts, the average rating of a top 100 Android app was 4.319, and the average rating
of a top 100 iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average
rating of all rated apps in either store. The averages across apps in the 401-500 ranks approximately
split the difference between the ratings of the top ranked apps and the ratings of the average app.
The rating distribution of top apps in Google Play was considerably more compact than the
distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was
over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more
heavily weighted in Google Play's algorithm.
Looking next at the relationship between ratings and app store ranking volatility reveals a -15%
correlation that is consistent across both app stores, meaning the higher an app is rated, the less its
rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store's
calculation of an app's current rating, for which I did not find a statistically significant correlation.
Experiment #4: App store rankings across versions
This next experiment looks at the relationship between the age of an app's current version, its rank,
and its ranking volatility.
Hypothesis
Ranking = fn(Rating, Rating Count, Installs, Trends)
As an alteration of the above function, I'm using the age of an app's current version as a proxy (albeit
not a very good one) for trends in app store ratings and app quality over time.
Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b)
each new update inspires a new wave of installs and ratings, I'm hypothesizing that the older the age
of an app's current version, the lower it will be ranked and the less volatile its rank will be.
Results
The first and possibly most important finding of this experiment is that apps across the top charts in
both Google Play and the App Store are updated remarkably often as compared to the average app.
At the time of conducting the experiment, the current version of the average iOS app on the top
chart was only 28 days old; the current version of the average Android app was 38 days old.
As hypothesized, the age of the current version is negatively correlated with the app's rank, with a
13% correlation in Google Play and a 10% correlation in the App Store.
The next part of the experiment maps the age of the current app version to its app store ranking
volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%)
while recently updated iOS apps have more volatile rankings (correlation: -3%).
Experiment #5: App store rankings across monthly active users
In the final experiment, I wanted to examine the role of an app's popularity on its ranking. In an
ideal world, popularity would be measured by an app's monthly active users (MAUs), but since few
mobile app developers have released this information, I've settled for two publicly available proxies:
Rating Count and Installs.
Hypothesis
Ranking = fn(Rating, Rating Count, Installs, Trends)
For the same reasons indicated in the second experiment, I anticipated that more popular apps (i.e.,
apps with more ratings and more installs) would be higher ranked and less volatile in rank. This,
again, reflects the assumption that the more popular an app is, the larger the shift required to
produce a noticeable impact on its average rating or any of the other commonly accepted influencers
of its ranking.
Results
The first finding leaps straight off of the chart above: Android apps have been rated more times than
iOS apps, 15.8x more, in fact.
The average app in Google Play's Top 100 had a whopping 3.1 million ratings, while the average app
in the Apple App Store's Top 100 had 196,000 ratings. In contrast, apps in the 401-500 ranks (still
tremendously successful apps, in the 99.96th percentile of all apps) tended to have between one-tenth
(Android) and one-fifth (iOS) of the rating count of apps in the top 100 ranks.
Considering that almost two-thirds of apps don't have a single rating, reaching rating counts this
high is a huge feat, and a very strong indicator of the influence of rating count in the app store
ranking algorithms.
To even out the playing field a bit and help us visualize any correlation between ratings and rankings
(and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I've
applied a logarithmic scale to the chart above:
From this chart, we can see a correlation between ratings and rankings, such that apps with more
ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40%
correlation in Google Play.
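Here's a hedged sketch of that log transform, again with made-up numbers standing in for the real rating counts:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative rating counts and ranks, not the experiment's data
rating_counts = np.array([3_100_000, 900_000, 150_000, 40_000, 8_000])
ranks = np.array([1, 25, 80, 250, 480])

# Log-scaling tames the heavy skew in rating counts; a negative r here
# means more ratings go with a better (numerically lower) rank
r, _ = pearsonr(np.log10(rating_counts), ranks)
print(f"log10(ratings) vs. rank correlation: {r:.0%}")
```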
Next up, I looked at how rating count influenced app store ranking volatility, finding that apps with
more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive
evidence was found within the Top 100 Google Play apps.
And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a
statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)
Among the top 100 Android apps, this last experiment found that installs were heavily correlated
with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in
Google Play. Android apps with more installs also tended to have less volatile app store rankings,
with a correlation of -16.5%.
Unfortunately, these numbers are slightly skewed, as Google Play only provides install counts in
broad ranges (e.g., 500k-1M). For each app, I took the low end of the range, meaning we can likely
expect the correlation to be a little stronger, since the low end was further away from the midpoint
for apps with more installs.
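If you want to follow the same approach, here's a minimal sketch of that low-end conversion, assuming the ranges were captured as strings like "500,000 - 1,000,000" (the exact string format is my assumption, not a documented Google Play export):

```python
# Convert Google Play's bucketed install ranges into a usable number
# by taking the low end of the range, as described above.
def installs_low_end(install_range: str) -> int:
    low = install_range.split("-")[0]
    return int(low.replace(",", "").replace("+", "").strip())

print(installs_low_end("500,000 - 1,000,000"))  # -> 500000
print(installs_low_end("1,000,000+"))           # -> 1000000
```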
Summary
To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad
science experiments in app store optimization:
Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the
Apple App Store's top charts.
In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of
the average app
Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top
charts experience a much wider ratings distribution than that of Google Play's top charts
The higher an app is rated, the less volatile its rankings are
The 100 highest ranked apps in either store are updated much more frequently than the average
app, and apps with older current versions are correlated with lower ranks
An app's update recency is negatively correlated with ranking volatility in Google Play but
positively correlated with ranking volatility in the App Store. This is likely due to how Apple weighs an
app's most recent ratings and reviews.
The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest
ranked App Store apps
In both stores, apps that fall under the 401-500 ranks receive, on average, 10-20% of the rating
volume seen by apps in the top 100
Rating volume and, by extension, installs or MAUs, is perhaps the best indicator of ranks, with a 29-
40% correlation between the two
Revisiting our first (albeit oversimplified) guess at the app stores' ranking algorithm gives us this
loosely defined function:
Ranking = fn(Rating, Rating Count, Installs, Trends)
I'd now rewrite the function as a formula by weighting each of these four factors, where a, b, c, and d
are unknown multipliers, or weights:
Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)
These five experiments on ASO shed a little more light on these multipliers, showing Rating Count to
have the strongest correlation with rank, followed closely by Installs, in either app store.
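For illustration, here's how one might estimate those four weights from data via ordinary least squares. Everything here--the feature scaling, the rank score, the numbers themselves--is a placeholder assumption, not the stores' actual math:

```python
import numpy as np

# Columns: rating, rating count (log-scaled), installs (log-scaled), trend
X = np.array([
    [4.5, 6.5, 7.0, 0.8],
    [4.1, 5.2, 6.1, 0.3],
    [3.8, 4.8, 5.5, 0.1],
    [4.7, 6.9, 7.3, 0.6],
])
rank_score = np.array([0.95, 0.60, 0.40, 0.90])  # higher = ranked better

# Solve for the weights a, b, c, d in the weighted-sum model above
weights, *_ = np.linalg.lstsq(X, rank_score, rcond=None)
for name, w in zip(["Rating", "Rating Count", "Installs", "Trends"], weights):
    print(f"{name}: {w:.3f}")
```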
It's with the other two factors--rating and trends--that the two stores show the greatest discrepancy.
I'd hazard a guess that the App Store prioritizes growth trends over ratings, given the
importance it places on an app's current version and the wide distribution of ratings across its top
charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps
essentially need at least four stars to make the top 100 ranks.
Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts
in either store:
Weight of factors in the Apple App Store ranking algorithm: Rating Count > Installs > Trends > Rating
Weight of factors in the Google Play ranking algorithm: Rating Count > Installs > Rating > Trends
Again, we're oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional
factors including keyword density and in-app engagement statistics continue to be strong indicators
of ranks. They simply lie outside the scope of these experiments.
I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see
ASOs conducting the same experiments that have brought SEO to the center stage, and encourage
you to enhance or refute these findings with your own ASO mad science experiments.
Please share your thoughts in the comments below, and let's deconstruct the ranking formula
together, one experiment at a time.
https://moz.com/ugc/app-store-rankings-formula-deconstructed-in-5-mad-science-experiments
