Stability is a measurement of application health and user experience. It is usually calculated in two ways to provide a complete picture: as the percentage of application sessions that are crash-free (did not end in a crash or unhandled exception), and as the percentage of daily active users who do not experience an error.
Stability Score = (total user sessions − crashed user sessions) / total user sessions
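As a sketch, the session-based stability score described above can be computed like this (the function name and sample figures are illustrative, not from the report):

```python
def stability_score(total_sessions: int, crashed_sessions: int) -> float:
    """Percentage of sessions that ended without a crash or unhandled exception."""
    if total_sessions == 0:
        return 100.0  # no sessions yet, so nothing has crashed
    return (total_sessions - crashed_sessions) / total_sessions * 100

# Example: 3,700 crashed sessions out of 1,000,000 total
print(round(stability_score(1_000_000, 3_700), 2))  # 99.63
```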
Providing an error-free experience to your customers is of utmost importance to drive conversion, engagement, and retention. Although stability is commonly a KPI owned by engineering organizations, it has a significant impact on overall business performance and growth.
To help you understand how your application compares, BugSnag has compiled data on the session-based stability of leading mobile and web applications from various market segments, including eCommerce, media and entertainment, financial services, logistics, gaming, and more.
84% of users abandon an application after seeing two crashes¹
This data can also help you determine application stability SLAs and SLOs, which are similar to the “five nines” that infrastructure and operational teams use to measure uptime and availability. This metrics-driven approach can help development teams make decisions about when to build features vs. fix bugs based on the application’s current stability.
SLO ⇢ TARGET STABILITY
SLA ⇢ CRITICAL STABILITY
The data includes applications from several mobile development platforms, including Android, iOS, React Native, and Unity. Stability scores are negatively impacted by session-ending events, which include things like crashes as well as ANRs (Application Not Responding) in Android, React Native, and Unity applications and OOMs (Out of Memory) in iOS applications.
Almost one in every 250 customers could be having a completely broken experience with the application.
Compared to the "five nines" used to set goals for uptime and availability, a median stability of 99.63% presents a significant opportunity for engineering organizations to invest in measuring and improving application stability and customer experience.
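A back-of-the-envelope calculation shows what the 99.63% median means in practice (the one-million-session figure below is an illustrative assumption, not a number from the report):

```python
median_stability = 99.63  # percent of sessions that are crash-free
unstable_fraction = (100 - median_stability) / 100

# Roughly one crashed session in every N sessions
one_in_n = round(1 / unstable_fraction)

# Crashed sessions per million sessions (illustrative volume)
crashed_per_million = round(1_000_000 * unstable_fraction)

print(one_in_n)             # 270
print(crashed_per_million)  # 3700
```

At the median, roughly 1 session in 270 ends in a crash, which is why the report describes almost one in every 250 customers as potentially having a broken experience.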
Android and iOS native applications tend to have high median stability because they are typically built by specialized developers with the expertise to understand and address stability issues effectively.
Android applications tend to have a slightly lower median stability than iOS applications because Android presents a much less constrained development environment. The fragmentation of Android devices makes applications harder to test, whereas iOS development teams only need to provide a stable experience on the limited number of devices Apple releases each year.
Due to the sandboxed environment of Unity applications, bugs that crash the entire application are much less likely to occur. Errors within individual frames may still disrupt the gaming experience, but they don't result in a full crash, which may be why Unity applications boast the highest median stability and the narrowest range.
The data includes applications from several front-end development platforms, including Angular, Backbone, Ember, React, and Vue. Causes of unhandled exceptions in web applications include a bug that prevents the entire page from rendering, an event handler bug that causes a user interaction to fail, an unhandled promise rejection, and more.
Backbone is an older and less opinionated web development framework. Development teams don’t have access to the same coding guidelines, best practices, and considerations for error handling that the other more recent development frameworks offer, which may explain the lower median stability and wider range for Backbone applications.
The stability of each application is monitored across browsers, meaning customers experiencing errors in Internet Explorer, Google Chrome, Mozilla Firefox, and other browsers are included in the overall application stability score. Errors caused by browser extensions are also included; however, most engineering organizations don’t spend resources to investigate and fix these errors since their code is not the culprit.
To understand whether the size of an engineering team has an impact on application stability, we looked at the stability of applications supported by several different engineering team sizes.
The size of an engineering team tends to scale with the age of the organization. Younger organizations prioritize product-market fit and need to release new features quickly. Maintaining high application stability is less important than capturing market share and increasing competitive advantage. This may explain why the smaller the engineering team, the lower the median stability.
Larger engineering teams are usually working on more mature applications, but maintaining consistent stability standards becomes increasingly challenging. Legacy code, compounding technical debt, and complex team structures are just a few of the hurdles that can make maintaining higher stability more difficult. This may explain why engineering organizations with more than 100 engineers have a lower median stability and a wider range.