Mobile Monday Mid-Atlantic: ‘Mobile Applications – Forum and Bootcamp’
In cooperation with Mid-Atlantic Diamond Ventures and Temple University, this event informed attendees on the “state of mobility” and, more importantly, its future: trends in platforms and technology. SemperCon’s Rick O’Brien presented and led a discussion on the topic “Mobile Development and Testing – Methodology, Considerations and Lessons Learned”. See our Mobile Testing white paper for more information on this critical mobile topic.
MOBILE DEVELOPMENT AND TESTING
Methodology, Considerations and Lessons Learned
A mobile tsunami has hit the consumer market hard over the last few years: smartphones now account for over 40% of all phones sold in the United States, and consumers worldwide will download an estimated 17.7 billion applications this year, up 117% from the previous year (Gartner, Jan. 2011). As smartphone, network and application capabilities continue to grow, more enterprises will move toward adopting mobile applications in search of competitive advantages.
Enterprises will face a variety of choices as they move into the mobile space. Beyond choices of phone manufacturers, operating systems and carriers, they will have to overcome many challenges as they attempt to offer quality mobile applications to their end customers or internal users. There are many variables in the mobile testing environment, and it will take a focused approach for enterprises to ensure that their products work across all available target devices and platforms without busting the QA budget.
SemperCon has its roots in mobile application development and released its first iPhone application back in early 2009. Today we have released dozens of applications on a variety of platforms for a diverse set of clients. Each application has been driven by a different set of requirements and each has resulted in a unique learning experience as new features and functionalities have been integrated, tested and released.
While the best way to minimize the chance of a poor customer experience caused by an application failing on one device or platform is to test across all possible combinations, this is neither possible nor practical in most situations. To maximize test coverage while minimizing QA costs, SemperCon has learned that it is important to think through the full development and testing process, focusing on key aspects of the anticipated application and market in addition to device and service characteristics.
SemperCon has found that it is important to first identify the following application properties and focus markets for the application in question prior to defining the test plan:
- What are the fundamental application features?
  – Backend requirements: data size, timing requirements
  – Phone feature requirements: photos, video, audio, GPS, gyro, etc.
- Who are the key constituents or targeted customers?
  – Internal customers: limited device types, central IT control, restrictions on data, sites, usage
  – External customers: unlimited devices, user demographics, target markets
Answers to the above questions will serve as the basis for decisions later in the testing process. With this Application/Market information in hand, it becomes easier to look across the matrix of Device Characteristics and define a representative device testing list that maximizes test coverage while meeting budget objectives.
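One pragmatic way to trim the device matrix is a greedy pairwise-coverage pass: at each step, pick the device that exercises the most still-untested combinations of OS, carrier and screen. The sketch below illustrates the idea; the device names and attribute values are hypothetical examples, not a recommended test matrix.

```python
from itertools import combinations

# Hypothetical device pool -- names and attributes are illustrative only.
DEVICES = [
    {"name": "Phone A", "os": "Android 2.2", "carrier": "Verizon",  "screen": "WVGA"},
    {"name": "Phone B", "os": "Android 2.3", "carrier": "AT&T",     "screen": "qHD"},
    {"name": "Phone C", "os": "Android 2.1", "carrier": "Sprint",   "screen": "HVGA"},
    {"name": "Phone D", "os": "iOS 4",       "carrier": "AT&T",     "screen": "Retina"},
    {"name": "Phone E", "os": "Android 2.2", "carrier": "T-Mobile", "screen": "WVGA"},
]

def pairs(device):
    """All attribute-value pairs a single device covers (ignoring its name)."""
    attrs = sorted(k for k in device if k != "name")
    return {((a, device[a]), (b, device[b])) for a, b in combinations(attrs, 2)}

def pick_devices(devices, budget):
    """Greedily pick up to `budget` devices that maximize pairwise coverage."""
    covered, chosen = set(), []
    pool = list(devices)
    for _ in range(budget):
        if not pool:
            break
        best = max(pool, key=lambda d: len(pairs(d) - covered))
        if not pairs(best) - covered:
            break  # remaining devices add no new coverage
        chosen.append(best["name"])
        covered |= pairs(best)
        pool.remove(best)
    return chosen
```

With a budget of three devices, the heuristic favors devices whose OS/carrier/screen combinations overlap least, which is the intuition behind a "representative" list.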
Manufacturer: While Apple iOS and Android make up about 70% of the market today, it’s still a complicated equation, as Android alone has over sixty different devices in the United States market, each with several carrier versions as well as different OS combinations.
Unlike iPhone and BlackBerry, Android OS devices are produced by multiple manufacturers and are customized by these manufacturers, resulting in UI differences that can impact application functionality. On-screen and physical controls function differently across devices; even a simple process like sending a text message or opening a web page can be customized to launch a third-party application instead of the native Android app.
OS Version: Backwards compatibility and feature support are direct OS-related issues. Apple tries hard to minimize OS issues by retaining complete control over the process and achieves adoption rates for new OS releases of roughly 50% after the first month, 70% after 3 months and 90% after 6 months. Android, on the other hand, allows the carriers to control OS introductions, and as a result many Android phones never get upgraded in their lifetime. Android today supports three major OS releases, each representing a significant market share (v2.1 ~11%, v2.2 ~45%, v2.3.3 ~38%) and each supporting different features and requiring special development and testing attention.
Carrier: In the US, Verizon and AT&T each own over 30% of the mobile market, with Sprint and T-Mobile at about 12% share each. Verizon and Sprint use CDMA radio technology, while AT&T and T-Mobile use GSM. Carrier differences range from simple issues like poor local data reception to more advanced features like simultaneous voice and data (yes on AT&T, no on Verizon). In our experience there can be significant differences between carriers and technologies for data-intensive applications, and these issues should be carefully addressed in the test plan.
Memory / CPU: Applications must gracefully handle operation under low or insufficient memory or processing power. In addition, applications with heavy graphics, fast response-time requirements and/or video may not perform within acceptable standards on weaker hardware. As an example, we have recently seen considerable differences in video/voice synchronization performance between the iPad 1 and iPad 2 due to CPU differences.
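"Graceful handling" usually means falling back to a cheaper operation instead of crashing. The sketch below shows the pattern in platform-neutral Python, catching a memory failure and retrying at lower quality; the scale factors and simulated allocation are illustrative stand-ins, not a real rendering pipeline.

```python
def render_preview(image_pixels, full_scale=4):
    """Try a full-resolution render; fall back to smaller scales if memory runs out.

    `image_pixels` and the scale ladder are hypothetical rendering parameters
    used to illustrate the fallback pattern.
    """
    for scale in (full_scale, 2, 1):
        try:
            # Simulated allocation proportional to the chosen scale.
            buffer = bytearray(image_pixels * scale)
            return scale, len(buffer)
        except MemoryError:
            continue  # degrade gracefully instead of crashing
    raise RuntimeError("cannot render even at minimum quality")
```

On a device with ample memory the first attempt succeeds; on a constrained device the same code path silently delivers a smaller preview rather than a crash report.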
Input Type: Inputs today consist of touch, keyboard and voice. Applications need to be designed to support multiple keyboard configurations. Potential test issues include missing input keys, pop-up touchscreen keyboards that obscure areas of the application, and touchscreen control bars that are misaligned. Voice is a newer input feature supported on most platforms, and it will undoubtedly see greater usage once the new iPhone Siri API is released.
Screen Resolution: Unreadable text, blurred images, misaligned screen elements and integral buttons, and items that fall off the visible screen are some of the issues that need to be tested across screens of different sizes and pixel densities.
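Some of these failures can be caught before manual testing with an automated bounds check: given each element's position and size, flag anything that falls outside the visible screen at a given resolution. A minimal sketch, where the screen names, dimensions and UI elements are illustrative assumptions:

```python
# Illustrative screen resolutions in pixels (width, height).
SCREENS = {"HVGA": (320, 480), "WVGA": (480, 800), "qHD": (540, 960)}

def elements_fit(elements, screen):
    """Return the names of elements that fall outside the visible screen.

    `elements` maps a control name to its bounding box (x, y, width, height).
    """
    w, h = SCREENS[screen]
    return [name for name, (x, y, ew, eh) in elements.items()
            if x < 0 or y < 0 or x + ew > w or y + eh > h]
```

Run against every target resolution, this kind of check turns "items that fall off the screen" from a visual inspection task into a quick regression test.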
Intangibles that also need to be considered in application design and testing:
Signal Strength/Device location: While signal strength is usually a clear indicator of the level of expected data reception, device location also plays a big part in application performance. Two ‘bars’ of signal strength in the suburbs is not always equal to two bars downtown in a concrete canyon.
Assumed size of data fields: Does the intended user have 100 or 1,000 phonebook contacts, Facebook friends/fans or Twitter followers/followees? Depending on application requirements, factors like these can directly affect application performance and are more difficult to trace or replicate in testing.
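One way to surface these assumptions early is to run the same operation against datasets of several sizes, as a test plan might. The sketch below is a hypothetical harness: the contact loader, the naive lookup and the size tiers are illustrative, not a real sync or search implementation.

```python
import time

def load_contacts(n):
    """Hypothetical loader: builds n fake contacts (stand-in for a real sync)."""
    return [{"name": f"user{i}", "phone": f"555-{i:04d}"} for i in range(n)]

def find_contact(contacts, name):
    """Naive linear search -- fine at 100 entries, suspect at 10,000."""
    for c in contacts:
        if c["name"] == name:
            return c
    return None

def check_across_sizes(sizes=(100, 1000, 10000)):
    """Run the same worst-case lookup against growing datasets."""
    results = {}
    for n in sizes:
        contacts = load_contacts(n)
        start = time.perf_counter()
        hit = find_contact(contacts, f"user{n - 1}")  # worst case: last entry
        results[n] = (hit is not None, time.perf_counter() - start)
    return results
```

Comparing the recorded timings across tiers shows whether an operation that feels instant at 100 records degrades linearly (or worse) at user-realistic sizes.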
Background applications: How many applications is the user running at the time of testing? iOS rules for background apps and logic for allocating CPU capacity have direct and sometimes unexpected negative effects on applications. Geo-location apps also need to deal with permissions and multiple sources of location information depending on available services.
Battery Power level: How much juice is left in the tank can affect application performance in unique ways. Dying batteries can produce very random, unrepeatable bugs, typically with negative consequences.
WiFi vs. 3G vs. 4G: Most phones offer multiple wireless capabilities, which can be switched manually or automatically per user preferences. Applications and testing need to account for the device’s wireless capabilities as they relate to data throughput and location-based services.
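Accounting for the active bearer often means adapting behavior to its expected throughput, for example choosing a stream quality the connection can sustain. The sketch below illustrates the pattern; the throughput figures and quality tiers are hypothetical assumptions (real-world rates vary widely by carrier and location, as noted above).

```python
# Hypothetical typical throughput (Mbps) per bearer -- real values vary widely.
TYPICAL_MBPS = {"wifi": 20.0, "4g": 6.0, "3g": 1.5}

# Quality ladder: (minimum Mbps required, stream quality label), best first.
QUALITY_LADDER = [
    (8.0, "720p"),
    (2.5, "480p"),
    (0.0, "240p"),
]

def pick_quality(connection):
    """Choose the highest stream quality the current bearer can sustain.

    Unknown bearers fall through to the lowest tier as a safe default.
    """
    mbps = TYPICAL_MBPS.get(connection, 0.0)
    for minimum, label in QUALITY_LADDER:
        if mbps >= minimum:
            return label
    return "240p"
```

A test plan can then exercise each bearer (and the unknown/offline case) explicitly rather than relying on whatever network the test device happens to be on.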
For more information or questions about mobile testing, please contact us at email@example.com