Mobile Field Testing Strategy – Part #3
Guest Post by Dean Murphy
Before continuing with this part of the guide, you may want to head back to the previous chapters:
Execute and Review:
Welcome back! In previous weeks we explained that testing your mobile solution under lab conditions alone can lead to problems when the solution is piloted or goes live, and we looked at the activities you should consider when Analyzing and Planning for field testing.
This week we look at executing your field test plan and reviewing the results. It is important to treat each day as a single test unit and to conduct contiguous multi-day testing, so that you can monitor the accumulated effects of field testing and signal variability; after all, a live application will be expected to run for much longer periods of time.
At the start of each test script, it is important to record the device ID, user ID, date, time, job type, job number (if available) and signal level. During execution of the script, the tester should record timings for each step and note the time shown on the device. The tester should also record their observations, both good and bad, whenever anything unexpected occurs. All this information can be correlated with the system logs to help isolate bugs or defects, and the same conditions can be reused to test future releases.
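As an illustration, the per-script information described above could be captured in a simple structure like the following. This is a hypothetical sketch in Python; the class and field names are illustrative and do not come from any particular test-management tool.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StepTiming:
    """One timed step of a test script (names are illustrative)."""
    step: str
    device_time: str        # the time shown on the device at this step
    elapsed_seconds: float  # tester-measured duration of the step

@dataclass
class TestScriptRecord:
    """Header data recorded at the start of each test script."""
    device_id: str
    user_id: str
    date: str          # e.g. "2011-05-02"
    start_time: str    # e.g. "08:05"
    job_type: str
    signal_level: str  # e.g. "2 bars" or "-85 dBm"
    job_number: Optional[str] = None  # may not be available
    steps: list = field(default_factory=list)
    observations: list = field(default_factory=list)  # good and bad

    def log_step(self, step, device_time, elapsed_seconds):
        self.steps.append(StepTiming(step, device_time, elapsed_seconds))

    def note(self, observation):
        self.observations.append(observation)

# Example: one script run, later correlated against system logs
rec = TestScriptRecord("DEV-042", "tester7", "2011-05-02", "08:05",
                       "meter install", "2 bars", job_number="J-1001")
rec.log_step("log on", "08:05:12", 9.4)
rec.log_step("download jobs", "08:06:01", 42.7)
rec.note("Download noticeably slower in low-signal area")
```

Keeping the header fields and step timings in a consistent shape like this makes it straightforward to line records up against server-side logs, and to re-run the same conditions against a future release.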
The field testing should be conducted over a time period as close to a typical shift duration as possible and should also follow end-user usage patterns subject to the client’s requirements, e.g.:
- log on, download jobs, open first job to check address before driving to test location
- use the device for phone calls
- use navigation software to route between test sites
- charge the device whilst driving
When the completed test scripts are returned to the office the information should be analyzed as soon as possible so that any queries can be raised while matters are still fresh in the minds of the field testers. Some of this analysis will be used to support the reporting activities.
Analyze Test Script Sheets:
- Review the test script sheets and log any defects/bugs.
- Look for patterns related to performance degradation.
- Look for patterns related to solution instability.
- Record number of tests executed.
- Record number of defects/bugs found.
- Record number of test hours.
- Compare metrics to baseline.

Report Status:
Add the data from the test script sheet analysis to the status report and publish at the agreed interval. Highlight progress and findings and compare to previous reports.

Review Progress:
Review progress against the exit criteria to ascertain and plan the remaining field test iterations.
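The analysis activities above reduce to a few headline metrics. A minimal sketch of that arithmetic follows; the figures and the defects-per-hour comparison are made-up examples, not real baselines.

```python
def summarize(tests_executed, defects_found, test_hours):
    """Return the headline metrics recorded during test sheet analysis."""
    defects_per_hour = defects_found / test_hours if test_hours else 0.0
    return {
        "tests_executed": tests_executed,
        "defects_found": defects_found,
        "test_hours": test_hours,
        "defects_per_hour": round(defects_per_hour, 2),
    }

def compare_to_baseline(current, baseline):
    """Difference versus the agreed baseline for the metrics that matter."""
    return {key: current[key] - baseline[key]
            for key in ("defects_per_hour",)}

# Made-up numbers standing in for an agreed baseline and one day's results
baseline = summarize(tests_executed=40, defects_found=4, test_hours=8.0)
today = summarize(tests_executed=38, defects_found=7, test_hours=8.0)
delta = compare_to_baseline(today, baseline)
# A positive delta means defect density increased versus the baseline.
```

Tracking defect density per test hour, rather than raw counts, keeps the comparison fair when the number of tests executed varies from day to day.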
About the Author:
Dean Murphy has been working in the field of mobility for 12 years and has broad experience of many mobile implementations from push email to large scale enterprise solutions. Dean is a Solution Consultant with ClickSoftware and helps customers understand how they can solve their business problems through the use of the ClickSoftware Mobility Suite and the ServiceOptimization Suite.