Building A Better Monkey: Intelligently Exploring App Code Paths

Our Android testbed relies on randomly generated UI events (random taps, swipes, etc.) to exercise apps. The main limitation of this approach is coverage: random input misses many code paths, and the "monkey" gets stuck whenever it encounters a text entry field that requires meaningful input. Meanwhile, existing research on fuzzing has found that this kind of exploration can be substantially improved through static analysis and machine learning.
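
For reference, a minimal sketch of how the current random approach can be driven over adb, assuming a connected test device; the package name `com.example.app`, seed, and event counts are placeholders:

```python
import random
import subprocess

def adb(*args: str) -> str:
    """Run an adb command against the connected test device and return stdout."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

def random_monkey(package: str, num_events: int = 500, seed: int = 42) -> None:
    """Drive the stock Android monkey: purely random taps, swipes, and system keys."""
    adb("shell", "monkey", "-p", package, "-s", str(seed), "-v", str(num_events))

def random_taps(width: int = 1080, height: int = 1920, n: int = 50) -> None:
    """Supplement the monkey with explicit random taps via `input tap`."""
    for _ in range(n):
        x, y = random.randrange(width), random.randrange(height)
        adb("shell", "input", "tap", str(x), str(y))

if __name__ == "__main__":
    random_monkey("com.example.app")  # hypothetical package name
    random_taps()
```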


Research goal: Build a better automated tester. Static analysis can identify which UI elements correspond to distinct code paths, so that during execution we can detect when those elements are shown and direct the monkey to interact with them. Relatedly, we should use machine learning to automatically detect when a text entry field is on screen and, ideally, predict what type of data it expects (e.g., an email address versus a search query). I would also like to automatically detect when privacy disclosures are shown on screen, so that we can recognize and screenshot them.
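
As a rough starting point for the runtime-detection side, the sketch below finds text entry fields in the current UI hierarchy and takes a screenshot when candidate disclosure text appears. It assumes `adb`, `uiautomator dump`, and `screencap` are available on the device; the keyword list is purely illustrative and would be replaced by a trained classifier:

```python
import subprocess
import xml.etree.ElementTree as ET

DISCLOSURE_KEYWORDS = ("privacy", "consent", "personal data")  # illustrative only

def dump_ui_hierarchy() -> ET.Element:
    """Dump the current UI hierarchy with uiautomator and parse it."""
    subprocess.run(["adb", "shell", "uiautomator", "dump", "/sdcard/ui.xml"], check=True)
    xml = subprocess.run(["adb", "shell", "cat", "/sdcard/ui.xml"],
                         capture_output=True, text=True, check=True).stdout
    return ET.fromstring(xml)

def find_text_fields(root: ET.Element) -> list[dict]:
    """Return the hint text and bounds of every EditText-like node on screen."""
    fields = []
    for node in root.iter("node"):
        if "EditText" in node.get("class", ""):
            fields.append({"hint": node.get("text", ""), "bounds": node.get("bounds", "")})
    return fields

def looks_like_disclosure(root: ET.Element) -> bool:
    """Naive keyword check; a trained text classifier would replace this."""
    text = " ".join(node.get("text", "") for node in root.iter("node")).lower()
    return any(kw in text for kw in DISCLOSURE_KEYWORDS)

def screenshot(path: str) -> None:
    """Save a PNG screenshot of the device screen."""
    png = subprocess.run(["adb", "exec-out", "screencap", "-p"],
                         capture_output=True, check=True).stdout
    with open(path, "wb") as f:
        f.write(png)

if __name__ == "__main__":
    root = dump_ui_hierarchy()
    print("Text fields:", find_text_fields(root))
    if looks_like_disclosure(root):
        screenshot("disclosure.png")
```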

Potential studies:

  • Measure privacy behaviors (e.g., transmissions to different domains) to quantify which technique (e.g., old vs. new monkey) uncovers more bad behaviors; a sketch of this comparison follows the list.
  • Recognize in-app disclosures (e.g., popup notices about privacy) and determine whether they match observed app behavior.
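
As a rough illustration of the first study's comparison, the sketch below assumes each monkey run yields a list of destination hostnames observed on the network (how they are captured, e.g., via a proxy or tcpdump, is out of scope); all hostnames shown are made up:

```python
from collections import Counter

def contacted_domains(transmissions: list[str]) -> Counter:
    """Count transmissions per destination domain for one monkey run."""
    return Counter(host.lower() for host in transmissions)

def compare_runs(old_run: list[str], new_run: list[str]) -> dict:
    """Report which destination domains each monkey variant uncovered."""
    old, new = contacted_domains(old_run), contacted_domains(new_run)
    return {
        "only_old": sorted(set(old) - set(new)),
        "only_new": sorted(set(new) - set(old)),
        "shared": sorted(set(old) & set(new)),
    }

# Example with made-up hostnames:
print(compare_runs(
    ["ads.example.net", "api.example.com"],
    ["ads.example.net", "api.example.com", "tracker.example.org"],
))
```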