Information Design in an Aged Care Context (PervasiveHealth ’19)

Abstract The adoption of technological solutions for aged care is rapidly increasing in developed countries. New technologies facilitate the sharing of health information among the “care triad”: the elderly care recipient, their family, and care staff. In order to develop user-centered technologies for this population, we believe that it is necessary to first examine their views about the sharing of health and well-being information (HWBI). Through in-depth semi-structured interviews with 12…

Uninformed but Unconcerned: Privacy Attitudes of Smart Speaker Users (PETS ’19)

Abstract As devices with always-on microphones located in people’s homes, smart speakers have significant privacy implications. We surveyed smart speaker owners about their beliefs, attitudes, and concerns about the recordings that are made and shared by their devices. To ground participants’ responses in concrete interactions, rather than collecting their opinions abstractly, we framed our survey around randomly selected recordings of saved interactions with their devices. We surveyed 116 owners of Amazon…

50 Ways to Leak Your Data: An Exploration of Apps’ Circumvention of the Android Permissions System (USENIX Sec ’19)

Abstract Modern smartphone platforms implement permission-based models to protect access to sensitive data and system resources. However, apps can circumvent the permission model and gain access to protected data without user consent by using both covert and side channels. Side channels present in the implementation of the permission system allow apps to access the data without permission; whereas covert channels enable communication between two colluding apps so that one app…

On the Ridiculousness of Notice and Consent: Contradictions in App Privacy Policies (ConPro ’19)

Abstract The dominant privacy framework of the information age relies on notions of “notice and consent.” That is, service providers will disclose, often through privacy policies, their data collection practices, and users can then consent to their terms. However, it is unlikely that most users comprehend these disclosures, which is due in no small part to ambiguous, deceptive, and misleading statements. By comparing actual collection and sharing practices to disclosures in…

Do You Get What You Pay For? Comparing the Privacy Behaviors of Free vs. Paid Apps (ConPro ’19)

Abstract It is commonly assumed that the availability of “free” mobile apps comes at the cost of consumer privacy, and that paying for apps could offer consumers protection from behavioral advertising and long-term tracking. This work empirically evaluates the validity of this assumption by investigating the degree to which “free” apps and their paid premium versions differ in their bundled code, their declared permissions, and their data collection behaviors and privacy…

A Promise Is a Promise: The Effect of Commitment Devices on Computer Security Intentions (CHI ’19)

Abstract Commitment devices, a technique from behavioral economics, have been shown to mitigate the effects of present bias—the tendency to discount future risks and gains in favor of immediate gratifications. In this paper, we explore the feasibility of using commitment devices to nudge users towards complying with varying online security mitigations. Using two online experiments, with over 1,000 participants total, we offered participants the option to be reminded or…

The Accuracy of the Demographic Inferences Shown on Google’s Ad Settings (WPES ’18)

Abstract Google’s Ad Settings shows the gender and age that Google has inferred about a web user. We compare the inferred values to the self-reported values of 501 survey participants. We find that Google often does not show an inference, but when it does, it is typically correct. We explore which usage characteristics, such as using privacy-enhancing technologies, are associated with Google’s accuracy, but found no significant results.

Citation: Michael Carl Tschantz, Serge Egelman, Jaeyoung Choi, Nicholas…

Better Late(r) than Never: Increasing Cyber-Security Compliance by Reducing Present Bias (WEIS ’18)

Abstract Despite recent advances in increasing computer security by eliminating human involvement and error, there are still situations in which humans must manually perform computer security tasks, such as enabling automatic updates, rebooting machines to apply some of those updates, or enrolling in two-factor authentication. We argue that present bias—the tendency to discount future risks and gains in favor of immediate gratifications—could be the root cause explaining why many users…

“What Can’t Data Be Used For?” Privacy Expectations about Smart TVs in the U.S. (EuroUSEC ’18)

Abstract Smart TVs have rapidly become the most common smart appliance in typical households. In the U.S., most television sets on the market have advanced sensors not traditionally found on conventional TVs, such as a microphone for voice commands or a camera for photo or video input. These new sensors enable features that are convenient, but they may also introduce new privacy implications. We surveyed 591 U.S. Internet users about…

“Won’t Somebody Think of the Children?” Examining COPPA Compliance at Scale (PETS ’18)

Abstract We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in…