The Accuracy of the Demographic Inferences Shown on Google’s Ad Settings (WPES ’18)

Abstract Google’s Ad Settings shows the gender and age that Google has inferred about a web user. We compare the inferred values to the self-reported values of 501 survey participants. We find that Google often does not show an inference, but when it does, it is typically correct. We explore which usage characteristics, such as using privacy-enhancing technologies, are associated with Google’s accuracy, but find no significant results.

Citation: Michael Carl Tschantz, Serge Egelman, Jaeyoung Choi, Nicholas…

Better Late(r) than Never: Increasing Cyber-Security Compliance by Reducing Present Bias (WEIS ’18)

Abstract Despite recent advances in increasing computer security by eliminating human involvement and error, there are still situations in which humans must manually perform computer security tasks, such as enabling automatic updates, rebooting machines to apply some of those updates, or enrolling in two-factor authentication. We argue that present bias—the tendency to discount future risks and gains in favor of immediate gratifications—could be the root cause explaining why many users…

“What Can’t Data Be Used For?” Privacy Expectations about Smart TVs in the U.S. (EuroUSEC ’18)

Abstract Smart TVs have rapidly become the most common smart appliance in typical households. In the U.S., most television sets on the market have advanced sensors not traditionally found on conventional TVs, such as a microphone for voice commands or a camera for photo or video input. These new sensors enable features that are convenient, but they may also introduce new privacy implications. We surveyed 591 U.S. Internet users about…

“Won’t Somebody Think of the Children?” Examining COPPA Compliance at Scale (PETS ’18)

Abstract We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in…

Contextualizing Privacy Decisions for Better Prediction (and Protection) (CHI ’18)

Abstract Modern mobile operating systems implement an ask-on-first-use policy to regulate applications’ access to private user data: the user is prompted to allow or deny access to a sensitive resource the first time an app attempts to use it. Prior research shows that this model may not adequately capture user privacy preferences because subsequent requests may occur under varying contexts. To address this shortcoming, we implemented a novel privacy management…

An Experience Sampling Study of User Reactions to Browser Warnings in the Field (CHI ’18)

Abstract Web browser warnings should help protect people from malware, phishing, and network attacks. Adhering to warnings keeps people safer online. Recent improvements in warning design have raised adherence rates, but they could still be higher. And prior work suggests many people still do not understand them. Thus, two challenges remain: increasing both comprehension and adherence rates. To dig deeper into user decision making and comprehension of warnings, we performed…

A Usability Evaluation of Tor Launcher (PETS ’17)

Abstract Although Tor has state-of-the art anti-censorship measures, users in heavily censored environments will likely not be able to connect to Tor because they cannot make the correct decisions during the configuration process. We perform the first usability evaluation of Tor Launcher, the graphical user interface (GUI) that Tor Browser uses to configure connections to Tor. Our study shows that 79% (363 of 458) of user attempts to connect to…

Let’s Go in for a Closer Look: Observing Passwords in Their Natural Habitat (CCS ’17)

Abstract Text passwords—a frequent vector for account compromise, yet still ubiquitous—have been studied for decades by researchers attempting to determine how to coerce users to create passwords that are hard for attackers to guess but still easy for users to type and memorize. Most studies examine one password or a small number of passwords per user, and studies often rely on passwords created solely for the purpose of the study…

TurtleGuard: Helping Android Users Apply Contextual Privacy Preferences (SOUPS ’17)

Abstract Current mobile platforms provide privacy management interfaces to regulate how applications access sensitive data. Prior research has shown how these interfaces are insufficient from a usability standpoint: they do not account for context. In allowing for more contextual decisions, machine-learning techniques have shown great promise for designing systems that automatically make privacy decisions on behalf of the user. However, if such decisions are made automatically, then feedback mechanisms are…

The Feasibility of Dynamically Granted Permissions: Aligning Mobile Privacy with User Preferences (Oakland ’17)

Abstract Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different than the circumstances under which it subsequently requests access. We performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions…