Is Twitter’s plan to log all clicks a privacy loss?

September 2nd, 2010

Twitter’s planned shortening of all links via its t.co service is about to happen. The initial motivation was security, according to Twitter:

“Twitter’s link service at http://t.co is used to better protect users from malicious sites that engage in spreading malware, phishing attacks, and other harmful activity. A link converted by Twitter’s link service is checked against a list of potentially dangerous sites. When there’s a match, users can be warned before they continue.”

Declan McCullagh reports that Twitter announced in an email message that when someone clicks “on these links from Twitter.com or a Twitter application, Twitter will log that click.” Such information is extremely valuable. Given Twitter’s tens of millions of active users, just knowing how often certain URLs are clicked indicates what entities and topics are of interest at the moment.

“Our link service will also be used to measure information like how many times a link has been clicked. Eventually, this information will become an important quality signal for our Resonance algorithm—the way we determine if a Tweet is relevant and interesting.”

Associating the clicks with a user, IP address, location or device can yield even more information — like what you are interested in right now. Moreover, Twitter now has a way to associate arbitrary annotation metadata with each tweet. Analyzing all of this data can identify, for example, communities of users with common interests and the influential members within them.
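
For concreteness, here is a minimal sketch of how a link-wrapping service could log clicks before redirecting. This is not Twitter’s implementation; the short-code table and log fields are assumptions about what such a service can see.

```python
# A minimal sketch (not Twitter's actual code) of how a link-wrapping
# service could log a click before redirecting.  The short-code table
# and the log fields are assumptions about what such a service can see.
import time
from flask import Flask, redirect, request

app = Flask(__name__)

# Hypothetical mapping from a short code to the original long URL.
SHORT_URLS = {"abc123": "http://example.com/some/long/article"}

@app.route("/<code>")
def follow(code):
    target = SHORT_URLS[code]
    click_record = {
        "ts": time.time(),                            # when the click happened
        "code": code,                                 # which wrapped link
        "target": target,                             # where it points
        "ip": request.remote_addr,                    # hints at location
        "ua": request.headers.get("User-Agent", ""),  # hints at device/app
    }
    print(click_record)        # stand-in for a real logging pipeline
    return redirect(target, code=301)
```

Even this toy version captures enough (timestamp, IP address, user agent) to support the kinds of inference described above.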

Note that Twitter has not said it will do this or even that it will record and keep any user-identifiable information along with the clicks. They might just log the aggregate number of clicks in a window of time. But going the next step and capturing the additional information would be, to my mind, irresistible, even if there were no immediate plan to use it.

Search engines like Google already link clicks to users and IP addresses and use the information to improve their ranking algorithms and probably in many other ways. But what is troubling is the seemingly inexorable erosion of our online privacy. There will be no way to opt out of having your link wrapped by the t.co service and no announced way to opt out of having your clicks logged.


Usability determines password policy

August 16th, 2010

Some online sites let you use any old five-character string as your password for as long as you like. Others force you to pick a new password every six months and it has to match a complicated set of requirements — at least eight characters, mixed case, containing digits, letters, punctuation and at least one umlaut. Also, it better not contain any substrings that are legal Scrabble words or match any past password you’ve used since the Bush 41 administration.
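
In code, a checker for that second kind of policy might look something like this sketch (the rules are the caricature above, umlaut included; the Scrabble-word rule is left as an exercise):

```python
# An illustrative checker for the kind of baroque policy described above.
# The rules are the caricature from the paragraph, umlaut included;
# the no-Scrabble-substrings rule is left as an exercise.
import re

def acceptable(password, past_passwords):
    checks = [
        len(password) >= 8,                               # minimum length
        re.search(r"[a-z]", password) is not None,        # lower case
        re.search(r"[A-Z]", password) is not None,        # upper case
        re.search(r"[0-9]", password) is not None,        # a digit
        re.search(r"[^0-9A-Za-z]", password) is not None, # punctuation
        re.search(r"[äöüÄÖÜ]", password) is not None,     # at least one umlaut
        password not in past_passwords,                   # no reuse, ever
    ]
    return all(checks)

print(acceptable("Fün7times!", set()))   # True
print(acceptable("password", set()))     # False, several times over
```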

A recent paper by two researchers from Microsoft concludes that an organization’s usability requirements, not its security needs, are the main factor determining the complexity of its password policy.

Dinei Florencio and Cormac Herley, Where Do Security Policies Come From?, Symposium on Usable Privacy and Security (SOUPS), 14–16 July 2010, Redmond.

We examine the password policies of 75 different websites. Our goal is to understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links and where the user has a choice show strong inverse correlation with strength.

We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement.

h/t Bruce Schneier


An ontology of social media data for better privacy policies

August 15th, 2010

Privacy continues to be an important topic surrounding social media systems. A big part of the problem is that virtually all of us have a difficult time thinking about what information about us is exposed and to whom and for how long. As UMBC colleague Zeynep Tufekci points out, our intuitions in such matters come from experiences in the physical world, a place whose physics differs considerably from the cyber world.

Bruce Schneier offered a taxonomy of social networking data in a short article in the July/August issue of IEEE Security & Privacy. A version of the article, A Taxonomy of Social Networking Data, is available on his site.

“Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again — revised — at an OECD workshop on the role of Internet intermediaries in June.

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.”

I think most of us understand the first two categories and can easily choose or specify a privacy policy to control access to information in them. The rest, however, are more difficult to think about and can lead to a lot of confusion when people set up their privacy preferences.

As an example, I saw some nice work at the 2010 IEEE International Symposium on Policies for Distributed Systems and Networks on “Collaborative Privacy Policy Authoring in a Social Networking Context” by Ryan Wishart et al. from Imperial College that addressed the problem of incidental data in Facebook. For example, if I post a picture and tag others in it, each of the tagged people can contribute additional policy constraints that can narrow access to it.

Lorrie Cranor gave an invited talk at the workshop on Building a Better Privacy Policy and made the point that even P3P privacy policies are difficult for people to comprehend.

Having a simple ontology for social media data could help us move forward toward better privacy controls for online social media systems. I like Schneier’s broad categories and wonder what a more complete treatment defined using Semantic Web languages might be like.
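
As a strawman, here is what a first cut might look like: Schneier’s six categories as a tiny RDFS class hierarchy, sketched with Python’s rdflib (the namespace URI is made up for illustration):

```python
# A strawman: Schneier's six categories as a tiny RDFS class hierarchy,
# built with rdflib.  The namespace URI is made up for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SMD = Namespace("http://example.org/socialMediaData#")
g = Graph()
g.bind("smd", SMD)

g.add((SMD.SocialMediaData, RDF.type, RDFS.Class))
for name, comment in [
    ("ServiceData",    "Data you give a site in order to use it."),
    ("DisclosedData",  "What you post on your own pages."),
    ("EntrustedData",  "What you post on other people's pages."),
    ("IncidentalData", "What other people post about you."),
    ("BehavioralData", "What the site records about your habits."),
    ("DerivedData",    "Data about you inferred from all the rest."),
]:
    cls = SMD[name]
    g.add((cls, RDF.type, RDFS.Class))
    g.add((cls, RDFS.subClassOf, SMD.SocialMediaData))
    g.add((cls, RDFS.comment, Literal(comment)))

print(g.serialize(format="turtle"))
```

A fuller treatment would add properties relating each piece of data to its subject, its creator and the party who controls it, which is exactly where categories like entrusted and incidental data differ.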


Apple Safari can expose your private data

July 22nd, 2010

Apple’s Safari browser has a privacy vulnerability allowing web sites you visit to extract your personal information (e.g., name, address, phone number) from your computer’s address book. The fix is to turn off Safari’s web form AutoFill feature, which is enabled by default (Preferences > AutoFill > AutoFill web forms).



It’s an interesting JavaScript exploit that does not seem to affect other browsers.


Speed up your Web access with namebench

June 5th, 2010

Here’s a quick trick that could significantly speed up your Web surfing. Download and run the open source namebench on your computer. It does a thorough test of your current DNS servers and some other popular global and regional alternatives, produces a good report and recommends which ones you should use.

Here is how namebench describes what it does:

“namebench looks for the fastest DNS (Domain Name System) servers accessible to your computer. You can think of a DNS server as a phone book: When you want to dial a company on the phone, you may have to flip through a phone book by name to find their phone number. On the Internet, when you want to visit “www.google.com”, a DNS server needs to look up the correct IP Address for you.

Over the course of loading a single web page, your computer may need to look up a dozen of these addresses. While your Internet provider usually automatically assigns you one of their servers to handle looking up these addresses, there may be others that are significantly faster. namebench finds them.”

Namebench also points out which DNS servers do DNS hijacking — typically by intercepting the error response produced by entering a mistyped URL (e.g., http://umbc.edo/) and redirecting you to a page full of ads and “helpful” search results. Some name servers, like OpenDNS, will also automatically correct some mistyped URLs, e.g., guessing that when you typed http://umbc.edi/ you meant to type http://umbc.edu/. (Shades of DWIM!) It’s not dangerous, and it is one way private DNS services like OpenDNS get revenue to support the service and make a profit.
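
Here is a rough sketch of both measurements, lookup timing and the NXDOMAIN hijacking check, using the dnspython library. The resolver IPs and test names are just examples, and a real benchmark like namebench runs many queries and averages the results:

```python
# A rough sketch of what namebench measures, using dnspython
# (pip install dnspython).  Resolver IPs and test names are examples;
# a real benchmark runs many queries and averages the timings.
import time
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "OpenDNS": "208.67.222.222"}

for name, ip in RESOLVERS.items():
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [ip]
    r.lifetime = 3.0

    start = time.time()
    r.resolve("www.google.com")      # time a typical lookup
    print(f"{name}: {(time.time() - start) * 1000:.1f} ms")

    # Hijack check: a bogus name should raise NXDOMAIN; if it resolves,
    # the server is redirecting typos somewhere (probably an ad page).
    try:
        r.resolve("umbc.edo")        # the deliberately mistyped URL above
        print(f"{name}: hijacks NXDOMAIN")
    except dns.resolver.NXDOMAIN:
        print(f"{name}: returns NXDOMAIN honestly")
```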

I have been using OpenDNS because it’s the fastest (for me) and I don’t mind their NXDOMAIN hijacking. But I learned from namebench that OpenDNS reroutes www.google.com to google.navigation.opendns.com, which then redirects HTTP GET requests onward, in my case ending up at http://www.google.de/. (And Google itself redirects HTTP GET requests for http://google.com/ to http://www.google.com/.) I’ll admit I am a bit confused by this. I imagine they do this to capture queries sent to Google, which provide very useful information even in the aggregate. OpenDNS says that they are doing this to correct a problem with Google-specific software installed on Dell computers. They do not seem to be doing this for Microsoft’s Bing search engine, which does lend some credence to the claim. I plan on digging into this more to fully understand what is going on and why.

Namebench runs on Macs, Windows and UNIX, and has both a command line and graphical user interface. See the namebench FAQ for more information.


foaf:mbox_sha1sum considered harmful

December 17th, 2009

The foaf:mbox property is very useful since it is ‘inverse functional’ and can thus serve as an ID for a foaf individual. This lets us infer that two foaf profiles with the same mbox refer to the same person.

Since publishing your email address invites spam, many people use the foaf:mbox_sha1sum property instead of mbox. mbox_sha1sum is also inverse functional but doesn’t reveal your private information (i.e., email address).

Abell on developer.it has an interesting post, Gravatars: why publishing your email’s hash is not a good idea, that shows how to crack an MD5 hash of a person’s email address given a little information about the person. (Note: the Gravatar service provides globally recognized avatars.)

The idea exploits the fact that a few free email services (e.g., Gmail, Hotmail, Yahoo, AOL) account for a large fraction of email addresses and that a person’s full name yields likely username possibilities. Given an email hash and a person’s first and last name, one can generate hashes of likely email addresses until a match is found.
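
The attack is simple enough to sketch in a few lines of Python (Abell’s actual program was in Haskell; the username patterns and domains below are illustrative):

```python
# A sketch of the guessing attack (Abell's real program was in Haskell).
# The username patterns and domains below are illustrative only.
import hashlib

def candidates(first, last):
    first, last = first.lower(), last.lower()
    patterns = [f"{first}.{last}", f"{first}{last}",
                f"{first[0]}{last}", f"{first}_{last}"]
    domains = ["gmail.com", "hotmail.com", "yahoo.com", "aol.com"]
    for p in patterns:
        for d in domains:
            yield f"{p}@{d}"

def crack(target_sha1, first, last):
    """Return the email behind a foaf:mbox_sha1sum, if we can guess it."""
    for email in candidates(first, last):
        # foaf:mbox_sha1sum is the SHA1 of the full mailto: URI.
        if hashlib.sha1(f"mailto:{email}".encode()).hexdigest() == target_sha1:
            return email
    return None

target = hashlib.sha1(b"mailto:jane.doe@gmail.com").hexdigest()
print(crack(target, "Jane", "Doe"))   # jane.doe@gmail.com
```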

Abell was able (!) to crack 10% of the email addresses for 80,871 stackoverflow.com users in an hour with a simple Haskell program.

The same attack can be used on foaf:mbox_sha1sum properties, especially since a foaf profile very handily provides other useful information about the person. Given the extra information available in many foaf profiles (e.g., nick, school homepage), one might even expect better results.

As vulnerabilities go, this doesn’t seem like a very dangerous one. The use of mbox_sha1sum is usually justified as a way to avoid having your email address harvested by spambots. I doubt that spammers would think it productive to spend an hour of computing time to get 1000 email addresses.


EU approves law requiring user consent for Web cookies

November 13th, 2009

This ought to be fun.

According to an article in the WSJ, Europe Approves New Cookie Law, “the Council of the European Union has approved new legislation that would require Web users to consent to Internet cookies.”

The law could have broad repercussions for online ads. “Almost every site that carries advertising should be seeking its visitors’ consent to the serving of cookies,” wrote Struan Robertson, a lawyer specializing in technology at Pinsent Masons and editor of Out-Law.com. “It also catches sites that count visitors — so if your site uses Google Analytics or WebTrends, you’re caught.”

This hit slashdot (“Breathtakingly Stupid” EU Cookie Law Passes) this morning.

By the way, our ebiquity site uses cookies. Send mail to no-more-ebiquity-cookies at cs.umbc.edu if you want to opt out.

Hmmmm. I wonder how we would implement cookie opt-out. I think setting a cookie to indicate that the user has opted out of your site’s cookies would be a good approach.
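
Something like this sketch, perhaps (using Flask for illustration; the cookie name is made up):

```python
# The ironic fix, sketched with Flask: remember that a visitor opted
# out of cookies... in a cookie.  The cookie name is made up.
from flask import Flask, g, make_response, request

app = Flask(__name__)

@app.before_request
def check_opt_out():
    # Handlers can consult g.opted_out before setting any other cookie.
    g.opted_out = request.cookies.get("cookie_opt_out") == "1"

@app.route("/no-more-cookies")
def opt_out():
    resp = make_response("You have opted out of our cookies.")
    resp.set_cookie("cookie_opt_out", "1", max_age=10 * 365 * 86400)
    return resp
```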


Can cloud computing be entirely trusted?

November 10th, 2009

The Economist has been running a series of online Oxford Union style debates on topical issues — CEO pay, healthcare, climate change, etc. The latest is on cloud computing: This house believes that the cloud can’t be entirely trusted.

In his opening remarks, moderator Ludwig Siegele says

“The participants in this debate, including the three guest speakers, all agree that computing is moving into the cloud. “We are experiencing a disruptive moment in the history of technology, with the expansion of the role of the internet and the advent of cloud-based computing”, says Stephen Elop, president of Microsoft’s business division, which generates about a third of the firm’s revenues ($13 billion) and more than half of its profits ($4.5 billion) in the most recent quarter. Marc Benioff, chief executive of Salesforce.com, the world’s largest SaaS provider with over $1.2 billion in sales in the past 12 months, is no less bullish: ‘Like the shift [from the mainframe to the client/server architecture] that roiled our industry in decades past, the transition to cloud computing is happening now because of major discontinuities in cost, value and function.'”

While the debate’s proposition suggests that security or privacy is its focus, it’s really a broader argument about how software services will be delivered in the future in which security is just one aspect.

“Whether and to what extent companies and consumers elect to hand their computing over to others, of course, depends on how much they trust the cloud. And customers still have many questions. How reliable are such services? What about privacy? Don’t I lose too much control? What if Salesforce.com, for instance, changes its service in a way I do not like? Are such web-based services really cheaper than traditional software? And how easy is it to get my data if I want to change providers? Are there open technical standards that would make this easier?”


Dashboard shows data Google has about you

November 5th, 2009

Google added a great new service, Dashboard, that summarizes the data stored for a Google account — see My Account > Personal Settings > Dashboard.

“Designed to be simple and useful, the Dashboard summarizes data for each product that you use (when signed in to your account) and provides you direct links to control your personal settings. Today, the Dashboard covers more than 20 products and services, including Gmail, Calendar, Docs, Web History, Orkut, YouTube, Picasa, Talk, Reader, Alerts, Latitude and many more. The scale and level of detail of the Dashboard is unprecedented, and we’re delighted to be the first Internet company to offer this — and we hope it will become the standard.”

This is a good move on Google’s part. But while there’s a lot of information included, it’s not everything that Google knows about you — e.g., data in cookies, click-through data from search results and information from companies it has acquired, like DoubleClick. Still, it is a big step in a positive direction.


Gaydar, Facebook and privacy

October 6th, 2009

In the Fall of 2007, two MIT students carried out a class project exploring how presumably private data could be inferred from an online social networking system. Their experiment was to predict the sexual orientation of Facebook users who make their basic information public by analyzing friendship associations. As reported in the Boston Globe last month, the students had not yet published their results.

Well, now they have — in the October issue of First Monday, “one of the first openly accessible, peer–reviewed journals on the Internet”.

The paper has a lot of detail on the methodology for collecting the data and how it was analyzed. Here’s the abstract.

“Public information about one’s coworkers, friends, family, and acquaintances, as well as one’s associations with them, implicitly reveals private information. Social networking Web sites, e–mail, instant messaging, telephone, and VoIP are all technologies steeped in network data — data relating one person to another. Network data shifts the locus of information control away from individuals, as the individual’s traditional and absolute discretion is replaced by that of his social network. Our research demonstrates a method for accurately predicting the sexual orientation of Facebook users by analyzing friendship associations. After analyzing 4,080 Facebook profiles from the MIT network, we determined that the percentage of a given user’s friends who self–identify as gay male is strongly correlated with the sexual orientation of that user, and we developed a logistic regression classifier with strong predictive power. Although we studied Facebook friendship ties, network data is pervasive in the broader context of computer–mediated communication, raising significant privacy issues for communication technologies to which there are no neat solutions.”

As we had previously noted, this datamining exercise only accesses information that Facebook users explicitly choose to make public. The authors note that their analysis “relies on public self–identification of same–gender interest in Facebook profiles as a sentinel value for LGB identity”. The privacy vulnerability is that the default setting for a Facebook account makes friendship relations public, and you cannot control the privacy settings of your friends. So if you leave your friend list public and many of your Facebook friends open up their profiles, it may be possible to draw reasonable inferences about your age, gender, political leanings, sexual preferences and other attributes.
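
To make the methodology concrete, here is a toy sketch of that kind of classifier: logistic regression on a single feature, the fraction of a user’s friends who self-identify as gay male. It uses scikit-learn and made-up numbers; the paper’s actual feature construction and evaluation are more involved:

```python
# A toy version of the kind of classifier the paper describes:
# logistic regression on one feature, the fraction of a user's friends
# who self-identify as gay male.  Data here is made up; the paper's
# actual features and fitting procedure differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.02], [0.05], [0.01], [0.40], [0.55], [0.35]])
y = np.array([0, 0, 0, 1, 1, 1])   # 1 = self-identifies as gay male

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.30]]))  # class probabilities for a new user
```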


Privacy concerns about new Netflix Prize data

September 22nd, 2009

The New York Times reports that the data for the Netflix Prize 2 will include more information about the anonymous users:

“Netflix was so pleased with the results of its first contest that it announced a second one on Monday. The new contest will present contestants with demographic and behavioral data, including renters’ ages, gender, ZIP codes, genre ratings and previously chosen movies — but not ratings. Contestants will then have to predict which movies those people will like.”

As others have noted, this will make it much easier to “de-anonymize” individuals in the collection.

As an experiment, I checked the ZIP code where I grew up and found that it had about 3,900 people in the 2000 census. So, given an age and gender, you would have a set of about 40 people (3,900 people split over two genders and roughly fifty plausible ages). With just a little bit of additional information, one could narrow this to a specific individual.
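
The linkage idea is easy to sketch: filter a population table on the quasi-identifiers ZIP code, age and gender and see how few candidates remain (the records below are fabricated):

```python
# The linkage idea in miniature: filter a population table on the
# quasi-identifiers ZIP, age and gender.  All records are fabricated.
population = [
    ("21250", 34, "M", "person-1"),
    ("21250", 34, "M", "person-2"),
    ("21250", 34, "F", "person-3"),
    ("21250", 61, "M", "person-4"),
]

def anonymity_set(zip_code, age, gender):
    """Everyone who matches the given quasi-identifiers."""
    return [p for p in population if p[:3] == (zip_code, age, gender)]

print(anonymity_set("21250", 34, "M"))   # the few left to tell apart
```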

For example, Narayanan and Shmatikov showed (Robust De-anonymization of Large Sparse Datasets) that this could be done with the dataset from the first Netflix Grand Prize by mining information from IMDB. Think of how much more powerful such attacks would be with the new dataset.


Project Gaydar and privacy in Facebook and other online social networking systems

September 20th, 2009

Today’s Boston Globe has an article on online privacy provocatively titled Project ‘Gaydar’ that leads with the story of a class experiment done by two MIT students on predicting sexual orientation from social network information.

“Using data from the social network Facebook, they made a striking discovery: just by looking at a person’s online friends, they could predict whether the person was gay. They did this with a software program that looked at the gender and sexuality of a person’s friends and, using statistical analysis, made a prediction. The two students had no way of checking all of their predictions, but based on their own knowledge outside the Facebook world, their computer program appeared quite accurate for men, they said.”

I suspect that many will read the article and think that such an analysis can be easily done on their own Facebook information. While I’m not a Facebook expert, I assume that the vast majority of its users employ the default privacy settings which do not allow non-friends to see personal information including gender and the ‘interested in’ attribute, which can be used as a proxy for sexual orientation.

Still, the problem of protecting privacy in online social networking systems is a very real one. The Boston Globe story also mentions work by Murat Kantarcioglu on predicting political affiliations (see Inferring Private Information Using Social Network Data).

“He and a student – who later went to work for Facebook – took 167,000 profiles and 3 million links between people from the Dallas-Fort Worth network. They used three methods to predict a person’s political views. One prediction model used only the details in their profiles. Another used only friendship links. And the third combined the two sets of data. The researchers found that certain traits, such as knowing what groups people belonged to or their favorite music, were quite predictive of political affiliation. But they also found that they did better than a random guess when only using friendship connections. The best results came from combining the two approaches.”

The article also mentions Lise Getoor’s work on discovering private information by integrating data across Facebook, Flickr, Dogster and BibSonomy (see To Join or not to Join: The Illusion of Privacy in Social Networks with Mixed Public and Private User Profiles).

“Those researchers blinded themselves to the profiles of half the people in each network, and launched a variety of “attacks” on the networks, to see what private information they could glean by simply looking at things like groups people belonged to, and their friendship links. On each network, at least one attack worked. Researchers could predict where Flickr users lived; Facebook users’ gender, a dog’s breed, and whether someone was likely to be a spammer on BibSonomy. The authors found that membership in a group gave away a significant amount of information, but also found that predictions using friend links weren’t as strong as they expected. “Using friends in classifying people has to be treated with care,” computer scientists Lise Getoor and Elena Zheleva wrote.”