UMBC ebiquity

Archive for the 'Policy' Category

Do not be a Gl***hole, use Face-Block.me!

March 27th, 2014, by Prajit Kumar Das, posted in Ebiquity, Google, Mobile Computing, Policy, Semantic Web, Social, Wearable Computing

If you are a Google Glass user, you might have been greeted with concerned looks or raised eyebrows in public places. There has been a lot of chatter on the "interweb" regarding the loss of privacy that results from people taking your picture with Glass without notice. Google Glass has simplified photography, but, as happens with any revolutionary technology, people are worried about the potential for misuse.

FaceBlock helps to protect the privacy of people around you by allowing them to specify whether or not they want to be included in your pictures. This new application, developed through a collaboration between researchers from the Ebiquity Research Group at the University of Maryland, Baltimore County and the Distributed Information Systems (DIS) group at the University of Zaragoza (Spain), selectively obscures the faces of people in pictures taken with Google Glass.

Comfort at the cost of Privacy?

As the saying goes, "The best camera is the one that's with you." Google Glass fits this description, as it is always available and can take a picture with a simple voice command ("Okay Glass, take a picture"). This allows users to capture spontaneous life moments effortlessly. On the flip side, it raises significant privacy concerns, as pictures can be taken without one's consent. If one does not use the device responsibly, one risks being labelled a "Glasshole". Quite recently, a Google Glass user was assaulted by patrons who objected to her wearing the device inside a bar. The list of establishments that have banned Google Glass from their premises grows by the day. The dos and don'ts for Glass users released by Google are a good first step, but they don't solve the problem of privacy violation.

[Image: FaceBlock running on Google Glass]

Privacy-Aware pictures to the rescue

FaceBlock takes regular pictures taken with your smartphone or Google Glass as input and converts them into privacy-aware pictures. The output is generated using a combination of face detection and face recognition algorithms. Using FaceBlock, a user can take a picture of herself and specify her policy/rule regarding pictures taken by others (in this case, "obscure my face in pictures from strangers"). The application automatically generates a face identifier from this picture. The identifier is a mathematical representation of the image. To learn more about how FaceBlock works, watch the following video.
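To make the idea of a face identifier concrete, here is a minimal sketch in Python using OpenCV. It is not FaceBlock's actual code: the identifier below is just a normalized, downscaled face crop standing in for the Eigenface-style representation the system computes.

```python
# Minimal sketch (not FaceBlock's actual code): detect the largest face in a
# picture with OpenCV's Haar cascade and reduce it to a small normalized vector
# that stands in for the "face identifier" described above. A real system would
# use an Eigenface or other face-recognition model rather than raw pixels.
import cv2
import numpy as np

_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_identifier(image_path, size=(32, 32)):
    """Return a unit-length vector summarizing the largest face in the image."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # pick the largest detection
    face = cv2.resize(gray[y:y + h, x:x + w], size).astype(np.float32).ravel()
    return face / np.linalg.norm(face)                  # normalize for cosine comparison
```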

Using Bluetooth, FaceBlock can automatically detect nearby Glass users and share this policy with them. After receiving a face identifier from a nearby user, Glass performs the post-processing steps shown in the images below.
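A rough sketch of this post-processing step, under the same assumptions as the sketch above (an illustration of the idea, not FaceBlock's implementation): find faces in the new picture, compare each against the identifiers received over Bluetooth, and blur any face that matches.

```python
# Sketch of the post-processing step: detect faces in a newly taken picture,
# compare each against the identifiers received over Bluetooth (encoded as in
# the sketch above), and blur any face whose owner asked not to be pictured.
import cv2
import numpy as np

def apply_policies(image_path, received_identifiers, threshold=0.95):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = cv2.resize(gray[y:y + h, x:x + w], (32, 32)).astype(np.float32).ravel()
        face /= np.linalg.norm(face)
        # Cosine similarity against each received identifier.
        if any(float(face @ ident) > threshold for ident in received_identifiers):
            img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)
    return img
```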

[Images: a face identifier shown unchecked and then checked in FaceBlock, and the resulting picture with the matching face blurred]

What promises does it hold?

FaceBlock is a proof-of-concept implementation of a system that can create privacy-aware pictures using smart devices. The pervasiveness of privacy-aware pictures could be a step in the right direction towards balancing privacy needs and the comfort afforded by technology. Thus, we can get the best out of wearable technology without being oblivious to the privacy of those around us.

FaceBlock is part of the efforts of Ebiquity and DIS to build systems for preserving user privacy on mobile devices. For more details, visit http://face-block.me

Usability determines password policy

August 16th, 2010, by Tim Finin, posted in Policy, Privacy, Security, Social media

Some online sites let you use any old five-character string as your password for as long as you like. Others force you to pick a new password every six months and it has to match a complicated set of requirements — at least eight characters, mixed case, containing digits, letters, punctuation and at least one umlaut. Also, it better not contain any substrings that are legal Scrabble words or match any past password you’ve used since the Bush 41 administration.

A recent paper by two researchers from Microsoft concludes that an organization's usability requirements are the main factor determining the complexity of its password policy.

Dinei Florencio and Cormac Herley, Where Do Security Policies Come From?, Symposium on Usable Privacy and Security (SOUPS), 14–16 July 2010, Redmond.

We examine the password policies of 75 different websites. Our goal is to understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links and where the user has a choice show strong inverse correlation with strength.

We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement.

h/t Bruce Schneier

An ontology of social media data for better privacy policies

August 15th, 2010, by Tim Finin, posted in Policy, Privacy, Security, Semantic Web, Social media

Privacy continues to be an important topic surrounding social media systems. A big part of the problem is that virtually all of us have a difficult time thinking about what information about us is exposed, to whom, and for how long. As UMBC colleague Zeynep Tufekci points out, our intuitions in such matters come from experiences in the physical world, a place whose physics differs considerably from that of the cyber world.

Bruce Schneier offered a taxonomy of social networking data in a short article in the July/August issue of IEEE Security & Privacy. A version of the article, A Taxonomy of Social Networking Data, is available on his site.

“Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again — revised — at an OECD workshop on the role of Internet intermediaries in June.

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.”

I think most of us understand the first two categories and can easily choose or specify a privacy policy to control access to information in them. The rest, however, are more difficult to think about and can lead to a lot of confusion when people set up their privacy preferences.

As an example, I saw some nice work at the 2010 IEEE International Symposium on Policies for Distributed Systems and Networks on "Collaborative Privacy Policy Authoring in a Social Networking Context" by Ryan Wishart et al. from Imperial College that addressed the problem of incidental data in Facebook. For example, if I post a picture and tag others in it, each of the tagged people can contribute additional policy constraints that can narrow access to it.
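The key idea, combining constraints so that each contributor can only narrow access, can be illustrated with a small sketch (hypothetical names, not the authors' system):

```python
# Illustrative sketch: the photo owner's audience is intersected with the
# audience allowed by each tagged person, so contributed constraints can only
# narrow who may see the picture, never widen it.
def combined_audience(owner_allowed, tagged_allowed_sets):
    """owner_allowed: set of viewer ids; tagged_allowed_sets: iterable of such sets."""
    audience = set(owner_allowed)
    for allowed in tagged_allowed_sets:
        audience &= set(allowed)
    return audience

# Example: the owner allows {alice, bob, carol}; one tagged person allows only
# {alice, bob}; the picture ends up visible to {alice, bob}.
print(combined_audience({"alice", "bob", "carol"}, [{"alice", "bob"}]))
```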

Lorrie Cranor gave an invited talk at the workshop on Building a Better Privacy Policy and made the point that even P3P privacy policies are difficult for people to comprehend.

Having a simple ontology for social media data could help us move toward better privacy controls for online social media systems. I like Schneier's broad categories and wonder what a more complete treatment defined using Semantic Web languages might look like.
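As a starting point, here is a minimal sketch of how Schneier's categories might be expressed as an RDFS class hierarchy using rdflib; the namespace URI and class names are made up for illustration, not an established vocabulary.

```python
# Sketch: Schneier's six categories as subclasses of a generic SocialMediaData
# class, built with rdflib and serialized as Turtle.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

SMD = Namespace("http://example.org/social-media-data#")  # illustrative namespace
g = Graph()
g.bind("smd", SMD)

g.add((SMD.SocialMediaData, RDF.type, RDFS.Class))
categories = {
    "ServiceData": "Data given to the site in order to use it",
    "DisclosedData": "Data you post on your own pages",
    "EntrustedData": "Data you post on other people's pages",
    "IncidentalData": "Data other people post about you",
    "BehavioralData": "Data the site collects about your habits",
    "DerivedData": "Data about you inferred from all the other data",
}
for name, comment in categories.items():
    cls = SMD[name]
    g.add((cls, RDF.type, RDFS.Class))
    g.add((cls, RDFS.subClassOf, SMD.SocialMediaData))
    g.add((cls, RDFS.comment, Literal(comment)))

print(g.serialize(format="turtle"))
```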

Senate plan: less stimulus for NSF, NIST, other science agencies

February 9th, 2009, by Tim Finin, posted in GENERAL, Policy

The US Senate's stimulus plan, released at the end of last week, has less money for US science agencies than the House plan from January, but the cuts were not as drastic as had been feared. CRA reports in its post Senate Deal Protects Much of NSF Increase in Stimulus that

“The agreement does reduce the increase in the Department of Energy’s Office of Science by $100 million (so, +$330 million instead of +$430 million), and NIST’s increase would be reduced by $100 million (so +$495 million instead of +$595 million). But given the reports we were receiving as recently as yesterday evening about the possibility of no increase for the science agencies in the bill, this is a remarkable turn of events. The increase for NSF in the Senate bill will still be far less than the $3 billion called for in the House version of the bill, but NSF will be in far better shape in the conference between the two chambers coming in with $1.2 billion from the Senate instead of zero.”

Scientists and Engineers for America (a 501(c)(3) organization) has a detailed breakdown of the stimulus package that passed the Senate on Friday in Senate-passed stimulus package by the numbers. They also have a downloadable Excel spreadsheet in case you want to crunch the data yourself. Here are some science highlights from their post:

NSF Research: $1.2 billion total for NSF including: $1 billion to help America compete globally; $150 million for scientific infrastructure; and $50 million for competitive grants to improve the quality of science, technology, engineering, and mathematics (STEM) education.

NASA: $1.3 billion total for NASA including: $450 million for Earth science missions to provide critical data about the Earth’s resources and climate; $200 million to enable research and testing of environmentally responsible aircraft and for verification and validation methods for complex aerospace systems and software; $450 million to reduce the gap in time that the U.S. does not have a vehicle to access the International Space Station; and $200 million for repair, upgrade and construction at NASA facilities.

NOAA: $1 billion total for NOAA, including $645 million to construct and repair NOAA facilities, equipment and vessels to reduce the Nation’s coastal charting backlog, upgrade supercomputer infrastructure for climate research, and restore critical habitat around the Nation.

NIST: $475 million total for NIST including: $307 million for renovation of NIST facilities and new laboratories using green technologies; $168 million for scientific and technical research at NIST to strengthen the agency’s IT infrastructure; provide additional NIST research fellowships; provide substantial funding for advanced research and measurement equipment and supplies; increase external grants for NIST-related research.

DOE: The Department of Energy’s Science program sees $330 million for laboratory infrastructure and construction.

JWS special issue on Semantic Web and Policy (free sample issue)

January 13th, 2009, by Tim Finin, posted in Policy, Semantic Web

Elsevier has made the January 2009 Journal of Web Semantics special issue on the Semantic Web and Policy our new sample issue, which means that its papers are freely available online until a new sample issue is selected. The special issue editors, Lalana Kagal, Tim Berners-Lee and James Hendler, wrote in the introduction:

"As Semantic Web technologies mature and become more accepted by researchers and developers alike, the widespread growth of the Semantic Web seems inevitable. However, this growth is currently hampered by the lack of well-defined security protocols and specifications. Though the Web does include fairly robust security mechanisms, they do not translate appropriately to the Semantic Web as they do not support autonomous machine access to data and resources and usually require some kind of human input. Also, the ease of retrieval and aggregation of distributed information made possible by the Semantic Web raises privacy questions as it is not always possible to prevent misuse of sensitive information. In order to realize its full potential as a powerful distributed model for publishing, utilizing, and extending information, it is important to develop security and privacy mechanisms for the Semantic Web. Policy frameworks built around machine-understandable policy languages, with their promise of flexibility, expressivity and automatable enforcement, appear to be the obvious choice.

It is clear that these two technologies – Semantic Web and Policy – complement each other and together will give rise to security infrastructures that provide more flexible management, are able to accommodate heterogeneous information, have improved communication, and are able to dynamically adapt to variations in the environment. These infrastructures could be used for a wide spectrum of applications ranging from network management, quality of information, to security, privacy and trust. This special issue of the Journal of Web Semantics is focused on the impact of Semantic Web technologies on policy management, and the specification, analysis and application of these Semantic Web-based policy frameworks.”

In addition to the editors' introduction, the special issue includes five research papers.

Guess who is coming to grad school!

October 2nd, 2008, by Tim Finin, posted in Humor, Policy, Semantic Web

UMBC alumnus Alark Joshi (PhD 2007) pointed out this great comic yesterday on Jorge Cham's PHD Comics site. It shows one upside to the current financial crisis. That might sound self-serving, since I am part of the higher education industry that stands to profit, but I think our society benefits as a whole if more people pursue an advanced degree, especially if the alternative is to become yet another hedge fund manager.



Five Cloud Computers and Information Sharing

July 28th, 2008, by Anupam Joshi, posted in cloud computing, GENERAL, Policy, Privacy, Security

There is an interesting panel opening the Microsoft Faculty Research Summit, featuring Rick Rashid, Daniel Reed, Ed Felten, Howard Schmidt, and Elizabeth Lawley. Lots of interesting ideas, but one that got thrown out was the recent suggestion that maybe the world really does only need five (cloud) computers. If something like this happens, then perhaps we'll need to think even more aggressively about information sharing issues. Is there some way for me to make sure that I share with (say) Google's cloud only the things that are absolutely needed? Once I have given some information to Google, can I still retain some control over it? Who owns that information now? If I do, how do I know that Google will honor whatever commitments it makes about how it will use or further share that information? We'll be exploring some of these questions in our "Assured Information Sharing" research. Some of the auditing work that MIT's DIG group has done also ties in.

Our MURI grant gets some press

June 12th, 2008, by Anupam Joshi, posted in Datamining, Mobile Computing, Policy, Privacy, Security, Social media, Technology Policy, UMBC

A UMBC-led team recently won a MURI award from the DoD to work on the "Assured Information Sharing Lifecycle". It is an interesting mix of work on new security models, policy-driven security systems, context awareness, privacy-preserving data mining, and social networking. The award brings together many different strands of research in eBiquity, as well as some related research in our department. We're just starting off and are excited about it. UMBC's web page had a story about this, and more recently GCN covered it.

The UMBC team is led by Tim Finin and includes several of us. The other participants are UIUC (led by Jiawei Han), Purdue (led by Elisa Bertino), UTSA (led by Ravi Sandhu), UT Dallas (led by Bhavani Thuraisingham), and Michigan (led by Lada Adamic).

Borjas at UMBC

October 11th, 2007, by Anupam Joshi, posted in CS, GENERAL, Policy, UMBC

The well-known labor economist George Borjas visited UMBC last week to give a lecture in our humanities series. Borjas is very well known in political circles for his economic analysis of immigration. More importantly, he doesn't just write scholarly papers; he blogs in a way that folks like me, who haven't even taken ECON 101, can understand. I haven't read any of his papers to see what they look like, but in his blog he is fairly clear about his opinions on various issues related to immigration. See, for instance, this interesting post about "protectionism" on Broadway! I don't always agree with what he has to say, but it is always a pleasure to read well-written posts that say something reasonable, backed with data and analytic rigor.

So I went to the lecture with great anticipation. I arrived a few minutes late, and the room was already full. The presentation itself was good, but a bit of a letdown, perhaps because he didn't want to be too controversial in a "distinguished lecture" setting. He presented data: the increase in immigration since 1964, the concentration of that immigration in select areas (which makes its effects local), the confounding factors when you try to analyze the wage effects of immigrants, the fact that the wage-depressing effects of immigration have most hurt the lower strata of society, the fact that the average immigrant today earns less than the native born (a change from the 60s), and so on. However, he didn't go much further, offering only something that is both true and a cop-out: namely, that the policy implications you derive from this data depend on what your objective function is. He joked about letting everyone in if the goal was to alleviate world poverty, or some such.

I also noticed that he did not split his data into the effects of legal and illegal immigration. It would be interesting to know if there are differences. Among legal immigrants, does employment-based versus family-based immigration make a difference? This is especially relevant since one of the things the now-dead "comprehensive immigration reform" bill proposed was a points-based system for immigration.

StopBadware campaign

January 27th, 2006, by Amit, posted in Policy, Programming, Security

A good read at http://stopbadware.org: it seems to be a MEGA campaign by Google, Lenovo and Sun Microsystems.

“Several academic institutions and major tech companies have teamed up to thwart ‘badware’, a phrase they have coined that encompasses spyware and adware. The new website, StopBadware.org, is promoted as a “Neighborhood Watch” campaign and seeks to provide reliable, objective information about downloadable applications in order to help consumers to make better choices about what they download on to their computers. We want to work with both experts and the broader internet community (.orgs and .edus) to define and understand the problem.”

Models of trust for the Web

November 23rd, 2005, by Tim Finin, posted in Conferences, Policy, Security, Semantic Web, Web

The Workshop on Models of Trust for the Web (MTW’06) will be a one-day workshop held on May 22 or 23, 2006 in Edinburgh in conjunction with the 15th International World Wide Web Conference. Tentative deadlines are January 10 for paper submission and February 1 for acceptance notification.

“There are three types of lies – lies, damn lies, and facts found on the Web.” — anon

“As it gets easier to add information to the web via html pages, wikis, blogs, and other documents, it gets tougher to distinguish accurate information from inaccurate or untrustworthy information. A search engine query usually results in several hits that are outdated and/or from unreliable sources and the user is forced to go through the results and pick what she/he considers the most reliable information based on her/his trust requirements. With the introduction of web services, the problem is further exacerbated as users have to come up with a new set of requirements for trusting web services and web services themselves require a more automated way of trusting each other. Apart from inaccurate or outdated information, we also need to anticipate Semantic Web Spam (SWAM) — where spammers publish false facts and scams to deliberately mislead users. This workshop is interested in all aspects of enabling trust on the web.”

Semantic Web and Policy Workshop wrap up

November 16th, 2005, by Tim Finin, posted in Policy, Security, Semantic Web

The Semantic Web and Policy Workshop (SWPW) held at ISWC had some great presentations and discussions on policy-based frameworks for security, privacy, trust, information filtering, accountability, etc. The SWPW web site has the proceedings, papers, presentations and some pictures. Watch for announcements about a related workshop on Models of Trust for the Web that will be held at WWW2006.
