Context Sensitive Access Control in Smart Home Environments

May 30th, 2020

Context Sensitive Access Control in Smart Home Environments


Sofia Dutta, Sai Sree Laya Chukkapalli, Madhura Sulgekar, Swathi Krithivasan, Prajit Kumar Das, and Anupam Joshi, Context Sensitive Access Control in Smart Home Environments, 6th IEEE International Conference on Big Data Security on Cloud, May 2020

The rise in popularity of Internet of Things (IoT) devices has opened doors for privacy and security breaches in Cyber-Physical Systems like smart homes, smart vehicles, and smart grids that affect our daily existence. IoT systems are also a source of big data that gets shared via the cloud. IoT systems in a smart home environment have sensitive access control issues since they are deployed in a personal space. The collected data can also be of a highly personal nature. Therefore, it is critical to build access control models that govern who, under what circumstances, can access which sensed data or actuate a physical system. Traditional access control mechanisms are not expressive enough to handle such complex access control needs, warranting the incorporation of new methodologies for privacy and security. In this paper, we propose the creation of the PALS system, which builds upon existing work on attribute-based access control models, captures physical context collected from sensed data (attributes), and performs dynamic reasoning over these attributes and context-driven policies using Semantic Web technologies to execute access control decisions. Reasoning over user context, details of the information collected by the cloud service provider, and device type, our mechanism generates access control decisions as a consequence. Our system's access control decisions are supplemented by another sub-system that detects intrusions into smart home systems based on both network and behavioral data. The combined approach serves to determine indicators that a smart home system is under attack, as well as to limit what data breaches such attacks can achieve.
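To make the reasoning step concrete, here is a minimal Python sketch (using rdflib, which the paper does not mandate) of how a graph of sensed context attributes and a SPARQL-encoded policy could drive an access decision. The vocabulary (ex:Guest, ex:locatedIn, ex:installedIn) and the sample rule are illustrative assumptions, not PALS's actual ontology or policy set.

    # Minimal sketch of attribute- and context-driven access decisions in the
    # spirit of PALS.  All class and property names are illustrative placeholders.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/smarthome#")

    # Context graph assembled from sensed attributes.
    context_ttl = """
    @prefix ex: <http://example.org/smarthome#> .
    ex:alice   a ex:Guest ;      ex:locatedIn   ex:LivingRoom .
    ex:thermo  a ex:Thermostat ; ex:installedIn ex:LivingRoom .
    ex:cam     a ex:Camera ;     ex:installedIn ex:Bedroom .
    """

    g = Graph()
    g.parse(data=context_ttl, format="turtle")

    # Simplified context-driven policy: a guest may actuate a device only if the
    # guest and the device are currently in the same room.
    POLICY = """
    PREFIX ex: <http://example.org/smarthome#>
    ASK {
        ?user a ex:Guest ; ex:locatedIn ?room .
        ?device ex:installedIn ?room .
    }
    """

    def access_decision(user, device):
        """Permit the request only if the context graph satisfies the policy."""
        return bool(g.query(POLICY, initBindings={"user": user, "device": device}).askAnswer)

    print(access_decision(EX.alice, EX.thermo))  # True  - guest and device share a room
    print(access_decision(EX.alice, EX.cam))     # False - device is in a different room

In the full system the attributes would stream in from smart-home sensors and the policy set would be far richer, but the decision step has this general shape.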


Attribute Based Encryption for Secure Access to Cloud Based EHR Systems

June 4th, 2018

Attribute Based Encryption for Secure Access to Cloud Based EHR Systems

Maithilee Joshi, Karuna Joshi and Tim Finin, Attribute Based Encryption for Secure Access to Cloud Based EHR Systems, IEEE International Conference on Cloud Computing, San Francisco CA, July 2018

Medical organizations find it challenging to adopt cloud-based electronic medical records services due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient-centric approach to EHR management, where the responsibility of authorizing data access is handled at the patient's end. This, however, creates a significant overhead for the patient, who has to authorize every access to their health record. This is not practical given the multiple personnel involved in providing care and the fact that the patient may at times not be in a state to provide this authorization. Hence there is a need to develop a proper authorization delegation mechanism for safe, secure, and easy cloud-based EHR management. We have developed a novel, centralized, attribute-based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access to patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of access authority over cloud-based EHRs to medical providers. In this paper, we describe this novel ABE approach as well as the prototype system that we have created to illustrate it.
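For intuition about how the delegation works, the sketch below shows only the attribute-matching check that an ABE access policy encodes: a record encrypted under a boolean policy over attributes can be decrypted only with a key whose attributes satisfy that policy. The real scheme enforces this cryptographically with pairing-based ABE; the roles, organization, and policy shown here are hypothetical.

    # Illustrative sketch of the access-structure check at the heart of ABE.
    # Real ABE enforces this check cryptographically; here it is plain Python,
    # and every attribute and policy below is a made-up example.

    def satisfies(policy, attributes):
        """policy is a nested tuple ('AND', ...)/('OR', ...) or an attribute string."""
        if isinstance(policy, str):
            return policy in attributes
        op, *children = policy
        results = [satisfies(c, attributes) for c in children]
        return all(results) if op == "AND" else any(results)

    # Hypothetical policy the medical organization attaches to a patient record:
    # (cardiologist at StHopeHospital) OR emergency physician.
    record_policy = ("OR",
                     ("AND", "role:cardiologist", "org:StHopeHospital"),
                     "role:emergency-physician")

    attending_key = {"role:cardiologist", "org:StHopeHospital"}
    billing_key   = {"role:billing-clerk", "org:StHopeHospital"}

    print(satisfies(record_policy, attending_key))  # True  -> record is decryptable
    print(satisfies(record_policy, billing_key))    # False -> access is denied

Because the organization, rather than the patient, issues attribute keys and sets record policies, access can be delegated without involving the patient in every request.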


Link Before You Share: Managing Privacy Policies through Blockchain

March 30th, 2018

Link Before You Share: Managing Privacy Policies through Blockchain

Agniva Banerjee, UMBC
11:00-12:00 Monday, 2 April 2018

With cloud-based content providers, utilities, and applications each employing their own privacy policies and the associated overhead, it is becoming increasingly difficult for concerned users to manage and track the confidential information that they share with the providers. Users consent to providers gathering and sharing their Personally Identifiable Information (PII). We have developed a novel framework to ingest a text-based privacy policy document, intelligently parse and extract relevant terms to populate a privacy policy ontology, and thereafter automatically track details about how a user's PII is stored, used, and shared by the provider. We have integrated this data privacy ontology with the properties of blockchain to develop an automated access-control and audit mechanism that enforces users' data privacy policies when their data is shared across third parties.

Agniva Banerjee, and Karuna Pande Joshi, Link Before You Share: Managing Privacy Policies through Blockchain, 4th International Workshop on Privacy and Security of Big Data (PSBD 2017), in conjunction with 2017 IEEE International Conference on Big Data, 4 December 2017.


Link Before You Share: Managing Privacy Policies through Blockchain

December 4th, 2017

Link Before You Share: Managing Privacy Policies through Blockchain

Agniva Banerjee, and Karuna Pande Joshi, Link Before You Share: Managing Privacy Policies through Blockchain, 4th International Workshop on Privacy and Security of Big Data (PSBD 2017), in conjunction with 2017 IEEE International Conference on Big Data, 4 December 2017.

With the advent of numerous online content providers, utilities, and applications, each with their own specific version of privacy policies and the associated overhead, it is becoming increasingly difficult for concerned users to manage and track the confidential information that they share with the providers. We have developed a novel framework to automatically track details about how a user's PII is stored, used, and shared by the provider. We have integrated our data privacy ontology with the properties of blockchain to develop an automated access-control and audit mechanism that enforces users' data privacy policies when their data is shared across third parties. We have also validated this framework by implementing a working system, LinkShare. In this paper, we describe our framework in detail along with the LinkShare system. Our approach can be adopted by big data users to automatically apply their privacy policies to data operations and track the flow of that data across various stakeholders.
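LinkShare's implementation details are not reproduced here; as a rough illustration of the mechanism, the sketch below checks each sharing request against a user's policy and appends the decision to a hash-chained, blockchain-style audit log so that later tampering is detectable. The policy format, field names, and parties are assumptions.

    # Sketch of policy enforcement plus a blockchain-style audit trail for PII
    # sharing events, in the spirit of LinkShare.  All names are illustrative.
    import hashlib, json, time

    user_policy = {"email":    {"allowed": {"billing"}},   # share email only for billing
                   "location": {"allowed": set()}}         # never share location

    chain = [{"index": 0, "prev_hash": "0" * 64, "event": "genesis", "hash": None}]

    def _hash(block):
        payload = json.dumps({k: v for k, v in block.items() if k != "hash"}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    chain[0]["hash"] = _hash(chain[0])

    def request_share(pii_field, purpose, third_party):
        """Enforce the user's policy and append an auditable block for the decision."""
        allowed = purpose in user_policy.get(pii_field, {}).get("allowed", set())
        block = {"index": len(chain), "prev_hash": chain[-1]["hash"],
                 "event": {"field": pii_field, "purpose": purpose, "to": third_party,
                           "granted": allowed, "ts": time.time()}}
        block["hash"] = _hash(block)
        chain.append(block)
        return allowed

    def verify_chain():
        """Tamper check: every block must hash correctly and link to its predecessor."""
        return all(b["hash"] == _hash(b) and b["prev_hash"] == chain[i - 1]["hash"]
                   for i, b in enumerate(chain) if i > 0)

    print(request_share("email", "billing", "acme-utilities"))  # True  (policy allows)
    print(request_share("location", "ads", "ad-network"))       # False (policy denies)
    print(verify_chain())                                        # True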


Agniva Banerjee on Managing Privacy Policies through Blockchain

October 16th, 2017

Link before you Share: Managing Privacy Policies through Blockchain

Agniva Banerjee

11:00am Monday, 16 October 2017

This talk presents an automated access-control and audit mechanism that enforces users' data privacy policies when their data is shared across third parties, by combining privacy policy ontology instances with the properties of blockchain.


Capturing policies for fine-grained access control on mobile devices

November 8th, 2016

In this week’s ebiquity meeting (11:30 8 Nov. 2016) Prajit Das will present his work on capturing policies for fine-grained access control on mobile devices.

As of 2016, there are more mobile devices than humans on earth. Today, mobile devices are a critical part of our lives and often hold sensitive corporate and personal data. As a result, they are a lucrative target for attackers, and managing data privacy and security on mobile devices has become a vital issue. Existing access control mechanisms in most devices are restrictive and inadequate. They do not take into account the context of a device and its user when making decisions. In many cases, the access granted to a subject should change based on the context of the device. Such fine-grained, context-sensitive access control policies also have to be personalized. In this paper, we present the Mithril system, which uses policies represented in Semantic Web technologies and captured through user feedback to handle access control on mobile devices. We present an iterative feedback process to capture user-specific policies. We also present a policy violation metric that allows us to decide when the capture process is complete.
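As a rough illustration of the capture loop, here is a small Python sketch in which each user correction both refines the personalized policy and feeds a violation measure that decides when to stop prompting. The metric used here (the fraction of recent decisions the user overrides) is a simplified stand-in, not the metric defined in the paper.

    # Sketch of iterative, feedback-driven policy capture in the spirit of Mithril.
    # The violation metric below is a simplified stand-in for the paper's metric.
    from collections import deque

    policy = {}                    # (app, resource, context) -> "allow" | "deny"
    recent = deque(maxlen=50)      # rolling window of override flags

    def decide(app, resource, context):
        """Apply the learned rule if one exists; otherwise default-deny."""
        return policy.get((app, resource, context), "deny")

    def record_feedback(app, resource, context, user_choice):
        """User feedback personalizes the rule; overrides feed the violation metric."""
        key = (app, resource, context)
        recent.append(decide(*key) != user_choice)
        policy[key] = user_choice

    def violation_rate():
        return sum(recent) / len(recent) if recent else 1.0

    def capture_complete(threshold=0.05):
        """Stop prompting the user once overrides have become rare."""
        return violation_rate() < threshold

    record_feedback("maps-app", "fine-location", "at-work", "allow")
    record_feedback("game-app", "contacts", "at-home", "deny")
    print(decide("maps-app", "fine-location", "at-work"))   # allow
    print(round(violation_rate(), 2), capture_complete())   # 0.5 False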

Context-Sensitive Policy Based Security in Internet of Things

April 18th, 2016

Prajit Kumar Das, Sandeep Nair, Nitin Kumar Sharma, Anupam Joshi, Karuna Pande Joshi, and Tim Finin, Context-Sensitive Policy Based Security in Internet of Things, 1st IEEE Workshop on Smart Service Systems, co-located with IEEE Int. Conf. on Smart Computing, St. Louis, 18 May 2016.

According to recent media reports, there has been a surge in the number of devices being connected to the Internet. The Internet of Things (IoT), also referred to as Cyber-Physical Systems, is a collection of physical entities with computational and communication capabilities. The storage and computing power of these devices are often limited, and their designs currently focus on ensuring functionality while largely ignoring other requirements, including security and privacy concerns. We present the design of a framework that allows IoT devices to capture, represent, reason with, and enforce information sharing policies. We use Semantic Web technologies to represent the policies, the information to be shared or protected, and the IoT device context. We discuss use cases where our design will help in creating an "intelligent" IoT device and ensuring data security and privacy using context-sensitive information sharing policies.
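As a toy illustration of context-sensitive information sharing on a constrained device, the sketch below releases, degrades, or withholds a sensor reading depending on who is asking and the current device context. The rules and context fields are illustrative assumptions, not the framework's actual policy language.

    # Sketch of on-device, context-sensitive information sharing for an IoT node:
    # a reading is released, degraded, or withheld based on requester and context.
    from dataclasses import dataclass

    @dataclass
    class Context:
        owner_home: bool     # is the owner currently at home?
        requester: str       # e.g. "owner", "utility", "unknown"

    def share(reading, ctx):
        """Decide what view of an indoor-camera reading may leave the device."""
        if ctx.requester == "owner":
            return reading                         # full fidelity for the owner
        if ctx.requester == "utility" and not ctx.owner_home:
            return {"motion": reading["motion"]}   # coarse, non-identifying view
        return None                                # withhold from everyone else

    reading = {"motion": True, "frame": b"...jpeg bytes..."}
    print(share(reading, Context(owner_home=False, requester="utility")))  # {'motion': True}
    print(share(reading, Context(owner_home=True,  requester="unknown")))  # None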


Knowledge Extraction from Cloud Service Level Agreements

November 1st, 2015

Sudip Mittal, Karuna Pande Joshi, Claudia Pearce, and Anupam Joshi, Parallelizing Natural Language Techniques for Knowledge Extraction from Cloud Service Level Agreements, IEEE International Conference on Big Data, October, 2015.

To efficiently utilize their cloud-based services, consumers have to continuously monitor and manage the Service Level Agreements (SLA) that define the service performance measures. Currently this is still a time- and labor-intensive process, since the SLAs are primarily stored as text documents. We have significantly automated the process of extracting, managing and monitoring cloud SLAs using natural language processing techniques and Semantic Web technologies. In this paper, we describe our prototype system that uses a Hadoop cluster to extract knowledge from unstructured legal text documents. For this prototype we have considered publicly available SLA/terms of service documents of various cloud providers. We use established natural language processing techniques in parallel to speed up cloud legal knowledge base creation. Our system considerably speeds up knowledge base creation and can also be used in other domains that have unstructured data.
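The prototype runs its NLP stages on a Hadoop cluster; as a single-machine stand-in that shows the same map-style parallelism, the sketch below fans SLA documents out to worker processes and pulls out candidate availability commitments with a toy regular expression. The documents and the pattern are illustrative only.

    # Sketch of map-style parallel extraction of SLA terms.  The actual system
    # uses Hadoop and richer NLP; this stand-in uses multiprocessing and a toy
    # regex so the parallel structure is visible.
    import re
    from multiprocessing import Pool

    SLA_DOCS = {  # made-up stand-ins for providers' terms-of-service text
        "provider-a": "We guarantee a Monthly Uptime Percentage of at least 99.95%. "
                      "Service credits of 10% apply below that threshold.",
        "provider-b": "Availability commitment: 99.9%. Maintenance windows excluded.",
    }

    UPTIME = re.compile(r"\b(\d{2}\.\d{1,3})\s*%")

    def extract(item):
        """Map step: pull candidate availability commitments out of one document."""
        name, text = item
        return name, {"uptime_percent": [float(m) for m in UPTIME.findall(text)]}

    if __name__ == "__main__":
        with Pool(processes=2) as pool:
            knowledge_base = dict(pool.map(extract, SLA_DOCS.items()))
        print(knowledge_base)
        # {'provider-a': {'uptime_percent': [99.95]}, 'provider-b': {'uptime_percent': [99.9]}}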


Semantics for Privacy and Shared Context

December 15th, 2014

Roberto Yus, Primal Pappachan, Prajit Das, Tim Finin, Anupam Joshi, and Eduardo Mena, Semantics for Privacy and Shared Context, Workshop on Society, Privacy and the Semantic Web-Policy and Technology, held at Int. Semantic Web Conf., Oct. 2014.

Capturing, maintaining, and using context information helps mobile applications provide better services and generates data useful in specifying information sharing policies. Obtaining the full benefit of context information requires a rich and expressive representation that is grounded in shared semantic models. We summarize some of our past work on representing and using context models and briefly describe Triveni, a system for cross-device context discovery and enrichment. Triveni represents context in RDF and OWL and reasons over context models to infer additional information and detect and resolve ambiguities and inconsistencies. A unique feature, its ability to create and manage “contextual groups” of users in an environment, enables their members to share context information using wireless ad-hoc networks. Thus, it enriches the information about a user’s context by creating mobile ad hoc knowledge networks.


Do not be a Gl***hole, use Face-Block.me!

March 27th, 2014

If you are a Google Glass user, you might have been greeted with concerned looks or raised eyebrows in public places. There has been a lot of chatter on the "interweb" regarding the loss of privacy that results from people taking your picture with Glass without notice. Google Glass has simplified photography, but, as happens with revolutionary technology, people are worried about its potential misuse.

FaceBlock helps protect the privacy of people around you by allowing them to specify whether or not they want to be included in your pictures. This new application, developed through a collaboration between researchers from the Ebiquity Research Group at the University of Maryland, Baltimore County and the Distributed Information Systems (DIS) group at the University of Zaragoza (Spain), selectively obscures the faces of people in pictures taken with Google Glass.

Comfort at the cost of Privacy?

As the saying goes, "The best camera is the one that's with you." Google Glass fits this description, as it is always available and can take a picture with a simple voice command ("Okay Glass, take a picture"). This allows users to capture spontaneous life moments effortlessly. On the flip side, it raises significant privacy concerns, as pictures can be taken without one's consent. If one does not use this device responsibly, one risks being labelled a "Glasshole". Quite recently, a Google Glass user was assaulted by patrons who objected to her wearing the device inside a bar. The list of establishments that have banned Google Glass from their premises is growing day by day. The dos and don'ts for Glass users released by Google are a good first step, but they don't solve the problem of privacy violation.

[Image: FaceBlock on Google Glass]

Privacy-Aware pictures to the rescue

FaceBlock takes regular pictures taken with your smartphone or Google Glass as input and converts them into privacy-aware pictures. This output is generated using a combination of face detection and face recognition algorithms. Using FaceBlock, a user can take a picture of herself and specify her policy or rule regarding pictures taken by others (in this case, "obscure my face in pictures from strangers"). The application automatically generates a face identifier for this picture; the identifier is a mathematical representation of the image. To learn more about how FaceBlock works, you should watch the following video.

Using Bluetooth, FaceBlock can automatically detect and share this policy with Glass users nearby. After receiving this face identifier from a nearby user, the following post-processing steps happen on Glass, as shown in the images below.

[Images: Eigen identifier unchecked, Eigen identifier checked, and the resulting blurred picture]
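For a rough sense of this post-processing step, here is a Python/OpenCV sketch that detects faces and obscures them with a blur. FaceBlock additionally matches the detected faces against the eigenface-based identifiers received over Bluetooth and obscures only those covered by a "do not include me" policy; this simplified stand-in blurs every detected face, and the file names are placeholders.

    # Sketch of the detect-and-obscure step (requires opencv-python).  FaceBlock
    # also runs face recognition against received identifiers; this stand-in
    # simply blurs every face the Haar-cascade detector finds.
    import cv2

    def make_privacy_aware(in_path="glass_photo.jpg", out_path="glass_photo_blurred.jpg"):
        img = cv2.imread(in_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        for (x, y, w, h) in faces:
            roi = img[y:y + h, x:x + w]
            img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)  # obscure the face

        cv2.imwrite(out_path, img)
        return len(faces)

    if __name__ == "__main__":
        print(make_privacy_aware(), "face(s) obscured")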

What promises does it hold?

FaceBlock is a proof-of-concept implementation of a system that can create privacy-aware pictures using smart devices. The pervasiveness of privacy-aware pictures could be a step in the right direction toward balancing privacy needs with the comfort afforded by technology. Thus, we can get the best out of wearable technology without being oblivious to the privacy of those around us.

FaceBlock is part of the efforts of Ebiquity and DIS in building systems for preserving user privacy on mobile devices. For more details, visit http://face-block.me


Usability determines password policy

August 16th, 2010

Some online sites let you use any old five-character string as your password for as long as you like. Others force you to pick a new password every six months and it has to match a complicated set of requirements — at least eight characters, mixed case, containing digits, letters, punctuation and at least one umlaut. Also, it better not contain any substrings that are legal Scrabble words or match any past password you’ve used since the Bush 41 administration.

A recent paper by two researchers from Microsoft concludes that an organization's usability requirements are the main factor determining the complexity of its password policy.

Dinei Florencio and Cormac Herley, Where Do Security Policies Come From?, Symposium on Usable Privacy and Security (SOUPS), 14–16 July 2010, Redmond.

We examine the password policies of 75 different websites. Our goal is to understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links and where the user has a choice show strong inverse correlation with strength.

We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement.

h/t Bruce Schneier


An ontology of social media data for better privacy policies

August 15th, 2010

Privacy continues to be an important topic surrounding social media systems. A big part of the problem is that virtually all of us have a difficult time thinking about what information about us is exposed, to whom, and for how long. As UMBC colleague Zeynep Tufekci points out, our intuitions in such matters come from experiences in the physical world, a place whose physics differs considerably from that of the cyber world.

Bruce Schneier offered a taxonomy of social networking data in a short article in the July/August issue of IEEE Security & Privacy. A version of the article, A Taxonomy of Social Networking Data, is available on his site.

“Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again — revised — at an OECD workshop on the role of Internet intermediaries in June.

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.”

I think most of us understand the first two categories and can easily choose or specify a privacy policy to control access to information in them. The rest, however, are more difficult to think about and can lead to a lot of confusion when people are setting up their privacy preferences.

As an example, I saw some nice work at the 2010 IEEE International Symposium on Policies for Distributed Systems and Networks on "Collaborative Privacy Policy Authoring in a Social Networking Context" by Ryan Wishart et al. from Imperial College that addressed the problem of incidental data in Facebook. For example, if I post a picture and tag others in it, each of the tagged people can contribute additional policy constraints that can narrow access to it.

Lorrie Cranor gave an invited talk at the workshop on Building a Better Privacy Policy and made the point that even P3P privacy policies are difficult for people to comprehend.

Having a simple ontology for social media data could help us move forward toward better privacy controls for online social media systems. I like Schneier’s broad categories and wonder what a more complete treatment defined using Semantic Web languages might be like.
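As a small thought experiment along those lines, here is a sketch (using rdflib) that declares the six categories as an RDF vocabulary and attaches a couple of default privacy properties to the harder categories; the namespace and property names are my own illustration, not an established ontology.

    # Sketch of Schneier's six categories as a tiny RDF vocabulary with example
    # privacy defaults.  The smd: namespace and its properties are illustrative.
    from rdflib import Graph

    taxonomy_ttl = """
    @prefix smd:  <http://example.org/social-media-data#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    smd:SocialMediaData a owl:Class .
    smd:ServiceData    rdfs:subClassOf smd:SocialMediaData .
    smd:DisclosedData  rdfs:subClassOf smd:SocialMediaData .
    smd:EntrustedData  rdfs:subClassOf smd:SocialMediaData .
    smd:IncidentalData rdfs:subClassOf smd:SocialMediaData .
    smd:BehavioralData rdfs:subClassOf smd:SocialMediaData .
    smd:DerivedData    rdfs:subClassOf smd:SocialMediaData .

    # The poster does not control entrusted or incidental data.
    smd:EntrustedData  smd:controlledByPoster false .
    smd:IncidentalData smd:controlledByPoster false ;
                       smd:defaultAudience    smd:TaggedUsersOnly .
    """

    g = Graph()
    g.parse(data=taxonomy_ttl, format="turtle")

    # Which categories fall outside the poster's control?
    q = """
    PREFIX smd: <http://example.org/social-media-data#>
    SELECT ?cat WHERE { ?cat smd:controlledByPoster false }
    """
    for row in g.query(q):
        print(row.cat)

A fuller treatment would add defaults for the remaining categories (for instance, retention and audience rules for behavioral and derived data) and tie the categories into a site's access-control model.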