talk: Design and Implementation of an Attribute Based Access Controller using OpenStack Services

September 23rd, 2018

Design and Implementation of an Attribute Based Access Controller using OpenStack Services

Sharad Dixit, Graduate Student, UMBC
10:30am Monday, 24 September 2018, ITE346

With the advent of cloud computing, industry began a paradigm shift away from traditional computing toward the cloud, which meets organizations' present requirements such as on-demand resource allocation, lower capital expenditure, scalability, and flexibility, but also brings a variety of security and data-breach concerns. To address these concerns, organizations have started to adopt hybrid clouds, in which the underlying cloud infrastructure is operated by the organization itself yet remains accessible from anywhere in the world, giving it a well-defined security perimeter. However, most cloud platforms provide only a Role Based Access Controller, which is not adequate for complex organizational structures. We propose a novel mechanism that uses OpenStack services and Semantic Web technologies to build a module that evaluates multiple attributes of the user and of the requested project and checks them against access policy rules defined by the organization before granting access. An organization can therefore deploy our module to obtain robust and trustworthy access control, based on multiple attributes of the user and the requested project, in a hybrid cloud platform like OpenStack.
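
As a rough illustration of the kind of check such a module performs, here is a minimal, hypothetical sketch of an attribute-based access decision in Python; the attribute names, policy format, and the is_access_granted function are assumptions for exposition, not the actual OpenStack implementation described in the talk.

    # Hypothetical sketch of an attribute-based access control (ABAC) decision.
    # Attribute names and the policy structure are illustrative assumptions only.
    def is_access_granted(user_attrs, project_attrs, policy_rules):
        """Grant access only if some policy rule is satisfied by both the
        user's attributes and the requested project's attributes."""
        for rule in policy_rules:
            user_ok = all(user_attrs.get(k) == v for k, v in rule["user"].items())
            project_ok = all(project_attrs.get(k) == v for k, v in rule["project"].items())
            if user_ok and project_ok:
                return True
        return False

    # Example policy: only senior members of the research department may access
    # projects classified as restricted.
    policy_rules = [
        {"user": {"department": "research", "seniority": "senior"},
         "project": {"classification": "restricted"}},
    ]

    user_attrs = {"department": "research", "seniority": "senior"}
    project_attrs = {"classification": "restricted", "owner": "research"}

    print(is_access_granted(user_attrs, project_attrs, policy_rules))  # True

In the module described in the talk, the attributes and policy rules are expressed with OpenStack services and Semantic Web technologies rather than hard-coded dictionaries.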


Managing Cloud Storage Obliviously

May 24th, 2016

Vaishali Narkhede, Karuna Pande Joshi, Tim Finin, Seung Geol Choi, Adam Aviv and Daniel S. Roche, Managing Cloud Storage Obliviously, International Conference on Cloud Computing, IEEE Computer Society, June 2016.

Consumers want to ensure that their enterprise data is stored securely and obliviously on the cloud, such that neither the data objects nor their access patterns are revealed to anyone, including the cloud provider, in the public cloud environment. We have created a detailed ontology describing the oblivious cloud storage models and role-based access controls that should be in place to manage this risk. We have developed an algorithm to store cloud data using the oblivious data structure defined in this paper. We have also implemented the ObliviCloudManager application, which allows users to manage their cloud data by validating it before storing it in an oblivious data structure. Our application uses a role-based access control model and collection-based document management to store and retrieve data efficiently. Cloud consumers can use our system to define policies for storing data obliviously and to manage storage on untrusted cloud platforms, even if they are unfamiliar with the underlying technology and concepts of oblivious data structures.


Streamlining Management of Multiple Cloud Services

May 22nd, 2016


Aditi Gupta, Sudip Mittal, Karuna Pande Joshi, Claudia Pearce and Anupam Joshi, Streamlining Management of Multiple Cloud Services, IEEE International Conference on Cloud Computing, June 2016.

With the increase in the number of cloud services and service providers, manually analyzing Service Level Agreements (SLAs), comparing different service offerings, and checking conformance with regulations have become difficult tasks for customers. Cloud SLAs are policy documents describing the legal agreement between cloud providers and customers. An SLA specifies the commitments for availability and performance of services, the penalties associated with violations, and the procedure for customers to receive compensation in case of service disruptions. The aim of our research is to develop technology solutions for automated cloud service management using Semantic Web and Text Mining techniques. In this paper we discuss in detail the challenges in automating cloud services management and present our preliminary work on extracting knowledge from the SLAs of different cloud services. We extracted two types of information from the SLA documents that can be useful for end users. First, the relationship between service commitments and financial credits, which we represented by extending the cloud service ontology proposed in our previous research. Second, rules in the form of obligations and permissions, extracted from SLAs using modal and deontic logic formalizations. For our analysis, we considered six publicly available SLA documents from different cloud computing service providers.
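
As a toy illustration of the second kind of extraction (not the text-mining pipeline actually used in the paper), modal verbs can be mapped to deontic categories; the patterns and example clauses below are illustrative assumptions.

    import re

    # Toy sketch: classify SLA clauses as obligations or permissions based on
    # modal verbs. This is an assumption for exposition, not the paper's method.
    OBLIGATION = re.compile(r"\b(shall|must|is required to)\b", re.IGNORECASE)
    PERMISSION = re.compile(r"\b(may|can|is permitted to)\b", re.IGNORECASE)

    def classify_clause(clause):
        if OBLIGATION.search(clause):
            return "obligation"
        if PERMISSION.search(clause):
            return "permission"
        return "other"

    clauses = [
        "The provider shall credit the customer 10% of the monthly fee if uptime falls below 99.9%.",
        "The customer may request a service credit within 30 days of the incident.",
    ]
    for clause in clauses:
        print(classify_clause(clause), "-", clause)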


Automatic Extraction of Metrics from SLAs for Cloud Service Management

May 7th, 2016

 

Sudip Mittal, Karuna Joshi, Claudia Pearce, and Anupam Joshi, Automatic Extraction of Metrics from SLAs for Cloud Service Management, IEEE International Conference on Cloud Engineering, 4 April 2016.

To effectively manage cloud based services, organizations need to continuously monitor the performance metrics listed in their cloud service contracts. However, these legal documents, like Service Level Agreements (SLAs) or privacy policy documents, are currently managed as plain text files meant principally for human consumption. Additionally, providers often define their own performance metrics for their services. These factors hinder the automation of SLA management and require manual effort to monitor cloud service performance. We have significantly automated the process of extracting, managing and monitoring cloud SLAs using natural language processing techniques and Semantic Web technologies. In this paper, we describe our technical approach and the ontology that we have developed to describe, manage, and reason about cloud SLAs. We also describe the prototype system that we have developed to automatically extract information from legal Terms of Service that are available on cloud provider websites.


Knowledge Extraction from Cloud Service Level Agreements

November 1st, 2015

Sudip Mittal, Karuna Pande Joshi, Claudia Pearce, and Anupam Joshi, Parallelizing Natural Language Techniques for Knowledge Extraction from Cloud Service Level Agreements, IEEE International Conference on Big Data, October, 2015.

To efficiently utilize their cloud based services, consumers have to continuously monitor and manage the Service Level Agreements (SLAs) that define the service performance measures. Currently this is a time- and labor-intensive process, since the SLAs are primarily stored as text documents. We have significantly automated the process of extracting, managing and monitoring cloud SLAs using natural language processing techniques and Semantic Web technologies. In this paper we describe our prototype system, which uses a Hadoop cluster to extract knowledge from unstructured legal text documents. For this prototype we considered publicly available SLA and terms-of-service documents of various cloud providers. We apply established natural language processing techniques in parallel to speed up the creation of a cloud legal knowledge base. Our system considerably speeds up knowledge base creation and can also be used in other domains that have unstructured data.
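
As a rough sketch of how such per-document processing parallelizes on a Hadoop cluster, a Hadoop Streaming mapper like the one below could run an extraction step over each document independently; the input format and the extract_candidates helper are hypothetical placeholders, not the system described in the paper.

    #!/usr/bin/env python
    # Hypothetical Hadoop Streaming mapper. Each input line is assumed to be
    # "<doc_id>\t<document text>"; a reducer would aggregate results per document.
    import sys

    def extract_candidates(text):
        """Placeholder for an NLP step, e.g. keeping sentences that mention
        a percentage (a likely service-level commitment)."""
        return [s.strip() for s in text.split(".") if "%" in s]

    for line in sys.stdin:
        doc_id, _, text = line.rstrip("\n").partition("\t")
        for candidate in extract_candidates(text):
            print(doc_id + "\t" + candidate)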


Public tutorials on high performance computing research and technologies

December 13th, 2012

 

The Center for Hybrid Multicore Productivity Research is a collaborative research center sponsored by the National Science Foundation with two university partners (UMBC and University of California San Diego), six government, and seven industry members. The Center's research is focused on addressing productivity, performance, and scalability issues in meeting the insatiable computational demands of its members' applications through the continuous evolution of multicore architectures and open source tools.

As part of its annual industrial advisory board meeting next week, the center will hold an afternoon of public tutorials from 1:00pm to 4:00pm on Monday, 17 December 2012 in room 456 of the ITE building at UMBC. The tutorials will be presented by students doing research sponsored by the Center and feature some of the underlying technologies being used and some of their applications. The tutorials are:

  • GPGPUs – Tim Blattner and Fahad Zafa
  • Cloud Policies – Karuna Joshi
  • Human Sensor Networks – Oleg Aulov
  • Machine Learning Disaster Warnings – Han Dong
  • Graph 500 – Tyler Simon
  • HBase – Phuong Nguyen

The tutorial talks are free and open to the public. If you plan to attend, please RSVP by email to Dr. Valerie L. Thomas, valeriet@umbc.edu.


NIST Big Data Workshop, 13-14 June 2012

May 31st, 2012

NIST will hold a Big Data Workshop 13-14 June 2012 in Gaithersburg to explore key national priority topics in support of the White House Big Data Initiative. The workshop is being held in collaboration with the NSF sponsored Center for Hybrid Multicore Productivity Research, a collaboration between UMBC, Georgia Tech and UCSD.

This first workshop will discuss examples from science, health, disaster management, security, and finance, as well as topics in emerging technology areas, including analytics and architectures. Two issues of special interest are (1) identifying the core technologies needed to collect, store, preserve, manage, analyze, and share big data that could be standardized, and (2) developing measurements to ensure the accuracy and robustness of big data methods.

The workshop format will be a mixture of sessions, panels, and posters. Session speakers and panel members are by invitation only, but all interested parties are encouraged to submit extended abstracts and/or posters.

The workshop is being held at NIST’s Gaithersburg facility and is free, although online pre-registration is required. A preliminary agenda is available, though it is subject to change as the workshop date approaches.


Make mincemeat out of MapReduce with Python

October 1st, 2011

mincemeat.py is a super-lightweight, open source Python implementation of the popular MapReduce distributed computing framework that depends only on the Python Standard Library.

Just install the single source file on a set of machines and invoke the script on each of them with a password (for authentication) and the IP address of the server, and your workers are good to go. Then, using the same package, run a simple server program that defines your map function, reduce function, and data source.
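
For example, a server for the canonical word-count job looks roughly like this (adapted from the example in the project's documentation; the data and password are placeholders):

    import mincemeat

    data = ["Humpty Dumpty sat on a wall",
            "Humpty Dumpty had a great fall",
            "All the King's horses and all the King's men",
            "Couldn't put Humpty together again"]
    # The data source maps task keys to input values.
    datasource = dict(enumerate(data))

    def mapfn(k, v):
        for w in v.split():
            yield w, 1

    def reducefn(k, vs):
        return sum(vs)

    s = mincemeat.Server()
    s.datasource = datasource
    s.mapfn = mapfn
    s.reducefn = reducefn

    # Blocks until workers connect, run the job, and return their results.
    results = s.run_server(password="changeme")
    print(results)

Workers on the other machines are then started with something like "python mincemeat.py -p changeme <server-ip>".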

While it’s only 350 lines of Python, the package looks great for teaching or experimenting with the MapReduce concept as well as being potentially useful if you work in Python.


CloudCamp Baltimore, 6-10pm Wed Mar 9, 2011

February 24th, 2011

There will be a free CloudCamp meeting in Baltimore from 6:00pm to 10:00pm on Wednesday, March 9th, at the Baltimore Marriott Waterfront. CloudCamps are participant-driven unconferences where users of cloud computing technologies meet to network and share ideas, experiences, challenges and solutions. The event is free, but participants are asked to register to ensure there is enough food and refreshments.

Here is the current, tentative schedule:

6:00pm – Registration & Networking (food/drink)
6:30pm – Opening Introductions
6:45pm – Lightning Talks (5 minutes each)
7:30pm – Unpanel
8:00pm – Organize Unconference
8:15pm – Unconference Breakout Session Round 1
9:00pm – Unconference Breakout Session Round 2
9:45pm – Wrap-up
10:00pm – Find somewhere for post-event networking

Contact the organizers if you are interested in giving a five-minute lightning talk or leading a breakout session.


UMBC hosts Frontiers of Multi-Core Computing Workshop

September 11th, 2010

UMBC’s Multicore Computational Center will host the Second Workshop on Frontiers of Multi-Core Computing on 22-23 September 2010. The workshop will bring together a wide range of people from universities, industry and government to exchange ideas, discuss issues, and develop strategies for coping with the challenges of parallel and multicore computing.

“Multi- (e.g., Intel Westmere and IBM Power7) and many-core (e.g., NVIDIA Tesla and AMD FireStream GPUs) microprocessors are enabling more compute- and data-intensive computation in desktop computers, clusters, and leadership supercomputers. However efficient utilization of these microprocessors is still a very challenging issue. Their differing architectures require significantly different programming paradigms when adapting real-world applications. The actual porting costs are actively debated, as well as the relative performance between GPUs and CPUs.”

The workshop is free but those interested should register online. See the workshop schedule for details on presentations and timing.


UMBC Multicore Computational Center

June 15th, 2009

Joab Jackson (UMBC ’90) wrote a nice article on UMBC’s Multicore Computational Center for the current issue of UMBC Magazine. From The Power of Parallels:

“In July 2007, IBM gave UMBC computer science professors Milton Halem and Yelena Yesha a grant to launch the center with cash and equipment that have totaled more than $1 million over the past three years. Supporting funding from NASA also helped the effort.

    “Not only are we ahead of the curve,” says Charles Nicholas, chair of the department of computer science and electrical engineering, “but we hope to stay ahead of the curve…. The partnerships with IBM will let us keep the technologies up to date.”

Government and private enterprise are in dire need of “trained graduate students who know how to apply the new methods of parallel programming to the problems they face,” Halem says. “We’re one of the few schools in the nation that is teaching these courses.”


Tutorial: Hadoop on Windows with Eclipse

April 9th, 2009

Hadoop has become one of the most popular frameworks to exploit parallelism on a computing cluster. You don’t actually need access to a cluster to try Hadoop, learn how to use it, and develop code to solve your own problems.

UMBC Ph.D. student Vlad Korolev has written an excellent tutorial, Hadoop on Windows with Eclipse, showing how to install and use Hadoop on a single computer running Microsoft Windows. It also covers the Eclipse Hadoop plugin, which enables you to create and run Hadoop projects from Eclipse. In addition to step-by-step instructions, the tutorial has short videos documenting the process.

If you want to explore Hadoop and are comfortable developing Java programs in Eclipse on a Windows box, this tutorial will get you going. Once you have mastered Hadoop and have developed your first project using it, you can go about finding a cluster to run it on.