UMBC eBiquity Blog

talk: Penetration Testing a Simulated Automotive Ethernet Environment

Tim Finin, 11:06am 15 October 2017

Penetration Testing a Simulated Automotive Ethernet Environment

Kenneth Truex

11:00am Monday, 9 October 2017, ITE 346

The capabilities of modern-day automobiles have far exceeded what Robert Bosch GmbH could have imagined when it proposed the Controller Area Network (CAN) bus back in 1986. Over time, drivers wanted more functionality, comfort, and safety in their automobiles, creating a burden for automotive manufacturers. These driver demands brought many innovations to the core in-vehicle network protocol. Modern automobiles that have a video-based infotainment system or any type of camera-assisted functionality, such as an Advanced Driver Assistance System (ADAS), use Ethernet as their network backbone. This is because the original CAN specification only allowed for up to eight bytes of data per message on a bus rated at 1 Mbps, far less than what more advanced video-based automotive systems require. The Ethernet protocol allows for 1500 bytes of data per packet on a network rated for up to 100 Mbps. This led the automotive industry to adopt Ethernet as the core protocol, overcoming most of the limitations posed by CAN. With Ethernet adopted as the protocol for automotive networks, however, certain attack vectors become available for black-hat hackers to exploit in order to put the vehicle in an unsafe condition. This thesis will create a simulated automotive Ethernet environment using the CANoe network simulation platform created by Vector. A penetration test will then be conducted on the simulated environment in order to discover attacks that pose a threat to automotive Ethernet networks. These attacks will be conducted from the perspective of an attacker with full access to the vehicle under test and will cover all three sides of the Confidentiality, Integrity, Availability (CIA) triad. In conclusion, this thesis will propose several Ethernet-specific defense mechanisms that can be implemented in an automotive network topology to reduce the attack surface and allow for a safer end-user experience.
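As a rough back-of-the-envelope illustration of the bandwidth gap described above (not part of the thesis), the sketch below compares a naive transfer of a hypothetical 150 KB compressed camera image over classic CAN and over automotive Ethernet, using the payload sizes and bit rates quoted in the abstract and ignoring all protocol overhead.

# Back-of-the-envelope comparison of classic CAN vs. automotive Ethernet
# for moving one (hypothetical) 150 KB compressed camera image.
# Payload sizes and bit rates are the figures quoted in the abstract;
# protocol overhead (headers, arbitration, inter-frame gaps) is ignored.

IMAGE_BYTES = 150 * 1024          # assumed size of one compressed camera frame

def frames_and_time(payload_bytes, bitrate_bps, image_bytes=IMAGE_BYTES):
    """Return (number of frames, seconds on the wire) for a naive transfer."""
    frames = -(-image_bytes // payload_bytes)          # ceiling division
    seconds = (image_bytes * 8) / bitrate_bps          # payload bits / link rate
    return frames, seconds

can_frames, can_secs = frames_and_time(payload_bytes=8,    bitrate_bps=1_000_000)
eth_frames, eth_secs = frames_and_time(payload_bytes=1500, bitrate_bps=100_000_000)

print(f"Classic CAN : {can_frames:6d} frames, ~{can_secs*1000:7.1f} ms per image")
print(f"Ethernet    : {eth_frames:6d} frames, ~{eth_secs*1000:7.1f} ms per image")

Even under these generous assumptions, the classic CAN bus needs thousands of frames and over a second per image, which is why video-based systems push the industry toward Ethernet.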


 

talk: K. Mayes on Attacks on Smart Cards, RFIDs and Embedded Systems, 10am 10/10

Tim Finin, 7:53pm 8 October 2017

Attacks on Smart Cards, RFIDs and Embedded Systems

Prof. Keith Mayes
Royal Holloway University of London

10-11:00am Tuesday, 10 October 2017, ITE 325, UMBC

Smart Cards and RFIDs exist with a range of capabilities and are used in their billions throughout the world. The simpler devices have poor security; however, for many years, high-end smart cards have been used successfully in a range of systems such as banking, passports, mobile communication, and satellite TV. Fundamental to their success is a specialist design that offers remarkable resistance to a wide range of attacks, including physical, side-channel, and fault attacks. This talk describes a range of known attacks and the countermeasures that are employed to defeat them.

Prof. Keith Mayes is the Head of the School of Mathematics and Information Security at Royal Holloway University of London. He received his BSc (Hons) in Electronic Engineering in 1983 from the University of Bath, and his PhD degree in Digital Image Processing in 1987. He is an active researcher/author with 100+ publications in numerous conferences, books and journals. His interests include the design of secure protocols, communications architectures and security tokens as well as associated attacks/countermeasures. He is a Fellow of the Institution of Engineering and Technology, a Founder Associate Member of the Institute of Information Security Professionals, a Member of the Licensing Executives Society and a member of the editorial board of the Journal of Theoretical and Applied Electronic Commerce Research (JTAER).


 

talk: Automated Knowledge Extraction from the Federal Acquisition Regulations System

Tim Finin, 4:26pm 23 September 2017

In this week’s meeting, Srishty Saha, Michael Aebig and Jiayong Lin will talk about their work on extracting knowledge from the US FAR System.

Automated Knowledge Extraction from the Federal Acquisition Regulations System

Srishty Saha, Michael Aebig and Jiayong Lin

11am-12pm Monday, 25 September 2017, ITE346, UMBC

The Federal Acquisition Regulations System (FARS) within the Code of Federal Regulations (CFR) includes facts and rules for individuals and organizations seeking to do business with the US Federal government. Parsing and extracting knowledge from such lengthy regulation documents is currently done manually, which is time-consuming and labor-intensive. Hence, developing a cognitive assistant for the automated analysis of such legal documents has become a necessity. We are developing a semantically rich legal knowledge base representing legal entities and their relationships, semantically similar terminologies, deontic expressions, and cross-referenced legal facts and rules.
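As a rough illustration of what such a knowledge base might look like (the namespace, properties, and clause below are invented for this example and are not the project's actual schema), a single FARS-style rule could be captured as RDF triples and queried with rdflib:

# Illustrative sketch only -- not the group's actual schema: one hypothetical
# FARS clause represented as triples, with a deontic modality and a
# cross-reference to another section, then retrieved with a conjunctive query.
from rdflib import Graph, Literal, Namespace, RDF

FAR = Namespace("http://example.org/far#")   # hypothetical namespace

g = Graph()
clause = FAR["Section_52_219_8"]             # example clause identifier
g.add((clause, RDF.type, FAR.Rule))
g.add((clause, FAR.deonticModality, Literal("obligation")))
g.add((clause, FAR.appliesTo, FAR.SmallBusinessConcern))
g.add((clause, FAR.crossReferences, FAR["Section_19_702"]))
g.add((clause, FAR.text, Literal("The contractor shall ...")))

# Which rules impose obligations, and which sections do they cite?
q = """
SELECT ?rule ?ref WHERE {
  ?rule a far:Rule ;
        far:deonticModality "obligation" ;
        far:crossReferences ?ref .
}"""
for row in g.query(q, initNs={"far": FAR}):
    print(row.rule, "->", row.ref)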


 

2018 Ontology Summit: Ontologies in Context

Tim Finin, 5:55pm 12 September 2017

2018 Ontology Summit: Ontologies in Context

The Ontology Summit is an annual series of online and in-person events that involves the ontology community and communities related to each year’s topic. The topic chosen for the 2018 Ontology Summit will be Ontologies in Context, which the summit describes as follows.

“In general, a context is defined to be the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed. Some examples of synonyms include circumstances, conditions, factors, state of affairs, situation, background, scene, setting, and frame of reference. There are many meanings of “context” in general, and also for ontologies in particular. The summit this year will survey these meanings and identify the research problems that must be solved so that contexts can succeed in achieving the full understanding and assessment of an ontology.”

Each year’s Summit comprises a series of both online and face-to-face events that span about three months. These include a vigorous three-month online discourse on the theme, online panel discussions, and research activities, culminating in a two-day face-to-face workshop and symposium.

Over the next two months, there will be a sequence of weekly online meetings to discuss, plan, and develop the 2018 topic. The summit itself will start in January with weekly online sessions of invited speakers. Visit the 2018 Ontology Summit site for more information and to see how you can participate in the planning sessions.


 

Dissertation: Context-Dependent Privacy and Security Management on Mobile Devices

Tim Finin, 10:20am 10 September 2017

Context-Dependent Privacy and Security Management on Mobile Devices

Prajit Kumar Das, Context-Dependent Privacy and Security Management on Mobile Devices, Ph.D. Dissertation, University of Maryland, Baltimore County, September 2017.

There are ongoing security and privacy concerns regarding mobile platforms, which are being used by a growing number of citizens. The security and privacy models typically used by mobile platforms rely on one-time permission acquisition mechanisms. However, modifying access rights after initial authorization in mobile systems is often too tedious and complicated for users. User studies show that a typical user does not understand the permissions requested by applications or is too eager to use the applications to care about the permission implications. For example, the Brightest Flashlight application was reported to have logged precise locations and unique user identifiers, which have nothing to do with a flashlight application’s intended functionality, yet more than 50 million users used a version of this application that required them to grant this permission. Given the penetration of mobile devices into our lives, a fine-grained, context-dependent security and privacy control approach needs to be created.

We have created Mithril, an end-to-end mobile access control framework that allows us to capture access control needs for specific users by observing violations of known policies. The framework studies mobile application executables to better inform users of the risks associated with using certain applications. The policy capture process involves an iterative user feedback process that captures the policy modifications required to mediate observed violations. The precision of the policy is used to determine convergence of the policy capture process. Policy rules in the system are written using Semantic Web technologies and the Platys ontology to define a hierarchical notion of context. Policy rule antecedents consist of context elements derived using the Platys ontology, a query engine, an inference mechanism, and mobile sensors. We performed a user study that demonstrates the feasibility of using our violation-driven policy capture process to gather user-specific policy modifications.

We contribute to the static and dynamic study of mobile applications by defining “application behavior” as a possible way of understanding mobile applications and creating access control policies for them. Our user study also shows that, unlike our behavior-based policy, a “deny by default” mechanism hampers the usability of access control systems. We also show that the inclusion of crowd-sourced policies leads to a further reduction in user burden and the need for engagement while capturing context-based access control policy. We enrich knowledge about mobile “application behavior” and expose this knowledge through the Mobipedia knowledge base. We also extend context synthesis for semantic presence detection on mobile devices by combining Bluetooth Low Energy beacons and Nearby Messaging services from Google.
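The sketch below is a highly simplified illustration of the idea of context-dependent access control; it is not Mithril's actual rule language, which is expressed with Semantic Web technologies and the Platys ontology, and every name in it is hypothetical.

# Toy illustration of context-dependent access control (not Mithril's rule
# language): a decision depends on the requesting app, the resource, and the
# user's current context, rather than on a one-time install-time grant.
from dataclasses import dataclass

@dataclass
class Context:
    location: str      # e.g., "work", "home" (hypothetical context elements)
    activity: str      # e.g., "meeting", "driving"

@dataclass
class Rule:
    app: str
    resource: str
    context: Context   # "*" fields act as wildcards
    decision: str      # "allow" or "deny"

policy = [
    Rule("flashlight_app", "fine_location", Context("work", "meeting"), "deny"),
    Rule("navigation_app", "fine_location", Context("*", "driving"), "allow"),
]

def decide(app, resource, ctx, rules, default="deny"):
    """Return the first matching rule's decision; fall back to a default."""
    for r in rules:
        if (r.app == app and r.resource == resource
                and r.context.location in ("*", ctx.location)
                and r.context.activity in ("*", ctx.activity)):
            return r.decision
    return default

print(decide("flashlight_app", "fine_location", Context("work", "meeting"), policy))

In the dissertation such rules are captured iteratively from observed policy violations and user feedback rather than written by hand as above.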


 

New paper: Cognitive Assistance for Automating the Analysis of the Federal Acquisition Regulations System

Tim Finin, 8:25am 5 September 2017

Cognitive Assistance for Automating the Analysis of the Federal Acquisition Regulations System

Srishty Saha and Karuna Pande Joshi, Cognitive Assistance for Automating the Analysis of the Federal Acquisition Regulations System, AAAI Fall Symposium on Cognitive Assistance in Government and Public Sector Applications, AAAI Press, November 2017

Government regulations are critical to understanding how to do business with a government entity and receive other benefits. However, government regulations are also notoriously long and organized in ways that can be confusing for novice users. Developing cognitive assistance tools that remove some of the burden from human users is of potential benefit to a variety of users. The volume of data found in United States federal government regulations suggests a multiple-step approach: process the data into machine-readable text, create an automated legal knowledge base capturing various facts and rules, and eventually build a legal question-and-answer system to acquire understanding from various regulations and provisions. The work discussed in this paper represents our initial efforts to build a framework for the Federal Acquisition Regulations System (Title 48, Code of Federal Regulations) in order to create an efficient legal knowledge base representing relationships between various legal elements, semantically similar terminologies, deontic expressions, and cross-referenced legal facts and rules.


 

New paper: Generating Digital Twin models using Knowledge Graphs for Industrial Production Lines

Tim Finin, 9:12pm 2 September 2017

Generating Digital Twin models using Knowledge Graphs for Industrial Production Lines

Agniva Banerjee, Raka Dalal, Sudip Mittal and Karuna Pande Joshi, Generating Digital Twin models using Knowledge Graphs for Industrial Production Lines, Workshop on Industrial Knowledge Graphs, co-located with the 9th International ACM Web Science Conference, 2017.

Digital Twin models are computerized clones of physical assets that can be used for in-depth analysis. Industrial production lines tend to have multiple sensors that generate near real-time status information for production. Industrial Internet of Things datasets are difficult to analyze and to draw valuable insights from, such as points of failure, estimated overhead, etc. In this paper we introduce a simple way of formalizing the knowledge coming from sensors in industrial production lines as Digital Twin models. We present a way to extract and infer knowledge from large-scale production line data and to enhance manufacturing process management with reasoning capabilities by introducing a semantic query mechanism. Our system primarily utilizes a graph-based query language equivalent to conjunctive queries and has been enriched with inference rules.
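As a minimal sketch of the kind of semantic query mechanism described (the vocabulary, data, and threshold below are invented for illustration and are not the paper's actual model), a small digital-twin graph can be queried with a conjunctive pattern to flag possible points of failure:

# Illustrative sketch with an invented vocabulary: a tiny digital-twin graph
# for a production line, queried with a conjunctive (basic graph pattern)
# SPARQL query to flag stations whose sensors report abnormal readings.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

DT = Namespace("http://example.org/twin#")   # hypothetical namespace
g = Graph()

g.add((DT.Station3, RDF.type, DT.Station))
g.add((DT.TempSensor7, RDF.type, DT.Sensor))
g.add((DT.TempSensor7, DT.attachedTo, DT.Station3))
g.add((DT.TempSensor7, DT.latestReading, Literal(92.5, datatype=XSD.double)))

q = """
SELECT ?station ?value WHERE {
  ?sensor a dt:Sensor ;
          dt:attachedTo ?station ;
          dt:latestReading ?value .
  FILTER (?value > 85.0)
}"""
for row in g.query(q, initNs={"dt": DT}):
    print(f"possible point of failure: {row.station} (reading {row.value})")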


 

PhD defense: Prajit Das, Context-dependent privacy and security management on mobile devices

Tim Finin, 6:22pm 17 August 2017

Ph.D. Dissertation Defense

Context-dependent privacy and security management on mobile devices

Prajit Kumar Das

8:00-11:00am Tuesday, 22 August 2017, ITE325b, UMBC

There are ongoing security and privacy concerns regarding mobile platforms, which are being used by a growing number of citizens. The security and privacy models typically used by mobile platforms rely on one-time permission acquisition mechanisms. However, modifying access rights after initial authorization in mobile systems is often too tedious and complicated for users. User studies show that a typical user does not understand the permissions requested by applications or is too eager to use the applications to care about the permission implications. For example, the Brightest Flashlight application was reported to have logged precise locations and unique user identifiers, which have nothing to do with a flashlight application’s intended functionality, yet more than 50 million users used a version of this application that required them to grant this permission. Given the penetration of mobile devices into our lives, a fine-grained, context-dependent security and privacy control approach needs to be created.

We have created Mithril, an end-to-end mobile access control framework that allows us to capture access control needs for specific users by observing violations of known policies. The framework studies mobile application executables to better inform users of the risks associated with using certain applications. The policy capture process involves an iterative user feedback process that captures the policy modifications required to mediate observed violations. The precision of the policy is used to determine convergence of the policy capture process. Policy rules in the system are written using Semantic Web technologies and the Platys ontology to define a hierarchical notion of context. Policy rule antecedents consist of context elements derived using the Platys ontology, a query engine, an inference mechanism, and mobile sensors. We performed a user study that demonstrates the feasibility of using our violation-driven policy capture process to gather user-specific policy modifications.

We contribute to the static and dynamic study of mobile applications by defining “application behavior” as a possible way of understanding mobile applications and creating access control policies for them. Our user study also shows that, unlike our behavior-based policy, a “deny by default” mechanism hampers the usability of access control systems. We also show that the inclusion of crowd-sourced policies leads to a further reduction in user burden and the need for engagement while capturing context-based access control policy. We enrich knowledge about mobile “application behavior” and expose this knowledge through the Mobipedia knowledge base. We also extend context synthesis for semantic presence detection on mobile devices by combining Bluetooth Low Energy beacons and Nearby Messaging services from Google.

Committee: Drs. Anupam Joshi (chair), Tim Finin (co-chair), Tim Oates, Nilanjan Banerjee, Arkady Zaslavsky (CSIRO), Dipanjan Chakraborty (Shopperts)


 

PhD defense: Deep Representation of Lyrical Style and Semantics for Music Recommendation

Tim Finin, 8:38pm 16 July 2017

Dissertation Defense

Deep Representation of Lyrical Style and Semantics for Music Recommendation

Abhay L. Kashyap

11:00-1:00 Thursday, 20 July 2017, ITE 346

In the age of music streaming, effective recommendations are important for music discovery and a personalized user experience. Collaborative filtering based recommenders suffer from popularity bias and cold-start problems, which are commonly mitigated by content features. For music, research on content-based methods has mainly focused on the acoustic domain, while lyrical content has received little attention. Lyrics contain information about a song’s topic and sentiment that cannot be easily extracted from the audio. This is especially important for lyrics-centric genres like Rap, which was the most streamed genre in 2016. The goal of this dissertation is to explore and evaluate different lyrical content features that could be useful for content, context and emotion based models for music recommendation systems.

With Rap as the primary use case, this dissertation focuses on featurizing two main aspects of lyrics: their artistic style of composition and their semantic content. For lyrical style, a suite of high-level rhyme density features is extracted in addition to literary features like the use of figurative language, profanity, and vocabulary strength. In contrast to these engineered features, Convolutional Neural Networks (CNNs) are used to automatically learn rhyme patterns and other relevant features. For semantics, lyrics are represented using both traditional IR techniques and more recent neural embedding methods.

These lyrical features are evaluated for artist identification and compared with artist and song similarity measures from a real-world collaborative filtering based recommendation system from Last.fm. Both rhyme and literary features are shown to serve as strong indicators for characterizing artists, with feature learning methods like CNNs achieving comparable results. For artist and song similarity, a strong relationship was observed between these features and the way users consume music, while neural embedding methods significantly outperformed LSA. Finally, this work is accompanied by a web application, Rapalytics.com, dedicated to visualizing all these lyrical features, which has been featured on a number of media outlets, most notably Vox, attn:, and Metro.
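As a toy illustration only, the snippet below computes a crude, orthography-based stand-in for a rhyme-density-style feature; the dissertation's rhyme features are phoneme-based and considerably more sophisticated, and the verse and suffix heuristic here are invented for the example.

# Toy illustration only: a crude stand-in for a "rhyme density" style feature,
# scoring how often consecutive line endings share a spelling suffix.
def toy_rhyme_density(lyrics, suffix_len=3):
    """Fraction of consecutive line pairs whose final words share a suffix."""
    lines = [l.strip().lower() for l in lyrics.splitlines() if l.strip()]
    endings = [l.split()[-1] for l in lines]
    pairs = list(zip(endings, endings[1:]))
    if not pairs:
        return 0.0
    hits = sum(a[-suffix_len:] == b[-suffix_len:] for a, b in pairs)
    return hits / len(pairs)

verse = """I keep the rhythm in my mind
a steady beat I always find
the crowd is moving in the light
we keep it going through the night"""

print(f"toy rhyme density: {toy_rhyme_density(verse):.2f}")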

Committee: Drs. Tim Finin (chair), Anupam Joshi, Tim Oates, Cynthia Matuszek and Pranam Kolari (Walmart Labs)


 

PhD Proposal: Analysis of Irregular Event Sequences using Deep Learning, Reinforcement Learning, and Visualization

Tim Finin, 9:55pm 12 July 2017

Analysis of Irregular Event Sequences using Deep Learning, Reinforcement Learning, and Visualization

Filip Dabek

11:00-1:00 Thursday, 13 July 2017, ITE 346, UMBC

History is nothing but a catalogued series of events organized into data. Amazon, the largest online retailer in the world, processes over 2,000 orders per minute. Orders come from customers on a recurring basis through subscriptions or as one-off spontaneous purchases, resulting in each customer exhibiting their own behavioral pattern in the way they place orders throughout the year. For a company such as Amazon, which generates over $130 billion of revenue each year, understanding and uncovering the hidden patterns and trends within this data is paramount to improving the efficiency of its infrastructure, ranging from the management of inventory within its warehouses to the distribution of its labor force and the preparation of its online systems for user load. With the ever-increasing availability of big data, problems such as these are no longer limited to large corporations but are experienced across a wide range of domains and faced by analysts and researchers each and every day.

While many event analysis and time series tools have been developed for analyzing such datasets, most approaches tend to target clean and evenly spaced data. When faced with noisy or irregular data, the usual recommendation is a pre-processing step that converts and transforms the data into a regular form. This transformation arguably interferes at a fundamental level with how the data is represented, and may irrevocably bias the way in which results are obtained. Therefore, operating on raw data, in its noisy natural form, is necessary to ensure that the insights gathered through analysis are accurate and valid.
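A tiny sketch of that point, with invented numbers: binning an irregular event sequence onto a regular grid collapses the bursts and lulls that the raw inter-event gaps preserve.

# Illustrative numbers only: regularizing an irregular event sequence
# discards timing structure that the raw representation keeps.
event_times = [0.0, 0.4, 0.5, 6.0, 6.1, 12.9]     # hypothetical order timestamps (hours)

# Raw representation: inter-event gaps preserve bursts and long lulls.
gaps = [round(b - a, 1) for a, b in zip(event_times, event_times[1:])]
print("inter-event gaps:", gaps)                   # [0.4, 0.1, 5.5, 0.1, 6.8]

# Regularized representation: counts per fixed 6-hour bin.  The bursts and
# the long quiet periods collapse into nearly uniform bin counts.
bin_width = 6.0
n_bins = int(max(event_times) // bin_width) + 1
counts = [0] * n_bins
for t in event_times:
    counts[int(t // bin_width)] += 1
print("events per 6-hour bin:", counts)            # [3, 2, 1]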

In this dissertation, novel approaches are presented for analyzing irregular event sequences using techniques spanning deep learning, reinforcement learning, and visualization. We show how common tasks in event analysis can be performed directly on an irregular event dataset without requiring a transformation that alters the natural representation of the process from which the data was captured. The three tasks we showcase are: (i) summarization of large event datasets, (ii) modeling the processes that create events, and (iii) predicting future events that will occur.

Committee: Drs. Tim Oates (Chair), Jesus Caban, Penny Rheingans, Jian Chen, Tim Finin

 


 

Jennifer Sleeman dissertation defense: Dynamic Data Assimilation for Topic Modeling

Tim Finin, 2:54pm 27 June 2017

Ph.D. Dissertation Defense

Dynamic Data Assimilation for Topic Modeling

Jennifer Sleeman

9:00am Thursday, 29 June 2017, ITE 325b, UMBC

Understanding how a particular discipline such as climate science evolves over time has received renewed interest. By understanding this evolution, predicting the future direction of that discipline becomes more achievable. Dynamic Topic Modeling (DTM) has been applied to a number of disciplines to model topic evolution as a means to learn how a particular scientific discipline and its underlying concepts are changing. Understanding how a discipline evolves, and its internal and external influences, can be complicated by how the information retrieved over time is integrated. There are different techniques used to integrate sources of information; however, less research has been dedicated to understanding how to integrate these sources over time. The method of data assimilation is commonly used in a number of scientific disciplines to both understand and make predictions of various phenomena, using numerical models and observational data assimilated over time.

In this dissertation, I introduce a novel algorithm for scientific data assimilation, called Dynamic Data Assimilation for Topic Modeling (DDATM), which uses a new cross-domain divergence method (CDDM) and DTM. Using DDATM, observational data in the form of full-text research papers can be assimilated over time, starting from an initial model. DDATM can be used as a way to integrate data from multiple sources and, due to its robustness, can exploit the assimilated observational information to better tolerate missing model information. When compared with a DTM model, the assimilated model is shown to have better performance using standard topic modeling measures, including perplexity and topic coherence. The DDATM method is suitable for prediction and results in higher likelihood for subsequent documents. DDATM is able to overcome missing information during the assimilation process when compared with a DTM model. CDDM generalizes as a method that can also bring together multiple disciplines into one cohesive model, enabling the identification of related concepts and documents across disciplines and time periods. Finally, grounding the topic modeling process with an ontology improves the quality of the topics and enables a more granular understanding of concept relatedness and cross-domain influence.
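For reference, the held-out perplexity commonly used to compare topic models is defined below (this is the standard formulation; the dissertation may use a variant), where lower values indicate better prediction of unseen documents.

% Held-out perplexity over a test set of D documents, where w_d is the
% d-th document and N_d its number of tokens; lower is better.
\mathrm{perplexity}(D_{\mathrm{test}}) =
  \exp\!\left( - \frac{\sum_{d=1}^{D} \log p(\mathbf{w}_d)}{\sum_{d=1}^{D} N_d} \right)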

The results of this dissertation are demonstrated and evaluated by applying DDATM to 30 years of reports from the Intergovernmental Panel on Climate Change (IPCC) along with more than 150,000 documents that they cite to show the evolution of the physical basis of climate change.

Committee Members: Drs. Tim Finin (co-advisor), Milton Halem (co-advisor), Anupam Joshi, Tim Oates, Cynthia Matuszek, Mark Cane, Rafael Alonso


 

UMBC Data Science Graduate Program Starts Fall 2017

Tim Finin, 9:15pm 16 June 2017

 

UMBC Data Science Graduate Programs

UMBC’s Data Science Master’s program prepares students from a wide range of disciplinary backgrounds for careers in data science. In the core courses, students will gain a thorough understanding of data science through classes that highlight machine learning, data analysis, data management, ethical and legal considerations, and more.

Students will develop an in-depth understanding of the basic computing principles behind data science, including, but not limited to, data ingestion, curation, and cleaning, and the four Vs of data science (Volume, Variety, Velocity, and Veracity) as well as the implicit fifth V, Value. By applying the principles of data science to the analysis of problems within specific domains expressed through the program pathways, students will gain practical, real-world, industry-relevant experience.

The MPS in Data Science is an industry-recognized credential, and the program prepares students with the technical and management skills they need to succeed in the workplace.

For more information and to apply online, see the Data Science MPS site.