November 29th, 2016
Dealing with Dubious Facts
in Knowledge Graphs
1:00-3:00pm Wednesday, 30 November 2016, ITE 325b, UMBC
Knowledge graphs are structured representations of facts in which nodes are real-world entities or events and edges are the relationships between them. Knowledge graphs can be constructed manually or automatically. Manual construction yields high-quality knowledge graphs but is expensive, time-consuming and not scalable. Hence, automatic information extraction techniques are used to build knowledge graphs at scale, but the extracted information can be of poor quality due to the presence of dubious facts.
An extracted fact is dubious if it is incorrect, inexact, or correct but lacking evidence. A fact might be dubious because of errors made by NLP extraction techniques, design flaws in a system’s internal components, the choice of learning technique (semi-supervised or unsupervised), relatively poor-quality heuristics, or the syntactic complexity of the underlying text. A preliminary analysis of several knowledge extraction systems (CMU’s NELL and JHU’s KELVIN) and observations from the literature suggest that dubious facts can be identified, diagnosed and managed. In this dissertation, I will explore approaches to identify and repair such dubious facts in a knowledge graph using several complementary techniques, including linguistic analysis, common-sense reasoning and entity linking.
Committee: Drs. Tim Finin (Chair), Anupam Joshi, Tim Oates, Paul McNamee (JHU), Partha Talukdar (IISc, India)
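One simple way to see how common-sense reasoning can flag a dubious extracted fact is to check each (subject, relation, object) triple against the relation's expected type signature. The sketch below is a toy illustration of that idea; the relation schema, the `is_dubious` function and the example entities are hypothetical, not taken from the dissertation.

```python
# Hypothetical domain/range constraints: relation -> (subject type, object type).
DOMAIN_RANGE = {
    "bornIn":    ("Person", "Place"),
    "capitalOf": ("Place", "Place"),
}

def is_dubious(fact, entity_types):
    """Flag a fact (s, r, o) as dubious when the relation is unknown or
    its type signature is violated by the linked entity types."""
    s, r, o = fact
    if r not in DOMAIN_RANGE:
        return True
    dom, rng = DOMAIN_RANGE[r]
    return entity_types.get(s) != dom or entity_types.get(o) != rng
```

For example, ("Obama", "bornIn", "Hawaii") passes the check, while the reversed triple ("Hawaii", "bornIn", "Obama") is flagged, since a Place cannot be born in a Person.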
November 28th, 2016
In this week’s ebiquity meeting, Muhammad Mahbubur Rahman will talk about his work on understanding large documents, such as business RFPs.
Large Document Understanding
Current language understanding approaches focus mostly on small documents, such as newswire articles, blog posts, product reviews and discussion forum entries. Understanding and extracting information from large documents such as legal documents, reports, business opportunities, proposals and technical manuals remains a challenging task, because such documents may be multi-themed, complex and cover diverse topics.
We aim to automatically identify and classify a document’s sections and subsections, infer their structure, and annotate them with semantic labels in order to understand the document’s semantic structure. Such structure understanding will benefit and inform a variety of applications, including information extraction and retrieval, document categorization and clustering, document summarization, fact and relation extraction, text analysis and question answering.
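To make the idea of annotating sections with semantic labels concrete, here is a minimal keyword-matching sketch. It is not the talk's actual method (which would involve learned models); the label set, keyword lists and `label_section` function are all illustrative assumptions.

```python
# Hypothetical mapping from semantic labels to heading keywords,
# loosely inspired by sections one might see in a business RFP.
SEMANTIC_LABELS = {
    "introduction": ("intro", "overview", "background"),
    "requirements": ("requirement", "scope", "deliverable"),
    "evaluation":   ("evaluation", "criteria", "scoring"),
}

def label_section(heading):
    """Return a coarse semantic label for a section heading, or 'other'
    when no keyword matches."""
    h = heading.lower()
    for label, keywords in SEMANTIC_LABELS.items():
        if any(k in h for k in keywords):
            return label
    return "other"
```

A real system would replace the keyword lists with features learned from annotated documents, but the output interface (heading in, semantic label out) would look much the same.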
November 26th, 2016
Cognitive Analytics Framework to Secure Internet of Things
1:00-3:30pm, Monday, 28 November 2016, ITE 325b
Recent years have seen the rapid growth and widespread adoption of the Internet of Things (IoT) in a wide range of domains, including smart homes, healthcare, automotive, smart farming and smart grids. The IoT ecosystem consists of devices such as sensors, actuators and control systems connected over heterogeneous networks. The connected devices can come from different vendors and differ in power requirements, processing capabilities and other respects; many security features are simply not implemented on devices with limited processing power, and the level of security practiced during development can also vary. The lack of over-the-air firmware updates poses a serious security threat given the devices’ long-term deployment, and device malfunction is yet another threat to consider. Hence, it is imperative to have an external entity that monitors the ecosystem and detects attacks and anomalies.
In this thesis, we propose a security framework for IoT systems using cognitive techniques. While anomaly detection has been employed in various domains, challenges such as the need for an online approach, resource constraints, heterogeneity and distributed data collection are unique to IoT systems and their predecessors, such as wireless sensor networks. Our framework will have an underlying knowledge base holding domain-specific information, a hybrid context generation module that generates complex contexts, and a fast reasoning engine that performs logical reasoning to detect anomalous activities. When raw sensor data arrives, the hybrid context generation module queries the knowledge base and generates simple local contexts using various statistical and machine learning models. The inference engine then infers global complex contexts and detects anomalous activities using knowledge from streaming facts and domain-specific rules encoded in an ontology we will create. We will evaluate our techniques by realizing and validating them in the vehicular domain.
Committee: Drs. Anupam Joshi (Chair), Tim Finin, Nilanjan Banerjee, Yelena Yesha, Wenjia Li (NYIT) and Filip Perich (Google)
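The two-stage pipeline described above — statistical local contexts feeding a rule-based reasoner — can be sketched in a few lines. This is a simplified stand-in, not the proposed framework: the z-score test, the rule format and the vehicular example values are all assumptions for illustration.

```python
import statistics

def local_context(readings, threshold=3.0):
    """Generate a simple local context for the newest reading: flag it
    as anomalous if it lies more than `threshold` standard deviations
    from the mean of the preceding window (a statistical stand-in for
    the hybrid context generation module)."""
    mean = statistics.mean(readings[:-1])
    stdev = statistics.pstdev(readings[:-1]) or 1.0
    z = abs(readings[-1] - mean) / stdev
    return "anomalous" if z > threshold else "normal"

def reason(contexts, rules):
    """Infer global contexts by evaluating domain rules, where each rule
    is a predicate over the dictionary of local contexts (a stand-in
    for reasoning over ontology-encoded rules)."""
    return [name for name, rule in rules.items() if rule(contexts)]
```

For example, a vehicular rule might fire when the speed sensor's local context is anomalous while the brake sensor's is normal, suggesting a spoofed speed reading rather than a genuine maneuver.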
November 22nd, 2016
In this week’s meeting, Ankur Padia will talk about his work on the problem of identifying and managing ‘dubious facts’ extracted from text and added to a knowledge graph.
Dealing with Dubious Facts in Knowledge Graphs
Knowledge graphs represent real-world facts and events with entities as nodes and relations as labeled edges. Generally, a knowledge graph is constructed automatically by extracting facts from a text corpus using information extraction (IE) techniques. Such IE techniques are scalable but often extract low-quality (or dubious) facts due to errors caused by NLP libraries, the internal components of an extraction system, the choice of learning techniques, heuristics, or the syntactic complexity of the underlying text. We wish to explore techniques to process such dubious facts and improve the quality of a knowledge graph.
November 19th, 2016
A common Big Data problem is the need to integrate large temporal data sets from various data sources into one comprehensive structure. The ability to correlate evolving facts across data sources can be especially useful in supporting application functions such as inference and influence identification. As a real-world application, we use climate change publications from the Intergovernmental Panel on Climate Change (IPCC), which publishes climate change assessment reports every five years and now has over 25 years of published content. These reports often reference thousands of research papers. We use dynamic topic modeling as a basis for combining the report and citation domains into one structure, allowing us to correlate documents between the two domains and understand how the research has influenced the reports and how this influence has changed over time. In this use case, the report topic model covered 410 documents with a vocabulary of 5,911 terms, while the citation topic model covered nearly 200,000 research papers with a vocabulary of 25,154 terms.
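Once both domains are modeled as topic distributions, correlating a report document with the papers it most resembles reduces to a similarity search over topic-proportion vectors. The sketch below illustrates that step with cosine similarity; the `correlate` function, document IDs and three-topic vectors are hypothetical, and the actual work uses dynamic topic models rather than fixed vectors.

```python
import math

def cosine(p, q):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def correlate(report_topics, citation_topics, k=1):
    """For each report document, return the k cited papers whose topic
    distributions are most similar to the report's."""
    out = {}
    for rid, rvec in report_topics.items():
        ranked = sorted(citation_topics,
                        key=lambda cid: cosine(rvec, citation_topics[cid]),
                        reverse=True)
        out[rid] = ranked[:k]
    return out
```

In the dynamic setting, the same matching can be repeated per time slice, which is what makes it possible to track how a research topic's influence on the reports changes over time.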
November 8th, 2016
In this week’s ebiquity meeting (11:30 8 Nov. 2016) Prajit Das will present his work on capturing policies for fine-grained access control on mobile devices.
As of 2016, there are more mobile devices than humans on earth. Today, mobile devices are a critical part of our lives and often hold sensitive corporate and personal data. As a result, they are a lucrative target for attackers, and managing data privacy and security on mobile devices has become a vital issue. Existing access control mechanisms in most devices are restrictive and inadequate: they do not take into account the context of a device and its user when making decisions. In many cases, the access granted to a subject should change based on the context of the device, and such fine-grained, context-sensitive access control policies must also be personalized. In this paper, we present Mithril, a system that uses policies represented with Semantic Web technologies and captured through user feedback to handle access control on mobile devices. We present an iterative feedback process to capture user-specific policies, along with a policy violation metric that lets us decide when the capture process is complete.
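A natural way to read the violation metric is as the fraction of user feedback instances on which the currently captured policy disagrees with the user; iterative capture can stop once this rate falls below a threshold. The sketch below illustrates that reading; the `violation_rate` function, the context attributes and the example policy are hypothetical and not Mithril's actual representation (which uses Semantic Web technologies).

```python
def violation_rate(policy, feedback):
    """Fraction of feedback instances where the policy's decision
    disagrees with the decision the user said should apply.

    policy:   callable mapping a context dict to "allow" or "deny"
    feedback: list of (context, expected_decision) pairs
    """
    wrong = sum(1 for ctx, expected in feedback if policy(ctx) != expected)
    return wrong / len(feedback)
```

For example, a policy that denies the camera at work but allows everything else would score 1/3 against feedback saying email should also be denied at work, signaling that another refinement round is needed.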
November 6th, 2016
Multi-relational data, such as knowledge graphs, are generated from multiple data sources by extracting entities and their relationships. We often want to include inferred, implicit or likely relationships that are not explicitly stated, which can be viewed as link prediction in a graph. Tensor decomposition models have been shown to produce state-of-the-art results on link prediction tasks. We describe a simple but novel extension to an existing tensor decomposition model that predicts missing links using similarity among tensor slices, in contrast to existing models, which assume each slice contributes equally to link prediction. In an evaluation on several datasets, our extended model outperforms both the original tensor decomposition and its non-negative variant.
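The slice-similarity idea can be illustrated without the factorization machinery: treat each relation as a frontal slice (an adjacency matrix) of a 3-way tensor, and score a candidate link in one relation by the similarity-weighted evidence from the same cell in the other slices. This is only a simplified illustration of the weighting intuition, not the actual decomposition model; `slice_similarity` and `predict` are hypothetical names.

```python
import math

def slice_similarity(a, b):
    """Cosine similarity between two relation slices (adjacency
    matrices) flattened to vectors."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    dot = sum(x * y for x, y in zip(fa, fb))
    na = math.sqrt(sum(x * x for x in fa))
    nb = math.sqrt(sum(y * y for y in fb))
    return dot / (na * nb) if na and nb else 0.0

def predict(tensor, r, i, j):
    """Score a candidate link (i, j) in relation r as the average of
    the same cell in the other slices, weighted by each slice's
    similarity to slice r."""
    weights = [(slice_similarity(tensor[r], tensor[s]), s)
               for s in range(len(tensor)) if s != r]
    total = sum(w for w, _ in weights)
    if not total:
        return 0.0
    return sum(w * tensor[s][i][j] for w, s in weights) / total
```

The real model applies the same intuition inside the decomposition, so that similar slices share latent structure instead of each slice being fit independently.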