paper: Ontology-Grounded Topic Modeling for Climate Science Research

July 24th, 2018

 

Ontology-Grounded Topic Modeling for Climate Science Research

 

Jennifer Sleeman, Milton Halem and Tim Finin, Ontology-Grounded Topic Modeling for Climate Science Research, Semantic Web for Social Good Workshop, Int. Semantic Web Conf., Monterey, October 2018 (selected as best paper); to appear in Emerging Topics in Semantic Technologies, E. Demidova, A.J. Zaveri, E. Simperl (Eds.), AKA Verlag Berlin, 2018.

 

In scientific disciplines where research findings have a strong impact on society, reducing the amount of time it takes to understand, synthesize and exploit the research is invaluable. Topic modeling is an effective technique for summarizing a collection of documents to find the main themes among them and to classify other documents that have a similar mixture of co-occurring words. We show how grounding a topic model with an ontology, extracted from a glossary of important domain phrases, improves the topics generated and makes them easier to understand. We apply and evaluate this method in the climate science domain. The result improves the topics generated and supports faster research understanding, discovery of social networks among researchers, and automatic ontology generation.


paper: Understanding and representing the semantics of large structured documents

July 23rd, 2018

Understanding and representing the semantics of large structured documents

 

Muhammad Mahbubur Rahman and Tim Finin, Understanding and representing the semantics of large structured documents, Proceedings of the 4th Workshop on Semantic Deep Learning (SemDeep-4, ISWC), 8 October 2018.

 

Understanding large, structured documents like scholarly articles, requests for proposals or business reports is a complex and difficult task. It involves discovering a document’s overall purpose and subject(s), understanding the function and meaning of its sections and subsections, and extracting low-level entities and facts about them. In this research, we present a deep-learning-based document ontology to capture the general-purpose semantic structure and domain-specific semantic concepts from a large number of academic articles and business documents. The ontology is able to describe different functional parts of a document, which can be used to enhance semantic indexing for a better understanding by human beings and machines. We evaluate our models through extensive experiments on datasets of scholarly articles from arXiv and Request for Proposal documents.


MS defense: Open Information Extraction for Code-Mixed Hindi-English Social Media Data

July 1st, 2018

MS Thesis Defense

Open Information Extraction for Code-Mixed Hindi-English Social Media Data

Mayur Patel

1:00pm Monday, 2 July 2018, ITE 325b, UMBC

Open domain relation extraction (Angeli, Premkumar, & Manning 2015) is the process of finding relation triples. While a number of systems are available for open information extraction (Open IE) in a single language, traditional Open IE systems are not well suited to content that mixes multiple languages in a single utterance. In this thesis, we have extended an existing code-mixed corpus (Das, Jamatia, & Gambäck 2015) by finding and annotating relation triples in Open IE fashion. Using this newly annotated corpus, we have experimented with a seq2seq neural network (Zhang, Duh, & Van Durme 2017) for finding relation triples. As prerequisites for the relation extraction pipeline, we have developed a part-of-speech tagger and a named entity and predicate recognizer for code-mixed content. We have experimented with approaches including conditional random fields (CRFs), the averaged perceptron, and deep neural networks. To the best of our knowledge, this relation extraction system is the first for any code-mixed natural language. We have achieved promising results for all of the components, which could be improved in the future with more code-mixed data.

Committee: Drs. Frank Ferraro (Chair), Tim Finin, Hamed Pirsiavash, Bryan Wilkinson


PhD defense: Understanding the Logical and Semantic Structure of Large Documents

May 29th, 2018

Dissertation Defense

Understanding the Logical and Semantic Structure of Large Documents

Muhammad Mahbubur Rahman

11:00am Wednesday, 30 May 2018, ITE 325b

Understanding and extracting information from large documents, such as business opportunities, academic articles, medical documents and technical reports, poses challenges not present in short documents, because large documents may be multi-themed, complex, noisy and cover diverse topics. This dissertation describes a framework that can analyze large documents and help people and computer systems locate desired information in them. It aims to automatically identify and classify different sections of documents and understand their purpose within the document. A key contribution of this research is modeling and extracting the logical and semantic structure of electronic documents using deep learning techniques. The effectiveness and robustness of the framework is evaluated through extensive experiments on arXiv and requests-for-proposals datasets.

Committee Members: Drs. Tim Finin (Chair), Anupam Joshi, Tim Oates, Cynthia Matuszek, James Mayfield (JHU)


Preventing Poisoning Attacks on Threat Intelligence Systems

April 22nd, 2018

Preventing Poisoning Attacks on Threat Intelligence Systems

Nitika Khurana, Graduate Student, UMBC

11:00-12:00 Monday, 23 April 2018, ITE346, UMBC

As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent and use this information. The credibility of this information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources; and in certain situations, pose a threat to human life. In this paper, we determine the credibility of Reddit posts by estimating their reputation score to ensure the validity of information ingested by AI systems. We also maintain the provenance of the output generated to ensure information and source reliability and identify the background data that caused an attack. We demonstrate our approach in the cybersecurity domain, where security analysts utilize these systems to determine possible threats by analyzing the data scattered on social media websites, forums, blogs, etc.


UMBC at SemEval-2018 Task 8: Understanding Text about Malware

April 21st, 2018

UMBC at SemEval-2018 Task 8: Understanding Text about Malware

Ankur Padia, Arpita Roy, Taneeya Satyapanich, Francis Ferraro, Shimei Pan, Anupam Joshi and Tim Finin, UMBC at SemEval-2018 Task 8: Understanding Text about Malware, Int. Workshop on Semantic Evaluation (collocated with NAACL-HLT), New Orleans, LA, June 2018.

 

We describe the systems developed by the UMBC team for 2018 SemEval Task 8, SecureNLP (Semantic Extraction from CybersecUrity REports using Natural Language Processing). We participated in three of its sub-tasks: (1) classifying sentences as relevant or irrelevant to malware, (2) predicting token labels for sentences, and (4) predicting attribute labels from the Malware Attribute Enumeration and Characterization vocabulary for defining malware characteristics. We achieved F1 scores of 50.34/18.0 (dev/test) for Sub-task 1, 22.23 (test) for Sub-task 2, and 31.98 (test) for Sub-task 4. We also make our cybersecurity embeddings publicly available at https://bit.ly/cybr2vec.


Cognitively Rich Framework to Automate Extraction & Representation of Legal Knowledge

April 15th, 2018

Cognitively Rich Framework to Automate Extraction and Representation of Legal Knowledge

Srishty Saha, UMBC
11-12 Monday, 16 April 2018, ITE 346

With the explosive growth in cloud-based services, businesses are increasingly maintaining large datasets containing information about their consumers to provide a seamless user experience. To ensure the privacy and security of these datasets, regulatory bodies have specified rules and compliance policies that must be adhered to by organizations. These regulatory policies are currently available as text documents that are not machine processable and so require extensive manual effort to monitor continuously to ensure data compliance. We have developed a cognitive framework to automatically parse and extract knowledge from legal documents and represent it using an ontology. The legal ontology captures key entities and their relations, the provenance of legal policies, and cross-references between semantically similar legal facts and rules. We have applied this framework to the United States government’s Code of Federal Regulations (CFR), which includes facts and rules for individuals and organizations seeking to do business with the US Federal government.


2018 Mid-Atlantic Student Colloquium on Speech, Language and Learning

April 11th, 2018

2018 Mid-Atlantic Student Colloquium on Speech, Language and Learning

The 2018 Mid-Atlantic Student Colloquium on Speech, Language and Learning (MASC-SLL) is a student-run, one-day event on speech, language & machine learning research to be held at the University of Maryland, Baltimore County  (UMBC) from 10:00am to 6:00pm on Saturday May 12.  There is no registration charge and lunch and refreshments will be provided.  Students, postdocs, faculty and researchers from universities & industry are invited to participate and network with other researchers working in related fields.

Students and postdocs are encouraged to submit abstracts describing ongoing, planned, or completed research projects, including previously published results and negative results. Research in any field applying computational methods to any aspect of human language, including speech and learning, from all areas of computer science, linguistics, engineering, neuroscience, information science, and related fields is welcome. Submissions and presentations must be made by students or postdocs. Accepted submissions will be presented as either posters or talks.

Important Dates are:

  • Submission deadline (abstracts): April 16
  • Decisions announced: April 21
  • Registration opens: April 10
  • Registration closes: May 6
  • Colloquium: May 12

AI for Cybersecurity: Intrusion Detection Using Neural Networks

March 25th, 2018

AI for Cybersecurity: Intrusion Detection Using Neural Networks

Sowmya Ramapatruni, UMBC

11:00-12:00 Monday 26 March, 2018, ITE346, UMBC

The constant growth in the use of computer networks has raised concerns about security and privacy. Intrusion attacks on computer networks are among the most common attacks on the internet today. Intrusion detection systems have long been considered essential for maintaining network security and have therefore been commonly adopted by network administrators. A possible disadvantage is that such systems are usually signature-based, which makes them strongly dependent on an up-to-date signature database and consequently ineffective against novel (unknown) attacks. In this study we analyze the use of machine learning in the development of intrusion detection systems.

The focus of this presentation is to analyze the various machine learning algorithms that can be used to classify network attacks. We will also analyze the common techniques used to build and fine-tune artificial neural networks for network attack classification and address the drawbacks of these systems. We will also examine the datasets and the information that is critical for classification. Understanding network packet data is essential for feature engineering, a necessary precursor for any machine learning system. Finally, we study the drawbacks of existing machine learning systems and discuss possible further study in this area.
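The classification step described above can be sketched in a few lines. This is a minimal illustration only, not the talk's actual setup: the "network flow" features, their names, and the two classes are synthetic and hypothetical.

```python
# Sketch: train a small neural network to separate synthetic benign vs. attack
# feature vectors (hypothetical features: duration, bytes sent, bytes received,
# packet rate).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
benign = rng.normal(loc=[1.0, 2.0, 2.0, 1.0], scale=0.3, size=(200, 4))
attack = rng.normal(loc=[0.2, 5.0, 0.1, 4.0], scale=0.3, size=(200, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = attack

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Real intrusion detection data is far noisier and more imbalanced than this toy example, which is why the feature engineering discussed in the talk matters so much.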


paper: Cleaning Noisy Knowledge Graphs

January 27th, 2018

Cleaning Noisy Knowledge Graphs

Ankur Padia, Cleaning Noisy Knowledge Graphs, Proceedings of the Doctoral Consortium at the 16th International Semantic Web Conference, October 2017.

My dissertation research is developing an approach to identify and explain errors in a knowledge graph constructed by extracting entities and relations from text. Information extraction systems can automatically construct knowledge graphs from a large collection of documents, which might be drawn from news articles, Web pages, social media posts or discussion forums. The language understanding task is challenging and current extraction systems introduce many kinds of errors. Previous work on improving the quality of knowledge graphs uses additional evidence from background knowledge bases or Web searches. Such approaches are difficult to apply when emerging entities are present and/or only one knowledge graph is available. To address this problem, I am using multiple complementary techniques including entity linking, common sense reasoning, and linguistic analysis.

 


Jennifer Sleeman receives AI for Earth grant from Microsoft

December 12th, 2017

Jennifer Sleeman receives AI for Earth grant from Microsoft

Visiting Assistant Professor Jennifer Sleeman (Ph.D. ’17) has been awarded a grant from Microsoft as part of its ‘AI for Earth’ program. Dr. Sleeman will use the grant to continue her research on developing algorithms to model how scientific disciplines such as climate science evolve and to predict future trends by analyzing the text of articles and reports and the papers they cite.

AI for Earth is a Microsoft program aimed at empowering people and organizations to solve global environmental challenges by increasing access to AI tools and educational opportunities, while accelerating innovation. Via the Azure for Research AI for Earth award program, Microsoft provides selected researchers and organizations access to its cloud and AI computing resources to accelerate, improve and expand work on climate change, agriculture, biodiversity and/or water challenges.

UMBC is among the first grant recipients of AI for Earth, which launched in July 2017. The grant process was competitive and selective, and the award recognizes the potential of the work and the power of AI to accelerate progress.

As part of her dissertation research, Dr. Sleeman developed algorithms using dynamic topic modeling to understand influence and predict future trends in a scientific discipline. She applied this to the field of climate change, using assessment reports of the Intergovernmental Panel on Climate Change (IPCC) and the papers they cite. Since 1990, an IPCC report has been published every five years that includes four separate volumes, each of which has many chapters. Each report cites tens of thousands of research papers, which comprise a correlated dataset of temporally grounded documents. Her custom dynamic topic modeling algorithm identified topics for both datasets and applied cross-domain analytics to identify the correlations between the IPCC chapters and their cited documents. The approach reveals both the influence of the cited research on the reports and how previous research citations have evolved over time.

Dr. Sleeman’s award is part of an inaugural set of 35 grants in more than ten countries for access to Microsoft Azure and AI technology platforms, services and training. In a post on Monday, AI for Earth can be a game-changer for our planet, Microsoft announced its intent to put $50 million over five years into the program, making grants and educational training possible at a much larger scale.

More information about AI for Earth can be found on the Microsoft AI for Earth website.


new paper: Discovering Scientific Influence using Cross-Domain Dynamic Topic Modeling

November 17th, 2017

Discovering Scientific Influence using Cross-Domain Dynamic Topic Modeling

Jennifer Sleeman, Milton Halem, Tim Finin and Mark Cane, Discovering Scientific Influence using Cross-Domain Dynamic Topic Modeling, International Conference on Big Data, IEEE, December 2017.

We describe an approach using dynamic topic modeling to model influence and predict future trends in a scientific discipline. Our study focuses on climate change and uses assessment reports of the Intergovernmental Panel on Climate Change (IPCC) and the papers they cite. Since 1990, an IPCC report has been published every five years that includes four separate volumes, each of which has many chapters. Each report cites tens of thousands of research papers, which comprise a correlated dataset of temporally grounded documents. We use a custom dynamic topic modeling algorithm to generate topics for both datasets and apply cross-domain analytics to identify the correlations between the IPCC chapters and their cited documents. The approach reveals both the influence of the cited research on the reports and how previous research citations have evolved over time. For the IPCC use case, the report topic model used 410 documents and a vocabulary of 5,911 terms, while the citations topic model was based on 200K research papers and a vocabulary of more than 25K terms. We show that our approach can predict the importance of its extracted topics on future IPCC assessments through the use of cross-domain correlations, Jensen-Shannon divergences and cluster analytics.
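One of the measures the abstract mentions, Jensen-Shannon divergence, quantifies how similar two topics are by comparing their word distributions. Here is a minimal self-contained sketch; the two topic distributions below are hypothetical, not taken from the paper's models.

```python
# Sketch: base-2 Jensen-Shannon divergence between two discrete distributions.
# JSD is symmetric, always finite, and bounded in [0, 1] (base 2).
import numpy as np

def jensen_shannon(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # normalize to probability distributions
    m = 0.5 * (p + q)                # mixture distribution

    def kl(a, b):
        # KL divergence, skipping zero-probability terms (0 * log 0 := 0)
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two hypothetical topic-word distributions over a shared 4-word vocabulary
report_topic   = [0.5, 0.3, 0.1, 0.1]
citation_topic = [0.4, 0.3, 0.2, 0.1]
print(jensen_shannon(report_topic, citation_topic))
```

A value near 0 indicates two nearly identical topics (e.g. a report topic and a citation topic about the same theme), while a value near 1 indicates topics with disjoint vocabularies.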