paper: Temporal Understanding of Cybersecurity Threats

May 28th, 2020

Temporal Understanding of Cybersecurity Threats


Jennifer Sleeman, Tim Finin, and Milton Halem, Temporal Understanding of Cybersecurity Threats, IEEE International Conference on Big Data Security on Cloud, May 2020.

As cybersecurity-related threats continue to increase, understanding how the field is changing over time can give insight into combating new threats and understanding historical events. We show how to apply dynamic topic models to a set of cybersecurity documents to understand how the concepts found in them are changing over time. We correlate two different data sets: the first relates to specific exploits and the second to cybersecurity research. We use Wikipedia concepts to provide a basis for concept phrase extraction and show how using concepts to provide context improves the quality of the topic model. We represent the results of the dynamic topic model as a knowledge graph that could be used for inference or information discovery.
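
As an illustration of the general technique (not the paper's exact pipeline), here is a minimal Python sketch of fitting a dynamic topic model over time-sliced documents with gensim's LdaSeqModel; the documents and slice sizes are invented placeholders.

    # A minimal dynamic topic model sketch; corpus and slices are illustrative.
    from gensim.corpora import Dictionary
    from gensim.models import LdaSeqModel

    # Hypothetical pre-tokenized documents, ordered chronologically.
    docs = [["buffer", "overflow", "exploit", "patch"],
            ["phishing", "credential", "theft", "email"],
            ["ransomware", "bitcoin", "encryption", "payload"],
            ["ransomware", "worm", "smb", "exploit"]]

    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]

    # time_slice counts documents per period, e.g. two per year for two years.
    model = LdaSeqModel(corpus=corpus, id2word=dictionary,
                        time_slice=[2, 2], num_topics=2)

    # Inspect how one topic's top terms drift across the two periods.
    for t in range(2):
        print(model.print_topic(topic=0, time=t, top_terms=5))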


Defense: Taneeya Satyapanich, Modeling and Extracting Information about Cybersecurity Events from Text

November 14th, 2019

Ph.D. Dissertation Defense

Modeling and Extracting Information about Cybersecurity Events from Text

Taneeya Satyapanich

9:30-11:30 Monday, 18 November 2019, ITE 346

People now rely on the Internet to carry out many of their daily activities, such as banking, ordering food, and socializing with their family and friends. This technology makes our lives easier, but it also comes with many problems, including cybercrime, stolen data, and identity theft. With the large and increasing number of transactions done every day, the frequency of cybercrime events is also growing. Since the number of security-related events is too high for manual review and monitoring, we need to train machines to detect and gather data about potential cyber threats. To support machines that can identify and understand threats, we need standard models to store cybersecurity information and information extraction systems that can populate those models with data from text.

This dissertation makes two significant contributions. First, we defined a rich cybersecurity event schema and annotated a news corpus following it. Our schema consists of event type definitions, semantic roles, and event arguments. Second, we present CASIE, a cybersecurity event extraction system. CASIE can detect cybersecurity events, identify event participants and their roles, and specify realis values. It also groups event mentions that are coreferent. CASIE produces its output in an easy-to-use format as JSON objects.
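
To make the output format concrete, here is a hypothetical sketch in Python of the kind of JSON event frame such a system might emit; the field names are illustrative, not CASIE's actual schema.

    # Hypothetical event frame; field names are illustrative, not CASIE's schema.
    import json

    event = {
        "event_type": "Databreach",
        "realis": "Actual",              # e.g. actual vs. generic mention
        "trigger": "stole",
        "arguments": [
            {"role": "Attacker", "text": "the hackers"},
            {"role": "Compromised-Data", "text": "customer records"},
            {"role": "Victim", "text": "the retailer"},
        ],
        "coreference_cluster": 3,        # id grouping coreferent mentions
    }
    print(json.dumps(event, indent=2))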

We believe this dissertation will be useful for cybersecurity management in the future: it can quickly extract cybersecurity event information from unstructured text and fill in event frames, helping us keep pace with the many cybersecurity events that happen every day.

Committee: Drs. Tim Finin (chair), Anupam Joshi, Tim Oates, Karuna Pande Joshi, Francis Ferraro


TALK: Real-time knowledge extraction from short semi-structured documents

November 3rd, 2019

A semantically rich framework to enable real-time knowledge extraction from short length semi-structured documents

Lavanya Elluri

10:30-11:30 Monday, 4 November 2019, ITE346

Knowledge is currently maintained as a large volume of unstructured text data in books, laws, regulations and policies, news and social media, academic and scientific reports, conversation and correspondence, etc. Most of these text documents are not machine-processable, so it is hard to find relevant information in them quickly. Extracting and categorizing knowledge from these numerous text stores requires significant manual effort and time. A critical open challenge that we propose to address is automated incremental text classification and identifying context in small documents. Our aim is to develop a semantically rich framework, including algorithms that extract and classify the context of text in real time, to help users who update their policies regularly and organizations that submit proposals. We will use techniques from deep learning, the semantic web, and natural language processing to build this framework. Our objectives include representing knowledge in cloud compliance and legal texts to create and populate a knowledge graph based on data protection regulations. Additionally, we will correlate rules implemented in a referencing document with the rules in the original policies to determine context similarity.
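
As a sketch of what incremental text classification can look like in practice (a simple baseline, not the proposed framework), the following Python uses scikit-learn's out-of-core pattern; the labels and documents are invented.

    # Online text classification baseline; labels and texts are illustrative.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    classes = ["data-protection", "procurement"]   # hypothetical context labels
    vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
    clf = SGDClassifier(loss="log_loss")

    # Each batch of newly arriving policy text updates the model in place.
    batch_docs = ["Controllers must report breaches within 72 hours.",
                  "Proposals must include a detailed cost volume."]
    batch_labels = ["data-protection", "procurement"]
    clf.partial_fit(vectorizer.transform(batch_docs), batch_labels,
                    classes=classes)

    print(clf.predict(vectorizer.transform(
        ["Notify the supervisory authority without undue delay."])))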


TALK: Automated Data Augmentation via Wikidata Relationships

October 20th, 2019

Automated Data Augmentation via Wikidata Relationships

Oyesh Singh, UMBC
10:30-11:30 Monday, 21 October 2019, ITE 346

With the increase in complexity of machine learning models, there is more need for data than ever. To fill this gap in annotated data, we look to the ocean of free data present in Wikipedia and other Wikimedia resources. Wikipedia has an enormous amount of data in many languages, along with the knowledge graph defined in Wikidata. In this presentation, I will explain how we used Wikipedia and Wikidata to boost the performance of BERT models for named entity recognition.
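
For flavor, here is a minimal Python sketch of one fine-tuning step for BERT-based NER with Hugging Face transformers, assuming silver labels derived from Wikidata links; the tag set and example sentence are invented.

    # One fine-tuning step on a silver-labeled sentence; tags are illustrative.
    import torch
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    tags = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]
    tok = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModelForTokenClassification.from_pretrained(
        "bert-base-cased", num_labels=len(tags))

    # Hypothetical sentence whose entity spans were inferred from Wikidata items.
    words = ["Grace", "Hopper", "visited", "UMBC"]
    word_tags = ["B-PER", "I-PER", "O", "B-ORG"]

    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    # Align word-level tags to subwords; special tokens get -100 (ignored).
    tag_ids = [-100 if i is None else tags.index(word_tags[i])
               for i in enc.word_ids()]
    loss = model(**enc, labels=torch.tensor([tag_ids])).loss
    loss.backward()   # one gradient step of the usual fine-tuning loop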


AAAI Symposium on Privacy-Enhancing AI and HLT Technologies

July 31st, 2018

PAL: Privacy-Enhancing AI and Language Technologies

AAAI Spring Symposium
25-27 March 2019, Stanford University

This symposium will bring together researchers in privacy and researchers in either artificial intelligence (AI) or human language technologies (HLTs), so that we may collectively assess the state of the art in this growing intersection of interests. Privacy remains an evolving and nuanced concern of computer users, as new technologies that use the web, smartphones, and the internet of things (IoT) collect a myriad of personal information. Rather than viewing AI and HLT as problems for privacy, the goal of this symposium is to “flip the script” and explore how AI and HLT can help meet users’ desires for privacy when interacting with computers.

It will focus on two loosely-defined research questions:

  • How can AI and HLT preserve or protect privacy in challenging situations?
  • How can AI and HLT help interested parties (e.g., computer users, companies, regulatory agencies) understand privacy in the status quo and what people want?

The symposium will consist of invited speakers, oral presentations of submitted papers, a poster session, and panel discussions. This event is a successor to Privacy and Language Technologies (“PLT”), a 2016 AAAI Fall Symposium. Submissions are due 2 November 2018.  For more information, see the symposium site.


paper: Ontology-Grounded Topic Modeling for Climate Science Research

July 24th, 2018


Ontology-Grounded Topic Modeling for Climate Science Research


Jennifer Sleeman, Milton Halem and Tim Finin, Ontology-Grounded Topic Modeling for Climate Science Research, Semantic Web for Social Good Workshop, Int. Semantic Web Conf., Monterey, October 2018 (selected as best paper); to appear in Emerging Topics in Semantic Technologies, E. Demidova, A.J. Zaveri and E. Simperl (Eds.), AKA Verlag Berlin, 2018.


In scientific disciplines where research findings have a strong impact on society, reducing the amount of time it takes to understand, synthesize and exploit the research is invaluable. Topic modeling is an effective technique for summarizing a collection of documents to find the main themes among them and to classify other documents that have a similar mixture of co-occurring words. We show how grounding a topic model with an ontology, extracted from a glossary of important domain phrases, improves the topics generated and makes them easier to understand. We apply and evaluate this method in the climate science domain. The result is improved topics that support faster research understanding, discovery of social networks among researchers, and automatic ontology generation.
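
One simple way to realize this kind of grounding, sketched below in Python with gensim and an invented glossary, is to merge multi-word glossary phrases into single tokens before fitting the topic model.

    # Merge glossary phrases into single tokens before LDA; data is illustrative.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    glossary = {"sea level rise", "carbon cycle", "ice sheet"}

    def ground(text):
        t = text.lower()
        for phrase in glossary:
            t = t.replace(phrase, phrase.replace(" ", "_"))
        return t.split()

    docs = [ground("Sea level rise accelerates as the ice sheet melts"),
            ground("The carbon cycle couples oceans and the atmosphere")]

    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)
    print(lda.print_topics(num_words=4))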


paper: Understanding and representing the semantics of large structured documents

July 23rd, 2018

Understanding and representing the semantics of large structured documents


Muhammad Mahbubur Rahman and Tim Finin, Understanding and representing the semantics of large structured documents, Proceedings of the 4th Workshop on Semantic Deep Learning (SemDeep-4, ISWC), 8 October 2018.


Understanding large, structured documents like scholarly articles, requests for proposals or business reports is a complex and difficult task. It involves discovering a document's overall purpose and subject(s), understanding the function and meaning of its sections and subsections, and extracting low-level entities and facts about them. In this research, we present a deep learning-based document ontology to capture the general-purpose semantic structure and domain-specific semantic concepts from a large number of academic articles and business documents. The ontology can describe different functional parts of a document, which can be used to enhance semantic indexing for a better understanding by human beings and machines. We evaluate our models through extensive experiments on datasets of scholarly articles from arXiv and request for proposal documents.
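
As a toy illustration of classifying sections into functional types (a linear baseline, not the paper's deep models), with invented labels and snippets:

    # Classify sections into functional types; labels and snippets are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sections = ["We survey prior work on document layout analysis ...",
                "We thank the reviewers for their helpful comments ...",
                "Table 3 reports accuracy on the arXiv test split ..."]
    types = ["related-work", "acknowledgments", "results"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(sections, types)
    print(clf.predict(["Figure 2 compares F1 across both datasets ..."]))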


MS defense: Open Information Extraction for Code-Mixed Hindi-English Social Media Data

July 1st, 2018

MS Thesis Defense

Open Information Extraction for Code-Mixed Hindi-English Social Media Data

Mayur Pate

1:00pm Monday, 2 July 2018, ITE 325b, UMBC

Open domain relation extraction (Angeli, Premkumar, & Manning 2015) is the process of finding relation triples. While a number of systems are available for open information extraction (Open IE) in a single language, traditional Open IE systems are not well suited to content that contains multiple languages in a single utterance. In this thesis, we extended an existing code-mixed corpus (Das, Jamatia, & Gambäck 2015) by finding and annotating relation triples in Open IE fashion. Using this newly annotated corpus, we experimented with a seq2seq neural network (Zhang, Duh, & Van Durme 2017) for finding relation triples. As prerequisites for the relation extraction pipeline, we developed a part-of-speech tagger and a named entity and predicate recognizer for code-mixed content, experimenting with approaches such as conditional random fields (CRFs), averaged perceptrons and deep neural networks. To the best of our knowledge, this relation extraction system is the first for any code-mixed natural language. We achieved promising results for all of the components, and they could be improved in the future with more code-mixed data.
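
To illustrate the CRF component, here is a minimal Python sketch with sklearn-crfsuite; the feature set and the single code-mixed training sentence (with hypothetical tags) are invented for illustration.

    # Tiny CRF POS-tagging sketch; features and tags are illustrative.
    import sklearn_crfsuite

    def features(tokens, i):
        return {"word": tokens[i].lower(),
                "suffix2": tokens[i][-2:],
                "is_first": i == 0}

    # One code-mixed Hindi-English sentence with hypothetical POS tags.
    sent = ["mujhe", "pizza", "order", "karna", "hai"]
    tags = ["PRON", "NOUN", "VERB", "VERB", "AUX"]

    X = [[features(sent, i) for i in range(len(sent))]]
    y = [tags]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))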

Committee: Drs. Frank Ferraro (Chair), Tim Finin, Hamed Pirsiavash, Bryan Wilkinson


PhD defense: Understanding the Logical and Semantic Structure of Large Documents

May 29th, 2018

Dissertation Defense

Understanding the Logical and Semantic Structure of Large Documents

Muhammad Mahbubur Rahman

11:00am Wednesday, 30 May 2018, ITE 325b

Understanding and extracting information from large documents, such as business opportunities, academic articles, medical documents and technical reports, poses challenges not present in short documents, because large documents may be multi-themed, complex, noisy and cover diverse topics. This dissertation describes a framework that can analyze large documents and help people and computer systems locate desired information in them. It aims to automatically identify and classify different sections of documents and understand their purpose within the document. A key contribution of this research is modeling and extracting the logical and semantic structure of electronic documents using deep learning techniques. The effectiveness and robustness of the framework are evaluated through extensive experiments on arXiv and requests for proposals datasets.

Committee Members: Drs. Tim Finin (Chair), Anupam Joshi, Tim Oates, Cynthia Matuszek, James Mayfield (JHU)


Preventing Poisoning Attacks on Threat Intelligence Systems

April 22nd, 2018

Preventing Poisoning Attacks on Threat Intelligence Systems

Nitika Khurana, Graduate Student, UMBC

11:00-12:00 Monday, 23 April 2018, ITE 346, UMBC

As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent and use this information. The credibility of this information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources; and in certain situations, pose a threat to human life. In this paper, we determine the credibility of Reddit posts by estimating their reputation score to ensure the validity of information ingested by AI systems. We also maintain the provenance of the output generated to ensure information and source reliability and identify the background data that caused an attack. We demonstrate our approach in the cybersecurity domain, where security analysts utilize these systems to determine possible threats by analyzing the data scattered on social media websites, forums, blogs, etc.
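
As a loose illustration only (the talk's actual approach may differ), a reputation score could combine simple author and content signals, as in this hypothetical Python heuristic.

    # Hypothetical credibility heuristic; features and weights are invented.
    def reputation_score(post):
        """Combine author and content signals into a 0-1 score."""
        signals = {"karma": min(post["author_karma"] / 10_000, 1.0),
                   "age": min(post["account_age_years"] / 5, 1.0),
                   "upvotes": post["upvote_ratio"]}
        weights = {"karma": 0.4, "age": 0.2, "upvotes": 0.4}
        return sum(weights[k] * v for k, v in signals.items())

    post = {"author_karma": 3500, "account_age_years": 2, "upvote_ratio": 0.9}
    print(round(reputation_score(post), 2))   # 0.58 for this example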


UMBC at SemEval-2018 Task 8: Understanding Text about Malware

April 21st, 2018

UMBC at SemEval-2018 Task 8: Understanding Text about Malware


Ankur Padia, Arpita Roy, Taneeya Satyapanich, Francis Ferraro, Shimei Pan, Anupam Joshi and Tim Finin, UMBC at SemEval-2018 Task 8: Understanding Text about Malware, Int. Workshop on Semantic Evaluation (collocated with NAACL-HLT), New Orleans, LA, June 2018.


We describe the systems developed by the UMBC team for the 2018 SemEval Task 8, SecureNLP (Semantic Extraction from CybersecUrity REports using Natural Language Processing). We participated in three of the subtasks: (1) classifying sentences as being relevant or irrelevant to malware, (2) predicting token labels for sentences, and (4) predicting attribute labels from the Malware Attribute Enumeration and Characterization vocabulary for defining malware characteristics. We achieved F1 scores of 50.34/18.0 (dev/test) on subtask 1, 22.23 (test) on subtask 2, and 31.98 (test) on subtask 4. We also make our cybersecurity embeddings publicly available at https://bit.ly/cybr2vec.
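
Assuming the released embeddings are in the standard word2vec text format (an assumption, since the link is shortened), they could be loaded and queried with gensim as below; the file name is a placeholder.

    # Load and query word2vec-style vectors; the file name is a placeholder.
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format("cybersecurity.vec",
                                                binary=False)
    print(vectors.most_similar("malware", topn=5))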


Cognitively Rich Framework to Automate Extraction & Representation of Legal Knowledge

April 15th, 2018

Cognitively Rich Framework to Automate Extraction and Representation of Legal Knowledge

Srishty Saha, UMBC
11-12 Monday, 16 April 2018, ITE 346

With the explosive growth in cloud-based services, businesses are increasingly maintaining large datasets containing information about their consumers to provide a seamless user experience. To ensure the privacy and security of these datasets, regulatory bodies have specified rules and compliance policies that organizations must adhere to. These regulatory policies are currently available as text documents that are not machine-processable and so require extensive manual effort to monitor continuously to ensure data compliance. We have developed a cognitive framework to automatically parse and extract knowledge from legal documents and represent it using an ontology. The legal ontology captures key entities and their relations, the provenance of legal policy, and cross-referenced, semantically similar legal facts and rules. We have applied this framework to the United States government's Code of Federal Regulations (CFR), which includes facts and rules for individuals and organizations seeking to do business with the US Federal government.
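
For a flavor of the representation, here is a minimal rdflib sketch in Python that records one extracted rule with its provenance; the namespace, property names and CFR section id are hypothetical.

    # Represent one extracted legal rule as RDF; names are hypothetical.
    from rdflib import Graph, Literal, Namespace, RDF

    LEX = Namespace("http://example.org/legal#")
    g = Graph()

    rule = LEX["CFR-48-15.204"]            # hypothetical CFR section id
    g.add((rule, RDF.type, LEX.Rule))
    g.add((rule, LEX.obligates, LEX.Contractor))
    g.add((rule, LEX.hasProvenance,
           Literal("Code of Federal Regulations, Title 48")))

    for s, p, o in g:
        print(s, p, o)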