UMBC ebiquity
AI

Archive for the 'AI' Category

A hands-on introduction to TensorFlow and machine learning, 10am 3/28

March 18th, 2017, by Tim Finin, posted in events, Machine Learning, meetings

 

A Hands-on Introduction to TensorFlow and Machine Learning

Abhay Kashyap, UMBC ebiquity Lab

10:00-11:00am Tuesday, 28 March 2017, ITE 325b (moved from ITE 346)

As many of you know, TensorFlow is an open source machine learning library from Google that simplifies building and training deep neural networks and can take advantage of computers with GPUs. In this meeting, I will introduce some basic concepts of TensorFlow and of machine learning in general. This will be a hands-on tutorial in which we will sit and code up some basic examples in TensorFlow. Specifically, we will use TensorFlow to implement linear regression, softmax classifiers and feedforward neural networks (MLPs). You can find the Python notebooks here. If time permits, we will go over an implementation of the popular word2vec algorithm and introduce LSTMs for building language models.
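For a flavor of the first exercise, here is a minimal plain-Python sketch of what the linear regression example optimizes. The tutorial itself uses TensorFlow and the notebooks linked above; the data and hyperparameters below are only illustrative.

```python
# Toy linear regression by gradient descent: the optimization that the
# TensorFlow exercise expresses as a computation graph, written out by hand.
# The data and learning rate are illustrative, not from the tutorial.

def fit_linear(xs, ys, lr=0.5, steps=1000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        preds = [w * x + b for x in xs]
        dw = sum((p - y) * x for p, y, x in zip(preds, ys, xs)) * 2 / n
        db = sum(p - y for p, y in zip(preds, ys)) * 2 / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [i / 10 for i in range(10)]
ys = [2 * x + 1 for x in xs]   # true line: w=2, b=1
w, b = fit_linear(xs, ys)      # recovers w close to 2, b close to 1
```

In TensorFlow the same loop is expressed by declaring `w` and `b` as variables, defining the squared-error loss, and letting an optimizer compute the gradients automatically.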

What you need to know: Python and the basics of linear algebra and matrix operations. While it helps to know the basics of machine learning, no prior knowledge will be assumed, and there will be a gentle, high-level introduction to the algorithms we will implement.

What you need to bring: A laptop that has Python and pip installed. Having virtual environments set up on your computer is also a plus. (Warning: Windows-only users might be publicly shamed)

SemTk: The Semantics Toolkit from GE Global Research, 4/4

March 17th, 2017, by Tim Finin, posted in AI, KR, NLP, Ontologies, OWL, RDF, Semantic Web

The Semantics Toolkit

Paul Cuddihy and Justin McHugh
GE Global Research Center, Niskayuna, NY

10:00-11:00 Tuesday, 4 April 2017, ITE 346, UMBC

SemTk (Semantics Toolkit) is an open source technology stack built by GE scientists on top of W3C Semantic Web standards. It was originally conceived for data exploration and simplified query generation, and later expanded into a more general semantics abstraction platform. SemTk is made up of a Java API and microservices along with JavaScript front ends that cover drag-and-drop query generation, path finding, data ingestion and the beginnings of stored procedure support. In this talk we will give a tour of SemTk, discussing its architecture and direction, and demonstrate its features using the SPARQLGraph front end hosted at http://semtk.research.ge.com.

Paul Cuddihy is a senior computer scientist and software systems architect in AI and Learning Systems at the GE Global Research Center in Niskayuna, NY. He earned an M.S. in Computer Science from the Rochester Institute of Technology. The focus of his twenty-year career at GE Research has ranged from machine learning for medical imaging equipment diagnostics, monitoring and diagnostic techniques for commercial aircraft engines, and modeling techniques for monitoring seniors living independently in their own homes, to parallel execution of simulation and prediction tasks and big data ontologies. He is one of the creators of the open source software “Semantics Toolkit” (SemTk), which provides a simplified interface to the semantic tech stack, opening its use to a broader set of users through features such as drag-and-drop query generation and data ingestion. Paul holds over twenty U.S. patents.

Justin McHugh is a computer scientist and software systems architect in the AI and Learning Systems group at GE Global Research in Niskayuna, NY. Justin attended the State University of New York at Albany, where he earned an M.S. in computer science. He worked as a systems architect and programmer for large-scale reporting before moving into the research sector. In the six years since, he has worked on complex system integration, Big Data systems and knowledge representation and querying systems. Justin is one of the architects and creators of SemTk (the Semantics Toolkit), a toolkit aimed at making the power of the semantic web stack available to programmers, automation and subject matter experts without their having to be deeply invested in the workings of the Semantic Web.

new paper: App behavioral analysis using system calls

March 14th, 2017, by Tim Finin, posted in Datamining, Machine Learning, Mobile Computing, Security

Prajit Kumar Das, Anupam Joshi and Tim Finin, App behavioral analysis using system calls, MobiSec: Security, Privacy, and Digital Forensics of Mobile Systems and Networks, IEEE Conference on Computer Communications Workshops, May 2017.

System calls provide an interface to the services made available by an operating system. As a result, any functionality provided by a software application eventually reduces to a set of fixed system calls. Since system calls have been used in the literature to analyze program behavior, we assumed that analyzing the patterns in the calls made by a mobile application would provide insight into its behavior. In this paper, we present a preliminary study of 534 mobile applications and the system calls they make. Because mobile applications increasingly provide multiple functionalities, our study concluded that mapping system calls to the functional behavior of a mobile application is not straightforward. In our experiments, we used the Weka toolkit with manually annotated application behavior classes and system call features to show that such features achieve at best a mediocre F1-measure for app behavior classification, leading to the conclusion that system calls alone are not sufficient features for this task.
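To make the setup concrete, here is a toy sketch (not from the paper, which used Weka) of turning syscall traces into bag-of-calls features and computing the F1-measure; the traces, apps and counts are invented.

```python
from collections import Counter

# Hypothetical syscall traces for two apps (names and calls are invented).
trace_a = ["open", "read", "read", "sendto", "close"]
trace_b = ["open", "write", "ioctl", "ioctl", "close"]

# Bag-of-syscalls features: each app becomes a frequency vector over calls.
features_a = Counter(trace_a)
features_b = Counter(trace_b)

def f1(tp, fp, fn):
    """F1-measure, the metric the paper reports for behavior classification."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. a classifier with 3 true positives, 1 false positive, 2 false negatives
score = f1(3, 1, 2)   # precision 0.75, recall 0.6 -> F1 = 2/3
```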

SADL: Semantic Application Design Language

March 4th, 2017, by Tim Finin, posted in KR, Ontologies, OWL, RDF, Semantic Web

SADL – Semantic Application Design Language

Dr. Andrew W. Crapo
GE Global Research

 10:00 Tuesday, 7 March 2017

The Web Ontology Language (OWL) has gained considerable acceptance over the past decade. Building on prior work in Description Logics, OWL has sufficient expressivity to be useful in many modeling applications. However, its various serializations do not seem intuitive to subject matter experts in many domains of interest to GE. Consequently, we have developed a controlled-English language and development environment that attempts to make OWL plus rules more accessible to those with knowledge to share but limited interest in studying formal representations. The result is the Semantic Application Design Language (SADL). This talk will review the foundational underpinnings of OWL and introduce the SADL constructs meant to capture, validate, and maintain semantic models over their lifecycle.

 

Dr. Crapo has been part of GE’s Global Research staff for over 35 years. As an Information Scientist he has built performance and diagnostic models of mechanical, chemical, and electrical systems, and has specialized in human-computer interfaces, decision support systems, machine reasoning and learning, and semantic representation and modeling. His work has included a graphical expert system language (GEN-X), a graphical environment for procedural programming (Fuselet Development Environment), and a semantic-model-driven user-interface for decision support systems (ACUITy). Most recently Andy has been active in developing the Semantic Application Design Language (SADL), enabling GE to leverage worldwide advances and emerging standards in semantic technology and bring them to bear on diverse problems from equipment maintenance optimization to information security.

Large Scale Cross Domain Temporal Topic Modeling for Climate Change Research

December 23rd, 2016, by Tim Finin, posted in Big data, Machine Learning, NLP

Jennifer Sleeman, Milton Halem, Tim Finin, Mark Cane, Advanced Large Scale Cross Domain Temporal Topic Modeling Algorithms to Infer the Influence of Recent Research on IPCC Assessment Reports (poster), American Geophysical Union Fall Meeting 2016, American Geophysical Union, December 2016.

One way of understanding the evolution of science within a particular scientific discipline is by studying the temporal influences that research publications had on that discipline. We provide a methodology for conducting such an analysis by employing cross-domain topic modeling and local cluster mappings of those publications with the historical texts to understand exactly when and how they influenced the discipline. We apply our method to the Intergovernmental Panel on Climate Change (IPCC) Assessment Reports and the citations therein. The IPCC reports were compiled by thousands of Earth scientists, with assessments issued approximately every five years over a 30-year span, and cite over 200,000 research papers.

PhD Proposal: Understanding the Logical and Semantic Structure of Large Documents

December 9th, 2016, by Tim Finin, posted in Machine Learning, NLP, Ontologies

business documents

Dissertation Proposal

Understanding the Logical and Semantic
Structure of Large Documents 

Muhammad Mahbubur Rahman

11:00-1:00 Monday, 12 December 2016, ITE325b, UMBC

Current language understanding approaches focus mostly on small documents such as newswire articles, blog posts, product reviews and discussion forum entries. Understanding and extracting information from large documents such as legal documents, reports, business opportunities, proposals and technical manuals remains a challenging task, because such documents may be multi-themed, complex and cover diverse topics.

We aim to automatically identify and classify a document’s sections and subsections, infer their structure and annotate them with semantic labels in order to understand the semantic structure of the document. Understanding a document’s structure will significantly benefit and inform a variety of applications such as information extraction and retrieval, document categorization and clustering, document summarization, fact and relation extraction, text analysis and question answering.
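As a tiny illustration of one structural cue such a system might use, the sketch below infers section nesting depth from numbered headings; the headings are invented, and real documents of course require far richer features (layout, fonts, semantics).

```python
import re

# Toy illustration of one structural cue: numbered headings like "2.1.1"
# imply a section hierarchy. The headings below are invented examples.

headings = ["1. Introduction", "1.1 Background", "1.2 Scope",
            "2. Requirements", "2.1 Technical", "2.1.1 Hardware"]

def nesting_level(heading):
    """Depth implied by the dotted section number, e.g. '2.1.1' -> 3."""
    match = re.match(r"(\d+(?:\.\d+)*)", heading)
    return len(match.group(1).split(".")) if match else 0

# Pair each heading with its inferred depth to recover a simple outline.
outline = [(nesting_level(h), h) for h in headings]
```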

Committee: Drs. Tim Finin (Chair), Anupam Joshi, Tim Oates, Cynthia Matuszek, James Mayfield (JHU)

PhD Proposal: Ankur Padia, Dealing with Dubious Facts in Knowledge Graphs

November 29th, 2016, by Tim Finin, posted in KR, Machine Learning, NLP, Semantic Web

the skeptic

Dissertation Proposal

Dealing with Dubious Facts
in Knowledge Graphs

Ankur Padia

1:00-3:00pm Wednesday, 30 November 2016, ITE 325b, UMBC

Knowledge graphs are structured representations of facts in which nodes are real-world entities or events and edges are associations between pairs of entities. Knowledge graphs can be constructed using automatic or manual techniques. Manual techniques construct high-quality knowledge graphs but are expensive, time-consuming and not scalable. Hence, automatic information extraction techniques are used to create scalable knowledge graphs, but the extracted information can be of poor quality due to the presence of dubious facts.

An extracted fact is dubious if it is incorrect, inexact or correct but lacking evidence. A fact might be dubious because of errors made by NLP extraction techniques, improper design of the system’s internal components, the choice of learning techniques (semi-supervised or unsupervised), relatively poor-quality heuristics or the syntactic complexity of the underlying text. A preliminary analysis of several knowledge extraction systems (CMU’s NELL and JHU’s KELVIN) and observations from the literature suggest that dubious facts can be identified, diagnosed and managed. In this dissertation, I will explore approaches to identify and repair such dubious facts in a knowledge graph using several complementary approaches, including linguistic analysis, common sense reasoning and entity linking.
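One simple signal for identifying a dubious fact can be sketched with invented triples: a functional relation such as a person's birthplace should have a single value per entity, so conflicting extractions are suspect. The relation names and the conflicting fact below are made up for illustration.

```python
# Toy sketch of one dubious-fact signal: conflicting values of a
# functional relation. The triples and relation names are invented.

facts = [("Einstein", "bornIn", "Ulm"),
         ("Einstein", "bornIn", "Munich"),   # conflicting extraction
         ("Einstein", "wonAward", "Nobel Prize in Physics")]

FUNCTIONAL = {"bornIn"}  # relations admitting only one object per subject

def dubious(triples):
    """Return subject-relation pairs whose extracted objects conflict."""
    seen = {}
    flagged = set()
    for subj, rel, obj in triples:
        if rel in FUNCTIONAL:
            if (subj, rel) in seen and seen[(subj, rel)] != obj:
                flagged.add((subj, rel))
            seen[(subj, rel)] = obj
    return flagged

flags = dubious(facts)   # flags the conflicting birthplace extractions
```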

Committee: Drs. Tim Finin (Chair), Anupam Joshi, Tim Oates, Paul McNamee (JHU), Partha Talukdar (IISc, India)

Understanding Large Documents

November 28th, 2016, by Tim Finin, posted in Machine Learning, NLP

business documents

In this week’s ebiquity meeting, Muhammad Mahbubur Rahman will talk about his work on understanding large documents, such as business RFPs.

Large Document Understanding

Muhammad Mahbubur Rahman

Current language understanding approaches focus mostly on small documents such as newswire articles, blog posts, product reviews and discussion forum entries. Understanding and extracting information from large documents such as legal documents, reports, business opportunities, proposals and technical manuals remains a challenging task, because such documents may be multi-themed, complex and cover diverse topics.

We aim to automatically identify and classify a document’s sections and subsections, infer their structure and annotate them with semantic labels in order to understand the semantic structure of the document. Understanding a document’s structure will significantly benefit and inform a variety of applications such as information extraction and retrieval, document categorization and clustering, document summarization, fact and relation extraction, text analysis and question answering.

PhD proposal: Sandeep Nair Narayanan, Cognitive Analytics Framework to Secure Internet of Things

November 26th, 2016, by Tim Finin, posted in cybersecurity, IoT, Machine Learning

cognitive car

Dissertation Proposal

Cognitive Analytics Framework to Secure Internet of Things

Sandeep Nair Narayanan

1:00-3:30pm, Monday, 28 November 2016, ITE 325b

Recent years have seen the rapid growth and widespread adoption of the Internet of Things in a wide range of domains, including smart homes, healthcare, automotive, smart farming and smart grids. The IoT ecosystem consists of devices like sensors, actuators and control systems connected over heterogeneous networks. The connected devices can come from different vendors with different capabilities in terms of power requirements, processing capabilities, etc., so many security features are not implemented on devices with limited processing power. The level of security practice followed during their development can also differ. The lack of over-the-air firmware updates poses another very big security threat, given the devices’ long-term deployment requirements, and device malfunctioning is yet another threat to consider. Hence, it is imperative to have an external entity that monitors the ecosystem and detects attacks and anomalies.

In this thesis, we propose a security framework for IoT using cognitive techniques. While anomaly detection has been employed in various domains, some challenges, such as the need for an online approach, resource constraints, heterogeneity and distributed data collection, are unique to IoT and its predecessors like wireless sensor networks. Our framework will have an underlying knowledge base holding domain-specific information, a hybrid context generation module that generates complex contexts, and a fast reasoning engine that performs logical reasoning to detect anomalous activities. When raw sensor data arrives, the hybrid context generation module queries the knowledge base and generates different simple local contexts using various statistical and machine learning models. The inferencing engine then infers global complex contexts and detects anomalous activities using knowledge from streaming facts and domain-specific rules encoded in the ontology we will create. We will evaluate our techniques by realizing and validating them in the vehicular domain.
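As a toy version of the "simple local context" step, the sketch below flags a sensor reading as anomalous when it deviates sharply from its recent history; the data, window and threshold are illustrative, not from the proposal.

```python
import statistics

# Toy statistical context: flag readings far outside the recent window.
# The vehicle-speed stream and the threshold are invented for illustration.

def anomalies(readings, window=5, z_threshold=3.0):
    """Return indices of readings more than z_threshold standard
    deviations from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and abs(readings[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

speed = [60, 61, 59, 60, 62, 61, 60, 120, 61]  # sudden spike at index 7
flags = anomalies(speed)
```

In the proposed framework such local contexts would then feed the reasoning engine, which combines them with domain rules to infer global anomalous activity.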

Committee: Drs. Anupam Joshi (Chair), Tim Finin, Nilanjan Banerjee, Yelena Yesha, Wenjia Li (NYIT), Filip Perich (Google)

Knowledge for Cybersecurity

October 17th, 2016, by Tim Finin, posted in cybersecurity, KR

In this week’s ebiquity meeting (11:30am 10/18, ITE 346), Sudip Mittal will talk on Knowledge for Cybersecurity.

In the broad domain of security, analysts and policy makers need knowledge about the state of the world to make critical decisions, operational and tactical as well as strategic. This knowledge has to be extracted from different sources and then represented in a form that enables further analysis and decision making. Some of the data underlying this knowledge is in textual sources traditionally associated with Open Source Intelligence (OSINT); other data is in hidden sources like dark web vulnerability markets. Today, this is a mostly manual process. We wish to automate it by taking data from a variety of sources, extracting, representing and integrating the knowledge present, and then using the resulting knowledge graph to create various semantic agents that add value to the cybersecurity infrastructure.
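The end of that pipeline can be pictured with a toy triple store and pattern query, sketched below with invented vulnerability facts; a real system would use an RDF store and SPARQL rather than Python lists.

```python
# Toy knowledge-graph sketch: facts extracted from OSINT text become
# triples that agents can query. The CVE identifiers, product names and
# severities below are invented for illustration.

triples = [("CVE-2016-0001", "affects", "ExampleOS 1.0"),
           ("CVE-2016-0001", "hasSeverity", "critical"),
           ("CVE-2016-0002", "affects", "ExampleApp 2.3"),
           ("CVE-2016-0002", "hasSeverity", "low")]

def query(graph, subj=None, rel=None, obj=None):
    """Return triples matching the given pattern; None matches anything."""
    return [t for t in graph
            if (subj is None or t[0] == subj)
            and (rel is None or t[1] == rel)
            and (obj is None or t[2] == obj)]

# An agent asking: which vulnerabilities are critical?
critical = query(triples, rel="hasSeverity", obj="critical")
```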

Streamlining Management of Multiple Cloud Services

May 22nd, 2016, by Tim Finin, posted in cloud computing, KR, Ontologies, Semantic Web

cloudhandshake

Aditi Gupta, Sudip Mittal, Karuna Pande Joshi, Claudia Pearce and Anupam Joshi, Streamlining Management of Multiple Cloud Services, IEEE International Conference on Cloud Computing, June 2016.

With the increase in the number of cloud services and service providers, manual analysis of Service Level Agreements (SLAs), comparison of different service offerings and conformance regulation have become difficult tasks for customers. Cloud SLAs are policy documents describing the legal agreement between cloud providers and customers; an SLA specifies the commitments for availability and performance of services, the penalties associated with violations and the procedures for customers to receive compensation in case of service disruptions. The aim of our research is to develop technology solutions for automated cloud service management using Semantic Web and text mining techniques. In this paper we discuss in detail the challenges in automating cloud services management and present our preliminary work on extracting knowledge from the SLAs of different cloud services. We extracted two types of information from the SLA documents that can be useful for end users. First, the relationship between service commitments and financial credits, which we represented by enhancing the cloud service ontology proposed in our previous research. Second, rules in the form of obligations and permissions, extracted from the SLAs using modal and deontic logic formalizations. For our analysis, we considered six publicly available SLA documents from different cloud computing service providers.
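The obligation-and-permission idea can be pictured with a toy modal-verb classifier, sketched below; the sample sentences and keyword lists are invented, and the paper's deontic-logic formalization is considerably more involved than this.

```python
import re

# Toy sketch of the deontic-rule extraction idea: classify SLA sentences
# as obligations or permissions by their modal verbs. The keyword lists
# and sentences are illustrative only.

OBLIGATION = re.compile(r"\b(shall|must|will)\b", re.IGNORECASE)
PERMISSION = re.compile(r"\b(may|can)\b", re.IGNORECASE)

def classify(sentence):
    """Label a sentence as an obligation, a permission, or other."""
    if OBLIGATION.search(sentence):
        return "obligation"
    if PERMISSION.search(sentence):
        return "permission"
    return "other"

sentences = [
    "The provider shall credit the customer for downtime beyond 0.05%.",
    "The customer may request a service credit within 30 days.",
]
labels = [classify(s) for s in sentences]
```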

talk: Topic Modeling for Analyzing Document Collection, 11am Mon 5/16

May 12th, 2016, by Tim Finin, posted in Datamining, High performance computing, Machine Learning, NLP

Ogihara

Topic Modeling for Analyzing Document Collection

Mitsunori Ogihara
Computer Science, University of Miami

11:00am Monday, 16 May 2016, ITE 325b, UMBC

Topic modeling (in particular, Latent Dirichlet Allocation) is a technique for analyzing a large collection of documents. In topic modeling we view each document as a frequency vector over a vocabulary and each topic as a fixed distribution over the vocabulary. Given a desired number K of document classes, a topic modeling algorithm attempts to estimate, concurrently, K such distributions and, for each document, how much each of the K classes contributes to it. Mathematically, this is the problem of approximating the matrix formed by stacking the frequency vectors as the product of two non-negative matrices, where the column dimension of the first matrix and the row dimension of the second are both equal to K. Topic modeling has recently gained popularity for analyzing large collections of documents.
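The matrix view in that description can be pictured with a small non-negative matrix factorization, sketched below in plain Python with the classic multiplicative update rule; note that actual LDA is a probabilistic model, so this conveys only the matrix intuition, and the tiny frequency matrix is invented.

```python
import random

# Approximate a document-term frequency matrix V (documents x vocabulary)
# as W x H with K topics, via multiplicative-update NMF. All data invented.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def error(V, W, H):
    """Squared Frobenius reconstruction error ||V - WH||^2."""
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

def nmf(V, K, steps=200, eps=1e-9):
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(K)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(K)]
    for _ in range(steps):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[k][j] * num[k][j] / (den[k][j] + eps) for j in range(m)]
             for k in range(K)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][k] * num[i][k] / (den[i][k] + eps) for k in range(K)]
             for i in range(n)]
    return W, H

# A tiny 4-document, 5-word frequency matrix with two obvious "topics".
V = [[3, 2, 0, 0, 1],
     [4, 3, 0, 1, 0],
     [0, 0, 3, 2, 2],
     [0, 1, 4, 3, 2]]
W, H = nmf(V, K=2)   # rows of H are the two topic distributions
```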

In this talk I will present some examples of applying topic modeling: (1) sentiment analysis of a small collection of short patient surveys, (2) exploratory content analysis of a large collection of letters, (3) document classification based on topics and other linguistic features, and (4) exploratory analysis of a large collection of literary works. I will discuss not only the topic modeling steps themselves but also the preprocessing steps needed to prepare the documents for topic modeling.

Mitsunori Ogihara is a Professor of Computer Science at the University of Miami, Coral Gables, Florida. There he directs the Data Mining Group in the Center for Computational Science, a university-wide organization for providing resources and consultation for large-scale computation. He has published three books and approximately 190 papers in conferences and journals. He is on the editorial board for Theory of Computing Systems and International Journal of Foundations of Computer Science. Ogihara received a Ph.D. in Information Sciences from Tokyo Institute of Technology in 1993 and was a tenure-track/tenured faculty member in the Department of Computer Science at the University of Rochester from 1994 to 2007.
