UMBC ebiquity

Archive for the 'Machine Learning' Category

Down the rabbit hole: An Android system call study, 10:30am Mon 3/28

March 27th, 2016, by Tim Finin, posted in cybersecurity, Machine Learning, Mobile Computing, Security

Down the rabbit hole: An Android system call study

Prajit Kumar Das

10:30am, Monday, March 28, 2016, ITE 346

App permissions and application sandboxing are the fundamental security mechanisms that protect user data on mobile platforms. Our earlier work on permission analytics led us to conclude that studying an app’s requested access rights (permissions) alone is not enough to understand potential data breaches. Techniques such as privilege escalation have been used to gain further access to users and their data on mobile platforms like Android. Static code analysis and dynamic code execution can be studied to gather further insight into an app’s behavior. However, such behavior also needs to be studied at the lowest level of code execution: system calls. The system call is the fundamental interface between an application and the Linux kernel. In our current project, we are studying the system calls made by apps to gain a better understanding of their behavior.
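
A minimal sketch of how such traces might be collected, assuming a rooted Android device reachable over adb with strace installed; the package name, timeout and parsing below are illustrative, not our actual collection pipeline:

import re
import subprocess
from collections import Counter

PACKAGE = "com.example.app"   # hypothetical app under study

def app_pid(package):
    """Look up the app's process id on the device."""
    out = subprocess.run(["adb", "shell", "pidof", package],
                         capture_output=True, text=True).stdout.strip()
    return out.split()[0] if out else None

def trace_syscalls(pid, seconds=10):
    """Attach strace to the app's process and count the system calls it makes."""
    cmd = ["adb", "shell", "su", "-c",
           "timeout %d strace -f -p %s 2>&1" % (seconds, pid)]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    counts = Counter()
    for line in out.splitlines():
        m = re.search(r"\b([a-z_][a-z0-9_]*)\(", line)   # e.g. openat(, read(, sendto(
        if m:
            counts[m.group(1)] += 1
    return counts

pid = app_pid(PACKAGE)
if pid:
    for call, n in trace_syscalls(pid).most_common(10):
        print(call, n)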

Image description using deep neural networks

February 27th, 2016, by Tim Finin, posted in AI, Machine Learning, NLP

Image description using deep neural networks

Sunil Gandhi
10:30am, Monday, February 29, 2016, ITE 346

With the explosion of image data on the internet, there has been a need for the automatic generation of image descriptions. In this project we use deep neural networks to extract vectors from images and use them to generate text that describes the image. The model we built combines the pre-trained VGGNet, a model for image classification, with a recurrent neural network (RNN) for language modelling. The combination of the two neural networks provides a multimodal embedding between image vectors and word vectors. We trained the model on 8,000 images from the Flickr8k dataset and present our results on test images downloaded from the Internet. We provide a web service for image description generation that takes an image URL as input and returns an image description and image categories as output. Through our service, a user can correct the description automatically generated by the system so that we can improve our model using the corrected descriptions.

Sunil Gandhi is a Computer Science Ph.D. student at UMBC and a member of the Cognition, Robotics and Learning (CORAL) research lab.
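
The encoder-decoder idea in the abstract can be sketched as follows, assuming PyTorch and a recent torchvision; the layer sizes, vocabulary handling and use of VGG16 features here are illustrative, not the exact model from the talk:

import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    """Pretrained CNN encoder feeding an RNN language model (rough sketch)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.encoder = nn.Sequential(vgg.features, nn.Flatten())   # image -> 512*7*7 vector
        self.img_proj = nn.Linear(512 * 7 * 7, embed_dim)          # project into word-embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_vec = self.img_proj(self.encoder(images)).unsqueeze(1)  # (B, 1, E)
        words = self.embed(captions)                                 # (B, T, E)
        seq = torch.cat([img_vec, words], dim=1)                     # image vector starts the sequence
        hidden, _ = self.rnn(seq)
        return self.out(hidden)                                      # next-word logits at each step

At generation time the image vector seeds the recurrent network and words are sampled one at a time until an end-of-sentence token is produced.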

Developmental Memetic Algorithms: A Fast and Efficient Approach for Optimization Applications

February 15th, 2016, by Tim Finin, posted in Machine Learning

Developmental Memetic Algorithms: A Fast and
Efficient Approach for Optimization Applications

Ramin Ayanzadeh
10:30am, Monday, 22 February 2016, ITE 346

A memetic algorithm is a hybrid, intelligent optimization method for problem solving. These algorithms are similar in nature to genetic algorithms in that they follow evolutionary strategies, but they also incorporate a refinement phase during which they learn about the problem and the search space. The efficiency of these algorithms depends on the nature and architecture of the imitation operator used. In this presentation, after a brief introduction, the pros and cons of employing memetic algorithms will be discussed. Afterwards, developmental memetic algorithms will be proposed as an approach for reducing the costs of using standard memetic algorithms. A developmental memetic algorithm is an adaptive memetic algorithm in which the influence of the environment on the learning abilities of each individual is set adaptively. This translates into a degree of autonomous behavior once individuals have gained some experience. Simulation results on benchmark functions show that this adaptive approach can increase the quality of the results and decrease the computation time simultaneously. The adaptive memetic algorithm also shows better stability when compared with the classic memetic algorithm.
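
As a concrete illustration of the general memetic idea, evolutionary search combined with a per-individual refinement ("learning") step, here is a minimal Python sketch on a standard benchmark function; it is not the developmental/adaptive variant presented in the talk, where the strength of the refinement step would itself be adapted:

import random

def sphere(x):
    """Benchmark objective: minimize the sum of squares."""
    return sum(v * v for v in x)

def local_refine(x, f, step=0.1, iters=20):
    """The memetic ('learning') part: simple hill climbing on one individual."""
    best = list(x)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in best]
        if f(cand) < f(best):
            best = cand
    return best

def memetic(f, dim=5, pop_size=30, gens=100):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]                     # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 + random.gauss(0, 0.1)  # crossover + mutation
                     for ai, bi in zip(a, b)]
            children.append(local_refine(child, f))        # refinement applied to each offspring
        pop = parents + children
    return min(pop, key=f)

print(sphere(memetic(sphere)))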

Using Data Analytics to Detect Anomalous States in Vehicles

December 28th, 2015, by Tim Finin, posted in Big data, cybersecurity, Datamining, Machine Learning, Security

 

Sandeep Nair, Sudip Mittal and Anupam Joshi, Using Data Analytics to Detect Anomalous States in Vehicles, Technical Report, December 2015.

Vehicles are becoming more and more connected, which opens up a larger attack surface that affects not only the passengers inside a vehicle but also the people around it. These vulnerabilities exist because modern automotive systems are built on the older, comparatively insecure CAN bus framework, which lacks even basic authentication. Since a new protocol can only help future vehicles and not older ones, our approach treats the issue as a data analytics problem and uses machine learning techniques to secure cars. We develop a hidden Markov model to detect anomalous states from real data collected from vehicles. Using this model, we are able to detect anomalous states and issue alerts while a vehicle is in operation. Our model could be integrated as a plug-n-play device in both new and old cars.
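
A minimal sketch of the anomaly-detection idea, assuming the hmmlearn package; the feature windows (for example decoded CAN signals such as speed and RPM) and the thresholding rule are illustrative, not the report's exact model:

import numpy as np
from hmmlearn import hmm

def train_model(normal_windows, n_states=4):
    """Fit a Gaussian HMM to feature windows collected during normal driving."""
    X = np.vstack(normal_windows)
    lengths = [len(w) for w in normal_windows]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def is_anomalous(model, window, threshold):
    """Flag a window whose log-likelihood under the normal-behavior HMM is too low."""
    return model.score(np.asarray(window)) < threshold

A threshold could be chosen, for instance, from a low percentile of per-window scores on held-out normal data, so that alerts are raised only for sequences the model finds very unlikely.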

Assessing credibility of content on Twitter using automated techniques

November 29th, 2015, by Tim Finin, posted in Machine Learning, Semantic Web, Social media, Web

Aditi Gupta

10:30am, Monday 30 November 2015, ITE 346

Online social media is a powerful platform for the dissemination of information during real-world events. Beyond the challenges of the volume, variety and velocity of content generated on online social media, veracity poses a much greater challenge for the effective utilization of this content by citizens, organizations, and authorities. Veracity of information refers to the trustworthiness, credibility, accuracy and completeness of the content. This work addressed the challenge of the veracity or trustworthiness of content posted on social media. We focused on Twitter, one of the most popular microblogging web services today. We provided an in-depth analysis of misinformation spread on Twitter during real-world events. We showed the effectiveness of automated techniques to detect misinformation on Twitter using a combination of content, metadata, network, user profile and temporal features. We developed and deployed a novel framework, TweetCred, for providing an indication of the trustworthiness or credibility of tweets posted during events. TweetCred, which was available as a browser plug-in, was installed and used by real Twitter users.

Dr. Aditi Gupta is a research associate in the Computer Science and Electrical Engineering Department at UMBC. She received her Ph.D. from the Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi) in 2015 for her dissertation on designing and evaluating techniques to mitigate misinformation spread on microblogging web services.
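
As a rough sketch of credibility classification from content and user-profile features, one could train a standard classifier along these lines with scikit-learn; the features and field names below are illustrative, while TweetCred's actual feature set, ranking model and training data are described in the published work:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def tweet_features(tweet):
    text, user = tweet["text"], tweet["user"]
    return [
        len(text),                                   # content features
        text.count("!"),
        text.count("http"),
        int(any(w in text.lower() for w in ("breaking", "omg"))),
        user["followers_count"],                     # user-profile features
        user["friends_count"],
        int(user["verified"]),
        tweet["retweet_count"],                      # propagation feature
    ]

def train(tweets, labels):                           # labels: 1 = credible, 0 = not
    X = [tweet_features(t) for t in tweets]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)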

Semantic Interpretation of Structured Log Files

November 21st, 2015, by Tim Finin, posted in Machine Learning, Semantic Web

 

Piyush Nimbalkar, Semantic Interpretation of Structured Log Files, M.S. thesis, University of Maryland, Baltimore County, August, 2015.

Log files comprise a record of different events happening in various applications, operating systems and even network devices. Originally they were used to record information for diagnostic and debugging purposes. Nowadays, logs are also used to track events for auditing and forensics in case of malicious activities or system attacks. Various software systems such as intrusion detection systems, web servers, anti-virus and anti-malware systems, firewalls and network devices generate logs with useful information that can be used to protect against such attacks. Analyzing log files can help in proactively avoiding attacks against these systems. While there are existing tools that do a good job when the format of a log file is known, the challenge lies in cases where log files come from unknown devices and are of unknown formats. We propose a framework that takes any log file and automatically produces a semantic interpretation as a set of RDF Linked Data triples. The framework splits a log file into columns using regular-expression-based or dictionary-based classifiers. Leveraging and modifying our existing work on inferring the semantics of tables, we identify every column in a log file and map it to concepts either from a general-purpose knowledge base like DBpedia or from domain-specific ontologies such as IDS. We also identify relationships between the various columns in such log files. Converting large and verbose log files into such semantic representations will help in better search, integration and rich reasoning over the data.
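
A minimal sketch of the column-labelling and triple-generation steps, using rdflib; the regular expressions, namespace and class names are illustrative and not the thesis's actual mappings to DBpedia or the IDS ontology:

import re
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/log#")        # hypothetical namespace

COLUMN_PATTERNS = {                              # regex-based column classifiers
    "IPAddress": re.compile(r"^\d{1,3}(\.\d{1,3}){3}$"),
    "Timestamp": re.compile(r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}$"),
    "Port":      re.compile(r"^\d{1,5}$"),
}

def classify(value):
    for label, pattern in COLUMN_PATTERNS.items():
        if pattern.match(value):
            return label
    return "Unknown"

def to_rdf(log_lines):
    """Split each log line into columns and emit RDF triples for the cells."""
    g = Graph()
    for i, line in enumerate(log_lines):
        row = EX["entry%d" % i]
        for j, value in enumerate(line.split()):
            cell = EX["entry%d_col%d" % (i, j)]
            g.add((row, EX.hasField, cell))
            g.add((cell, RDF.type, EX[classify(value)]))
            g.add((cell, EX.value, Literal(value)))
    return g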

talk: Introduction to Deep Learning

November 20th, 2015, by Tim Finin, posted in Machine Learning, NLP

Introduction to Deep Learning

Zhiguang Wang and Hang Gao

10:00am Monday, 23 November 2015, ITE 346

Deep learning has been a hot topic and all over the news lately. It was introduced with the ambition of moving machine learning closer to artificial intelligence, one of its original goals. Since the introduction of the concept, various deep learning algorithms have been proposed and have achieved significant success in their corresponding areas. This talk aims to provide a brief overview of the most common deep learning algorithms, along with their applications to different tasks.

In this talk, Steve (Zhiguang Wang) will give a brief introduction to the application of deep learning algorithms in computer vision and speech, some basic viewpoints about training methods and attacking the non-convexity in deep neural nets, along with other miscellaneous topics in deep learning.

Hang Gao will then talk about common applications of deep learning algorithms in natural language processing, covering semantic, syntactic and sentiment analysis. He will also discuss the limits of current applications of deep learning in NLP and offer some ideas on possible future trends.

Lyrics Augmented Multi-modal Music Recommendation

October 29th, 2015, by Tim Finin, posted in Machine Learning, NLP, RDF, Semantic Web

Lyrics Augmented Multi-modal
Music Recommendation

Abhay Kashyap

1:00pm Friday 30 October, ITE 325b

In an increasingly mobile and connected world, digital music consumption has rapidly increased. More recently, faster and cheaper mobile bandwidth has given the average mobile user the potential to access large troves of music through streaming services like Spotify and Google Music that boast catalogs with tens of millions of songs. At this scale, effective music recommendation is critical for music discovery and personalized user experience.

Recommenders that rely on collaborative information suffer from two major problems: the long tail problem, which is induced by popularity bias, and the cold start problem caused by new items with no data. In such cases, they fall back on content to compute similarity. For music, content based features can be divided into acoustic and textual domains. Acoustic features are extracted from the audio signal while textual features come from song metadata, lyrical content, collaborative tags and associated web text.

Research in content-based music similarity has largely focused on the acoustic domain, while text-based features have been limited to metadata, tags and shallow methods for web text and lyrics. Song lyrics carry information about the sentiment and topic of a song that cannot be easily extracted from the audio. Past work has shown that even shallow lyrical features improved performance over audio-only features and, in some tasks like mood classification, outperformed them. In addition, lyrics are easily available, which makes them a valuable resource and warrants a deeper analysis.

The goal of this research is to fill the lyrical gap in existing music recommender systems. The first step is to build algorithms to extract and represent the meaning and emotion contained in the song’s lyrics. The next step is to effectively combine lyrical features with acoustic and collaborative information to build a multi-modal recommendation engine.

For this work, the genre is restricted to Rap because it is a lyrics-centric genre and techniques built for Rap can be generalized to other genres. It was also the highest streamed genre in 2014, accounting for 28.5% of all music streamed. Rap lyrics are scraped from dedicated lyrics websites like ohhla.com and genius.com while the semantic knowledge base comprising artists, albums and song metadata come from the MusicBrainz project. Acoustic features are directly used from EchoNest while collaborative information like tags, plays, co-plays etc. come from Last.fm.

Preliminary work involved the extraction of compositional style features such as rhyme patterns and density, vocabulary size, and simile and profanity usage from over 10,000 songs by over 150 artists. These features are available for users to browse and explore through interactive visualizations on Rapalytics.com. Song semantics were represented using off-the-shelf neural-language-based vector models (doc2vec). Future work will involve building novel language models for lyrics and latent representations of attributes that are driven by collaborative information for multi-modal recommendation.
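
The off-the-shelf doc2vec representation mentioned above can be sketched with gensim (4.x API); the toy corpus and hyperparameters are illustrative only:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

lyrics = {                                        # hypothetical corpus: song id -> lyrics text
    "song_1": "started from the bottom now we here ...",
    "song_2": "it was all a dream i used to read ...",
}

docs = [TaggedDocument(words=text.split(), tags=[song_id])
        for song_id, text in lyrics.items()]

model = Doc2Vec(docs, vector_size=300, window=5, min_count=1, epochs=40)

# Song vectors can then feed a similarity-based recommender:
print(model.dv.most_similar("song_1", topn=1))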

Committee: Drs. Tim Finin (Chair), Anupam Joshi, Pranam Kolari (WalmartLabs), Cynthia Matuszek and Tim Oates

Demystifying Word2Vec: A Hands-on Tutorial

October 16th, 2015, by Tim Finin, posted in Big data, Machine Learning, NLP

Demystifying Word2Vec – A Hands-on Tutorial

Abhay Kashyap

10:30am Monday, 19 October 2015, ITE 456

In the world of NLP, Word2Vec is one of the coolest kids in town! But what exactly is it and how does it work? More importantly, how is it used/useful?

For the first 10-15 minutes, we will go over distributional and distributed representations of words and the neural language model behind Word2Vec. We will also briefly look at doc2vec, the extension of Word2Vec for longer pieces of text.

For the remainder of the time (45-60 minutes), we will get our feet wet by running Word2Vec on a dataset which will then be followed by discussions about potential ways it can be useful for your own work.

What to bring – Any computing machine with Python installed, lots of curiosity and some delicious snacks for me maybe? We will use the excellent gensim package for python to run Word2Vec along with cython to speed things up. If you aren’t familiar with Python or don’t like it, no worries! It’s really just 5-6 lines of code! The training dataset will be provided. If you wish to bring your own, that’s cool too.
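
For reference, the handful of lines referred to above looks roughly like this with gensim's 4.x API ("corpus.txt", one tokenized sentence per line, stands in for the dataset provided at the meeting):

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")            # streams the training corpus from disk
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)
model.save("word2vec.model")
print(model.wv.most_similar("king", topn=5))      # nearest neighbours in the vector space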

NOTE: We will hold this week’s Ebiquity meeting in ITE 456.

talk: Is your personal data at risk? App analytics to the rescue

September 26th, 2015, by Tim Finin, posted in cybersecurity, Machine Learning, Privacy, Security


Is your personal data at risk?
App analytics to the rescue

Prajit Kumar Das

10:30am Monday, 28 September 2015, ITE 346

According to VirusTotal, a prominent virus and malware analysis tool, the Google Play Store contains a few thousand apps from major malware families. Given such a revelation, access control systems for mobile data management have reached a state of critical importance. We propose the development of a system that would help us detect the pathways through which users’ data is being stolen from their mobile devices. We use a multi-layered approach that includes app metadata analysis, understanding code patterns, and detecting and eventually controlling dynamic data flow when such an app is installed on a mobile device. In this presentation we focus on the first part of our work and discuss the merits and flaws of our unsupervised learning mechanism for detecting possibly malicious behavior in apps from the Google Play Store.
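
One simple way to set up such an unsupervised analysis, sketched here with scikit-learn, is to represent each app by its requested permissions and cluster the apps, then inspect small or outlying clusters for suspicious permission combinations; the apps, features and cluster count below are illustrative, not the talk's actual pipeline:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

apps = {                                            # hypothetical app metadata: package -> permissions
    "com.example.flashlight": "CAMERA READ_CONTACTS SEND_SMS INTERNET",
    "com.example.notepad":    "READ_EXTERNAL_STORAGE WRITE_EXTERNAL_STORAGE",
    "com.example.weather":    "ACCESS_FINE_LOCATION INTERNET",
}

vec = CountVectorizer(lowercase=False, token_pattern=r"[A-Z_]+")
X = vec.fit_transform(apps.values())                # app x permission matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for app, label in zip(apps, labels):
    print(label, app)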

SVM-CASE: An SVM-based Context Aware Security Framework for Vehicular Ad-hoc Networks

September 1st, 2015, by Tim Finin, posted in cybersecurity, Machine Learning


Wenjia Li, Anupam Joshi and Tim Finin, SVM-CASE: An SVM-based Context Aware Security Framework for Vehicular Ad-hoc Networks, IEEE 82nd Vehicular Technology Conf., Boston, Sept. 2015.

Vehicular Ad-hoc Networks (VANETs) are known to be very susceptible to various malicious attacks. To detect and mitigate these attacks, many security mechanisms have been studied for VANETs. In this paper, we propose a context-aware security framework for VANETs that uses the Support Vector Machine (SVM) algorithm to automatically determine the boundary between malicious nodes and normal ones. Compared to existing security solutions for VANETs, the proposed framework is more resilient to the context changes that are common in VANETs, such as malicious nodes altering their attack patterns over time or rapid changes in environmental factors like motion speed and transmission range. We compare our framework to existing approaches and present evaluation results obtained from simulation studies.
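
A minimal sketch of the core step, learning an SVM boundary between normal and malicious node behavior from observed features, using scikit-learn; the features (packet drop rate, motion speed, transmission range) and the synthetic data are illustrative, not the paper's experimental setup:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# columns: packet drop rate, speed (km/h), transmission range (m)
normal    = rng.normal([0.05, 60, 250], [0.02, 10, 30], size=(200, 3))
malicious = rng.normal([0.40, 60, 250], [0.10, 10, 30], size=(40, 3))

X = np.vstack([normal, malicious])
y = np.array([0] * len(normal) + [1] * len(malicious))   # 1 = malicious node

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict([[0.35, 55, 240]]))                     # high drop rate: likely flagged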

Platys: From Position to Place-Oriented Mobile Computing

June 8th, 2015, by Tim Finin, posted in AI, KR, Machine Learning, Mobile Computing, Ontologies

The NSF-sponsored Platys project explored the idea that places are more than just GPS coordinates. They are concepts rich with semantic information, including people, activities, roles, functions, time and purpose. Our mobile phones can learn to recognize the places we are in and use information about them to provide better services.

Laura Zavala, Pradeep K. Murukannaiah, Nithyananthan Poosamani, Tim Finin, Anupam Joshi, Injong Rhee and Munindar P. Singh, Platys: From Position to Place-Oriented Mobile Computing, AI Magazine, v36, n2, 2015.

The Platys project focuses on developing a high-level, semantic notion of location called place. A place, unlike a geospatial position, derives its meaning from a user’s actions and interactions in addition to the physical location where it occurs. Our aim is to enable the construction of a large variety of applications that take advantage of place to render relevant content and functionality and, thus, improve user experience. We consider elements of context that are particularly related to mobile computing. The main problems we have addressed to realize our place-oriented mobile computing vision are representing places, recognizing places, and engineering place-aware applications. We describe the approaches we have developed for addressing these problems and related subproblems. A key element of our work is the use of collaborative information sharing where users’ devices share and integrate knowledge about places. Our place ontology facilitates such collaboration. Declarative privacy policies allow users to specify contextual features under which they prefer to share or not share their information.
