UMBC ebiquity

Archive for the 'GENERAL' Category

Dealing with Dubious Facts in Knowledge Graphs

November 22nd, 2016, by Tim Finin, posted in GENERAL

In this week’s meeting, Ankur Padia will talk about his work on the problem of identifying and managing ‘dubious facts’ extracted from text and added to a knowledge graph.

Dealing with Dubious Facts in Knowledge Graphs

Ankur Padia

Knowledge graphs represent real-world facts and events with entities as nodes and relations as labeled edges. A knowledge graph is generally constructed automatically by extracting facts from a text corpus using information extraction (IE) techniques. Such IE techniques are scalable but often extract low-quality (or dubious) facts due to errors caused by NLP libraries, the internal components of an extraction system, the choice of learning techniques, heuristics, and the syntactic complexity of the underlying text. We wish to explore techniques to process such dubious facts and improve the quality of a knowledge graph.

Dynamic Topic Modeling to Infer the Influence of Research Citations on IPCC Assessment Reports

November 19th, 2016, by Tim Finin, posted in GENERAL

A temporal analysis of the 200,000 documents cited in thirty years worth of Intergovernmental Panel on Climate Change (IPCC) assessment reports sheds light on how climate change research is evolving.

Jennifer Sleeman, Milton Halem, Tim Finin and Mark Cane, Dynamic Topic Modeling to Infer the Influence of Research Citations on IPCC Assessment Reports, Big Data Challenges, Research, and Technologies in the Earth and Planetary Sciences Workshop, IEEE Int. Conf. on Big Data, December 2016.

A common Big Data problem is the need to integrate large temporal data sets from various data sources into one comprehensive structure. The ability to correlate evolving facts between data sources can be especially useful in supporting application functions such as inference and influence identification. As a real-world application, we use climate change publications from the Intergovernmental Panel on Climate Change, which publishes climate change assessment reports every five years and currently has over 25 years of published content. These reports reference thousands of research papers. We use dynamic topic modeling as a basis for combining the report and citation domains into one structure. We are able to correlate documents between the two domains to understand how the research has influenced the reports and how this influence has changed over time. In this use case, the report topic model used 410 documents with a vocabulary of 5,911 terms, while the citation topic model covered close to 200,000 research papers with a vocabulary of 25,154 terms.
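The cross-domain correlation step can be pictured with a small sketch. This is not the paper's implementation; it only assumes that a (dynamic) topic model has already assigned each document a topic-probability vector for a time slice, and then matches report documents to the citations whose topic mixtures are most similar. The document names and vectors below are hypothetical.

```python
import math

def cosine(p, q):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def best_matches(report_topics, citation_topics, k=2):
    """For each report document, find the k citations with the most similar topic mixture."""
    matches = {}
    for rid, rvec in report_topics.items():
        ranked = sorted(citation_topics.items(),
                        key=lambda item: cosine(rvec, item[1]),
                        reverse=True)
        matches[rid] = [cid for cid, _ in ranked[:k]]
    return matches

# Toy topic distributions over 3 topics (hypothetical documents).
reports = {"AR5-ch2": [0.7, 0.2, 0.1]}
citations = {"paper-a": [0.6, 0.3, 0.1],
             "paper-b": [0.1, 0.1, 0.8],
             "paper-c": [0.8, 0.1, 0.1]}
print(best_matches(reports, citations, k=2))  # → {'AR5-ch2': ['paper-c', 'paper-a']}
```

In the full setting, each time slice of the dynamic topic model yields its own vectors, so the same comparison can track how report-to-citation influence shifts over the years.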

Inferring Relations in Knowledge Graphs with Tensor Decomposition

November 6th, 2016, by Tim Finin, posted in GENERAL

Ankur Padia, Kostantinos Kalpakis, and Tim Finin, Inferring Relations in Multi-relational Knowledge Graphs with Tensor Decomposition, IEEE BigData, Dec. 2016.

Multi-relational data, like knowledge graphs, are generated from multiple data sources by extracting entities and their relationships. We often want to include inferred, implicit or likely relationships that are not explicitly stated, which can be viewed as link prediction in a graph. Tensor decomposition models have been shown to produce state-of-the-art results in link-prediction tasks. We describe a simple but novel extension to an existing tensor decomposition model that predicts missing links using similarity among tensor slices, in contrast to existing tensor decomposition models, which assume that each slice contributes equally to predicting links. Our extended model performs better than the original tensor decomposition and its non-negative variant in an evaluation on several datasets.
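As a rough illustration of the slice-similarity idea (a sketch, not the model from the paper): each relation in a multi-relational graph is a frontal slice of a binary tensor, one n×n adjacency matrix per relation, and similarity between slices can be measured, for example, as cosine similarity of the flattened adjacency matrices. The relation names and toy graph below are hypothetical.

```python
import math

def flatten(slice_):
    """Flatten an n x n adjacency matrix (one tensor slice) into a vector."""
    return [v for row in slice_ for v in row]

def slice_similarity(s1, s2):
    """Cosine similarity between two frontal slices of a relation tensor."""
    a, b = flatten(s1), flatten(s2)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy graph over 3 entities and 2 relations: "advises" and "mentors"
# hold between mostly the same entity pairs, so their slices score as
# similar, and one relation's links can inform predictions for the other.
advises = [[0, 1, 1],
           [0, 0, 0],
           [0, 0, 0]]
mentors = [[0, 1, 0],
           [0, 0, 0],
           [0, 0, 0]]
print(round(slice_similarity(advises, mentors), 3))  # → 0.707
```

A decomposition model can then weight each slice's contribution to a predicted link by similarities like this one, instead of treating all relations as equally informative.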

Forum on Cybersecurity Concerns in Local Governments, Baltimore 4/15

April 3rd, 2016, by Tim Finin, posted in GENERAL

The UMBC School of Public Policy, bwtech@UMBC Cyber Incubator, and UMBC Center for Cybersecurity are sponsoring a forum on Cybersecurity Concerns in Local Governments from 8:30-11:00am on Friday, April 15, 2016 at the Columbus Center in Baltimore.

“Like their counterparts in the private sector, it is important for local government officials and managers to understand cybersecurity threats to their websites and information systems and to take actions to prevent cyber attacks. The purpose of this forum is to present research on cybersecurity initiatives in local governments in Maryland, and highlight the public policy implications of these initiatives.”

There is no charge to attend this forum, but registration is required. For questions or more information, contact policyforum@umbc.edu.

8:30 a.m. Coffee, light breakfast and networking

9:00 Welcome and Overview

Cybersecurity Challenges in American Local Government
Donald F. Norris, Professor and Director, UMBC School of Public Policy

Policy-driven Approaches to Security
Anupam Joshi, Professor and Director, UMBC Center for Cybersecurity

Perspectives from Maryland Local Governments
Rob O’Connor, Chief Technology Officer, Baltimore County
Jerome Mullen, Chief Technology Officer, City of Baltimore

10:15 Audience Q & A

11:00 Adjourn

1100-line Perl emulator for BBN-LISP runs original Doctor program

January 6th, 2015, by Tim Finin, posted in GENERAL

Jeff Shrager’s Genealogy of Eliza project has added an 1100-line Perl emulator written by James Markevitch for the 1966 version of BBN-LISP for the PDP-1 computer that can run Bernie Cosell’s original LISP version of doctor.

Markevitch writes in the comments

This is a Perl hack to implement the 1966 version of BBN-LISP for the PDP-1 computer. This was written primarily to run the 1966 LISP version of the “doctor” program (aka Eliza) written by Bernie Cosell. The intent is to be compatible with the version of LISP described in The BBN-LISP System, Daniel G. Bobrow et al, February, 1966, AFCRL-66-180 [BBN66]. However, because many of the quirks of that version of LISP are not documented, The BBN-LISP System Reference Manual April 1969, D. G. Bobrow et al [BBN69] was used as a reference. Finally, LISP 1.5 Programmer’s Manual, John McCarthy et al [LISP1.5] was also used as a reference. N.B. The 1966 version of BBN-LISP has differences from later versions and this interpreter will not properly execute programs written for those later versions.

You can download the Perl Lisp emulator, the doctor lisp code and the script file from the elizagen github repository.

UMBC seeks nine new computing faculty

December 13th, 2014, by Tim Finin, posted in GENERAL

UMBC has a total of nine open full-time positions for computing faculty including five tenure track professors, a professor of the practice and three lecturers.

UMBC’s Computer Science and Electrical Engineering department is seeking to fill five positions for the coming year: two tenure-track positions in Computer Science and up to three full-time lecturers. See the CSEE jobs page for more information.

The College of Engineering and Information Technology has a position for a full-time lecturer or Professor of Practice to focus on the needs of incoming computing majors through teaching, advising, and helping develop programs in computing. This person will work closely with faculty in the Computer Science and Electrical Engineering Department and Information Systems Department.

UMBC’s Information Systems department is accepting applications for three tenure track faculty positions in data science, software engineering and human-centered computing.

TISA: Topic Independence Scoring Algorithm

June 23rd, 2014, by Tim Finin, posted in GENERAL

Justin Martineau, Doreen Cheng and Tim Finin, TISA: topic independence scoring algorithm. In Proc. 9th Int. Conf. on Machine Learning and Data Mining (MLDM’13), pp. 555-570, July 2013, Springer-Verlag.

Textual analysis using machine learning is in high demand for a wide range of applications including recommender systems, business intelligence tools, and electronic personal assistants. Some of these applications need to operate over a wide and unpredictable array of topic areas, but current in-domain, domain adaptation, and multi-domain approaches cannot adequately support this need, due to their low accuracy on topic areas that they are not trained for, slow adaptation speed, or high implementation and maintenance costs.

To create a truly domain-independent solution, we introduce the Topic Independence Scoring Algorithm (TISA) and demonstrate how to build a domain-independent bag-of-words model for sentiment analysis. This model is the best-performing sentiment model published on the popular 25-category Amazon product reviews dataset. The model is on average 89.6% accurate as measured on 20 held-out test topic areas. This compares very favorably with the 82.28% average accuracy of the 20 baseline in-domain models. Moreover, the TISA model is uniformly accurate, with a variance of 5 percentage points, which provides strong assurance that the model will be just as accurate on new topic areas. Consequently, TISA’s models are truly domain independent. In other words, they require no changes or human intervention to accurately classify documents in never-before-seen topic areas.
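The published TISA scoring function is not reproduced here, but the intuition behind topic-independence scoring can be sketched: a word is a good domain-independent sentiment feature when its polarity is consistent across topic areas. In this toy version, each topic supplies a per-word polarity in [-1, 1], and words with a strong mean and low cross-topic variance are kept; the topics, words, and thresholds are hypothetical.

```python
def topic_independence(scores_by_topic):
    """scores_by_topic: {topic: {word: polarity}} -> {word: (mean, variance)}."""
    words = set()
    for scores in scores_by_topic.values():
        words.update(scores)
    result = {}
    for w in words:
        vals = [scores_by_topic[t].get(w, 0.0) for t in scores_by_topic]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        result[w] = (mean, var)
    return result

# Hypothetical per-topic polarities for three words.
scores = {
    "books":       {"great": 0.9, "unreliable": -0.8, "long": -0.6},
    "electronics": {"great": 0.8, "unreliable": -0.9, "long": 0.7},
}
stats = topic_independence(scores)
# "great" and "unreliable" are consistent across topics; "long" flips
# sign (a long book is bad, long battery life is good), so its high
# cross-topic variance marks it as topic-dependent.
reliable = [w for w, (m, v) in stats.items() if abs(m) > 0.5 and v < 0.1]
print(sorted(reliable))  # → ['great', 'unreliable']
```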

Do not be a Gl***hole, use Face-Block.me!

March 27th, 2014, by Prajit Kumar Das, posted in Ebiquity, Google, Mobile Computing, Policy, Semantic Web, Social, Wearable Computing

If you are a Google Glass user, you might have been greeted with concerned looks or raised eyebrows in public places. There has been a lot of chatter on the “interweb” about the loss of privacy that results from people taking your picture with Glass without notice. Google Glass has simplified photography, but, as often happens with revolutionary technology, people are worried about its potential misuse.

FaceBlock helps protect the privacy of people around you by allowing them to specify whether or not they want to be included in your pictures. This new application, developed through a collaboration between researchers from the Ebiquity Research Group at the University of Maryland, Baltimore County and the Distributed Information Systems (DIS) group at the University of Zaragoza (Spain), selectively obscures the faces of people in pictures taken by Google Glass.

Comfort at the cost of Privacy?

As the saying goes, “The best camera is the one that’s with you.” Google Glass fits this description, as it is always available and can take a picture with a simple voice command (“Okay Glass, take a picture”). This allows users to capture spontaneous life moments effortlessly. On the flip side, it raises significant privacy concerns, since pictures can be taken without one’s consent. If one does not use this device responsibly, one risks being labelled a “Glasshole”. Quite recently, a Google Glass user was assaulted by bar patrons who objected to her wearing the device. The list of establishments that have banned Google Glass from their premises grows by the day. The dos and don’ts for Glass users released by Google are a good first step, but they do not solve the problem of privacy violation.

Privacy-Aware pictures to the rescue

FaceBlock takes regular pictures from your smartphone or Google Glass as input and converts them into privacy-aware pictures. The output is generated using a combination of face detection and face recognition algorithms. Using FaceBlock, a user can take a picture of herself and specify her policy regarding pictures taken by others (in this case, ‘obscure my face in pictures from strangers’). The application automatically generates a face identifier for this picture; the identifier is a mathematical representation of the image. To learn more about how FaceBlock works, watch the following video.

Using Bluetooth, FaceBlock can automatically detect nearby Glass users and share this policy with them. After receiving a face identifier from a nearby user, the following post-processing steps happen on Glass, as shown in the images.
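The matching step can be sketched as follows. This is a simplification with hypothetical identifier vectors and a hypothetical distance threshold; the real application computes identifiers with face detection and recognition algorithms. A received identifier is compared against identifiers for each face found in the new picture, and faces that match a shared policy are flagged for blurring.

```python
import math

def distance(a, b):
    """Euclidean distance between two face-identifier vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def faces_to_blur(received_ids, detected_faces, threshold=0.5):
    """received_ids: {user: identifier}; detected_faces: {face: identifier}.
    Return the faces close enough to some received identifier to be obscured."""
    to_blur = []
    for face, vec in detected_faces.items():
        if any(distance(vec, rid) < threshold for rid in received_ids.values()):
            to_blur.append(face)
    return to_blur

# One nearby user shared a policy; two faces were detected in the picture.
received = {"alice": [0.1, 0.9, 0.3]}
detected = {"face-1": [0.12, 0.88, 0.31],  # close to alice's identifier: blur
            "face-2": [0.9, 0.1, 0.7]}     # no matching policy: leave as-is
print(faces_to_blur(received, detected))   # → ['face-1']
```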

What promises does it hold?

FaceBlock is a proof-of-concept implementation of a system that can create privacy-aware pictures using smart devices. The pervasiveness of privacy-aware pictures could be a step toward balancing privacy needs with the comfort afforded by technology. Thus, we can get the best out of wearable technology without being oblivious to the privacy of those around us.

FaceBlock is part of the efforts of Ebiquity and DIS to build systems for preserving user privacy on mobile devices. For more details, visit http://face-block.me

UMBC CSEE Department seeks five new faculty

October 16th, 2013, by Tim Finin, posted in GENERAL

The UMBC Computer Science and Electrical Engineering department is searching for new full-time faculty: two in Computer Science, one in Electrical and Computer Engineering, one Computer Science professor of the practice, and one Computer Science/Information Systems lecturer. See the CSEE Jobs page for detailed information on the positions, preferred specializations and the application process.

data.ac.uk site devoted to linked open data development

March 26th, 2013, by Tim Finin, posted in GENERAL

data

If you are interested in the semantic web and linked data, data.ac.uk looks like a site worth investigating.

“This is a landmark site for academia providing a single point of contact for linked open data development. It not only provides access to the know-how and tools to discuss and create linked data and data aggregation sites, but also enables access to, and the creation of, large aggregated data sets providing powerful and flexible collections of information. Here at Data.ac.uk we’re working to inform national standards and assist in the development of national data aggregation subdomains.”

Three ebiquity student posters at the 2012 GHC

October 5th, 2012, by Tim Finin, posted in Ebiquity, GENERAL

Three Ph.D. students from the ebiquity lab have posters at the ACM Student Research Competition and General Poster Session of the 2012 Grace Hopper Celebration of Women in Computing conference. The GHC conference is the largest technical conference for women in computing and results in collaborative proposals, networking and mentoring for junior women and increased visibility for the contributions of women in computing. Conference presenters are leaders in their respective fields, representing industry, academia and government. Top researchers present their work while special sessions focus on the role of women in today’s technology fields.

The three ebiquity lab students with posters this year are:

Automation of Cloud Services lifecycle by using Semantic technologies,
Karuna Pande Joshi

We have developed a new framework for automating the configuration, negotiation and procurement of services in a cloud computing environment using Semantic Web technologies. We have developed detailed ontologies for the framework and designed a prototype, called Smart Cloud Services, which is based on this framework and incorporates NIST’s policies on cloud computing. This prototype is integrated with different cloud platforms, such as Eucalyptus and VCL.

A Knowledge-Based Approach To Intrusion Detection Modeling,
M. Lisa Mathews

Current state-of-the-art intrusion detection and prevention systems (IDPS) are signature-based systems that detect threats and vulnerabilities by cross-referencing threat/vulnerability signatures in their databases. These systems are incapable of taking advantage of heterogeneous data sources when analyzing system activities for threat detection. This work presents a situation-aware intrusion detection model that integrates these heterogeneous data sources and builds a semantically rich knowledge base to detect cyber threats and vulnerabilities.

Unsupervised Coreference Resolution for FOAF Instances,
Jennifer Alexander Sleeman

Coreference resolution determines when two entity descriptions represent the same real-world entity. Friend of a Friend (FOAF) is an ontology for describing people and their social networks. Currently there is no easy way to recognize when two FOAF instances represent the same entity. Existing techniques that use supervised learning typically do not support incremental processing. I present an unsupervised approach that supports both heterogeneous data and incremental online processing.
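One simple sketch of unsupervised instance matching (an illustration of the problem, not necessarily the thesis's method) compares two FOAF instances by the Jaccard overlap of their property-value pairs; above a threshold, they become candidates for coreference. The instances, URLs, and threshold below are hypothetical.

```python
def jaccard(a, b):
    """Jaccard similarity over the property-value pairs of two instances."""
    pairs_a = set(a.items())
    pairs_b = set(b.items())
    union = pairs_a | pairs_b
    return len(pairs_a & pairs_b) / len(union) if union else 0.0

# Hypothetical FOAF instances harvested from two different sources: the
# names differ superficially, but two identifying properties agree.
inst1 = {"foaf:name": "J. Sleeman",
         "foaf:homepage": "http://example.org/js",
         "foaf:workplaceHomepage": "http://umbc.edu"}
inst2 = {"foaf:name": "Jennifer Sleeman",
         "foaf:homepage": "http://example.org/js",
         "foaf:workplaceHomepage": "http://umbc.edu"}

score = jaccard(inst1, inst2)
print(round(score, 2), score > 0.4)  # → 0.5 True
```

Because the score for a new instance can be computed against stored instances one at a time, a measure like this fits naturally into incremental online processing.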

2011 Hype Cycle for Emerging Technologies

August 24th, 2011, by Tim Finin, posted in GENERAL

The hype cycle concept has been used by IT consulting company Gartner since 1995 to highlight the common pattern of “overenthusiasm, disillusionment and eventual realism that accompanies each new technology and innovation.” While Gartner’s hype cycles represent one company’s opinions, the underlying concept seems right and it is always interesting to see where they place the current crop of computing related technologies.

Here is their 2011 hype cycle for emerging technologies, along with some comments from the accompanying press release.

“Themes from this year’s Emerging Technologies Hype Cycle include ongoing interest and activity in social media, cloud computing and mobile,” Ms. Fenn said. “On the social media side, social analytics, activity streams and a new entry for group buying are close to the peak, showing that the era of sky-high valuations for Web 2.0 startups is not yet over. Private cloud computing has taken over from more-general cloud computing at the top of the peak, while cloud/Web platforms have fallen toward the Trough of Disillusionment since 2010. Mobile technologies continue to be part of most of our clients’ short- and long-range plans and are present on this Hype Cycle in the form of media tablets, NFC payments, quick response (QR)/color codes, mobile application stores and location-aware applications.

Transformational technologies that will hit the mainstream in less than five years include highly visible areas, such as media tablets and cloud computing, as well as some that are more IT-specific, such as in-memory database management systems, big data, and extreme information processing and management. In the long term, beyond the five-year horizon, 3D printing, context-enriched services, the “Internet of Things” (called the “real-world Web” in earlier Gartner research), Internet TV and natural language question answering will be major technology forces. Looking more than 10 years out, 3D bioprinting, human augmentation, mobile robots and quantum computing will also drive transformational change in the potential of IT.”

You can get a copy of the Hype Cycle for Emerging Technologies Summary Report by giving your contact information, but the full report on this or any of the other 26 topical hype cycle reports will cost you money.
