UMBC ebiquity

Archive for February, 2014

Google MOOC: Making Sense of Data

February 26th, 2014, by Tim Finin, posted in Big data, Google

Google is offering a free, online MOOC-style course, ‘Making Sense of Data’, from March 18 to April 4, taught by Amit Deutsch (Google) and Joe Hellerstein (Berkeley).

Interestingly, it doesn’t require programming or database skills: “Basic familiarity with spreadsheets and comfort using a web browser is recommended. Knowledge of statistics and experience with programming are not required.” The course will use Google’s Fusion Tables service for managing and visualizing data.

Stardog unleashed: MD Semantic Web Meetup, 6pm Thu 2/27

February 26th, 2014, by Tim Finin, posted in Semantic Web

The next Central MD Semantic Web Meetup will be held at 6:00pm on Thursday, February 27, 2014 at Inovex Information Systems (7240 Parkway Dr., Suite 140, Hanover MD). Michael Grove, the Chief Software Architect at Clark & Parsia, will talk on their Stardog triple store technology. The meetup is a good way to meet and network with others working on or with semantic technologies in Maryland.

“Stardog Unleashed will provide some background on the motivation for building Stardog, as well as a short review of its history and unique feature set. We will also provide an overview and demo of Stardog Web, a JavaScript framework for building web applications backed by semantic technologies.

Our speaker, Michael Grove, is the Chief Software Architect at Clark & Parsia, where he also serves as the lead developer of Stardog, the leader in RDF databases featuring fast query performance and unmatched OWL & SWRL support.

A graduate in Computer Science from the University of Maryland, College Park, Michael first got started with semantic technologies in 2002 as a research assistant in the MINDSWAP group under Dr. Jim Hendler. Before joining the team at Clark & Parsia, he worked at Fujitsu Research Labs as the lead developer for the Task Computing project, an effort to bring the Semantic Web to pervasive computing environments.

Michael is also active in open source, where he is a contributor to Pellet, the leading OWL DL reasoner, and maintains Empire, an implementation of JPA backed by semantic technologies. Additionally, he is a contributor to the Sesame project and active on the Jena development list.”
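
For readers who want to get a feel for Stardog before the meetup, here is a minimal sketch of querying a Stardog database from Python over the standard SPARQL 1.1 HTTP protocol, which Stardog supports. The server URL, database name ("mydb"), and credentials below are placeholders that assume a default local installation, not anything from the talk itself.

    import requests

    # Assumed local Stardog server and a database named "mydb" (placeholders).
    STARDOG_QUERY_URL = "http://localhost:5820/mydb/query"

    query = """
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
    """

    resp = requests.get(
        STARDOG_QUERY_URL,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        auth=("admin", "admin"),  # default credentials; change for any real deployment
    )
    resp.raise_for_status()

    # Print each result row as plain (subject, predicate, object) values.
    for row in resp.json()["results"]["bindings"]:
        print(row["s"]["value"], row["p"]["value"], row["o"]["value"])

Because the endpoint speaks the standard SPARQL protocol, the same request works from a browser-based JavaScript client, which is essentially what the Stardog Web framework mentioned above builds on.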

Tracking Provenance and Reproducibility of Big Data Experiments

February 8th, 2014, by Tim Finin, posted in Big data, High performance computing, Ontologies, Semantic Web

In the first Ebiquity meeting of the semester, Vlad Korolev will talk about his work on using RDF to capture, represent, and use provenance information for big data experiments.

PROB: A tool for Tracking Provenance and Reproducibility of Big Data Experiments

10:00-11:30am, ITE 346, UMBC

Reproducibility of computations and data provenance are very important goals to achieve in order to improve the quality of one’s research. Unfortunately, despite some efforts made in the past, it is still very hard to reproduce computational experiments with a high degree of certainty. The Big Data phenomenon of recent years makes this goal even harder to achieve. In this work, we propose a tool that helps researchers improve the reproducibility of their experiments through automated keeping of provenance records.
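
To give a flavor of what an RDF provenance record for an experiment run can look like, here is a minimal sketch using rdflib and the W3C PROV-O vocabulary; the PROB tool may represent things differently, and the run, script, and dataset identifiers below are made-up placeholders.

    from rdflib import Graph, Namespace, Literal, RDF, URIRef
    from rdflib.namespace import XSD

    PROV = Namespace("http://www.w3.org/ns/prov#")
    EX = Namespace("http://example.org/experiments/")  # hypothetical namespace

    g = Graph()
    g.bind("prov", PROV)
    g.bind("ex", EX)

    run = EX.run42                 # one execution of the experiment (placeholder)
    script = EX["wordcount.py"]    # the code that was run (placeholder)
    dataset = URIRef("http://example.org/data/input.gz")  # input data (placeholder)

    # The run is an activity that used the script and the input dataset.
    g.add((run, RDF.type, PROV.Activity))
    g.add((script, RDF.type, PROV.Entity))
    g.add((dataset, RDF.type, PROV.Entity))
    g.add((run, PROV.used, script))
    g.add((run, PROV.used, dataset))
    g.add((run, PROV.startedAtTime,
           Literal("2014-02-08T10:00:00", datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))

Serialized as Turtle, records like this can be loaded into a triple store and queried with SPARQL to answer questions such as which runs used a given dataset.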

