Archive for the 'Machine Learning' Category
January 18th, 2014, by Tim Finin, posted in Big data, Datamining, Machine Learning, Semantic Web
A free PDF version of the new second edition of Mining of Massive Datasets by Anand Rajaraman, Jure Leskovec and Jeffrey Ullman is available. New chapters on mining large graphs, dimensionality reduction, and machine learning have been added. Related material from Professor Leskovec’s recent Stanford course on Mining Massive Data Sets is also available.
December 4th, 2013, by Tim Finin, posted in Google, Machine Learning, NLP
A post on Google’s research blog lists the major datasets for NLP and KB processing that Google has released in the past year. They include datasets to help in entity linking, relation extraction, concept spotting and syntactic analysis. Subscribe to the Knowledge Data Releases mailing list for updates.
August 2nd, 2013, by Tim Finin, posted in AI, Machine Learning, NLP
The third Mid-Atlantic Student Colloquium on Speech, Language and Learning will be held at UMBC on Fri. 11 Oct 2013, bringing together students, postdocs, faculty and researchers from universities in the Mid-Atlantic area doing research on speech, language or machine learning. It is an opportunity for students and postdocs to present preliminary, ongoing or completed work and to network with other researchers working in related fields.
The first MASC-SLL was held in 2011 at Johns Hopkins University and the second in 2012 at the University of Maryland, College Park. This year the event will be held at the University of Maryland, Baltimore County (UMBC) in Baltimore, MD from 9:30 to 5:00 on Friday, 11 October 2013. There will be no registration charge and lunch and refreshments will be provided.
Students and postdocs are encouraged to submit abstracts describing ongoing, planned, or completed research projects, including previously published results and negative results. Research in any field applying computational methods to any aspect of human language, including speech and learning, from all areas of computer science, linguistics, engineering, neuroscience, information science, and related fields, is welcome. All accepted submissions will be presented as posters and some will also be invited for short oral presentations. Student-led breakout sessions will also be held to discuss papers or topics of interest and stimulate interaction and discussion. Suggest breakout session topics via EasyChair.
May 1st, 2013, by Tim Finin, posted in Machine Learning, NLP, Semantic Web
The UMBC WebBase corpus is a dataset of high quality English paragraphs containing over three billion words derived from the Stanford WebBase project’s February 2007 Web crawl. Compressed, its size is about 13GB. We have found it useful for building statistical language models that characterize English text found on the Web.
The February 2007 Stanford WebBase crawl is one of their largest collections and contains 100 million web pages from more than 50,000 websites. The Stanford WebBase project did an excellent job in extracting textual content from HTML tags but there are still many instances of text duplications, truncated texts, non-English texts and strange characters.
We processed the collection to remove undesired sections and produce high-quality English paragraphs. We detected paragraphs using heuristic rules and retained only those whose length was at least two hundred characters. We eliminated non-English text by checking whether the first twenty words of a paragraph were valid English words. We used the percentage of punctuation characters in a paragraph as a simple check for typical text. Finally, we removed duplicate paragraphs using a hash table. The result is a corpus with approximately three billion words of good quality English.
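The filtering pipeline described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual processing code: the tiny ENGLISH_WORDS lexicon and the specific thresholds are placeholder assumptions.

```python
# Sketch of the corpus-cleaning heuristics: minimum length, English-word
# check on the first twenty tokens, punctuation-ratio check, and
# hash-based duplicate removal. ENGLISH_WORDS stands in for a real
# English dictionary.

ENGLISH_WORDS = {"the", "act", "of", "a", "is", "in", "to", "and"}  # placeholder lexicon
PUNCT = set("!\"#$%&'()*+,-./:;<=>?@[]^_`{|}~")

def looks_english(paragraph, n=20, threshold=0.5):
    """Check whether most of the first n tokens are valid English words."""
    words = paragraph.lower().split()[:n]
    if not words:
        return False
    valid = sum(1 for w in words if w.strip("".join(PUNCT)) in ENGLISH_WORDS)
    return valid / len(words) >= threshold

def punctuation_ratio(paragraph):
    return sum(1 for c in paragraph if c in PUNCT) / max(len(paragraph), 1)

def filter_paragraphs(paragraphs, min_len=200, max_punct=0.1):
    seen = set()                                 # hash table for duplicate removal
    for p in paragraphs:
        if len(p) < min_len:                     # drop short paragraphs
            continue
        if not looks_english(p):                 # drop non-English text
            continue
        if punctuation_ratio(p) > max_punct:     # drop atypical text
            continue
        h = hash(p)
        if h in seen:                            # drop exact duplicates
            continue
        seen.add(h)
        yield p
```

In the real pipeline the dictionary, thresholds, and paragraph-detection rules would of course be far more carefully chosen.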
The corpus is available as a 13G compressed tar file which is about 48G when uncompressed. It contains 408 files with paragraphs extracted from web pages, one to a line with blank lines between them. A second set of 408 files have the same paragraphs, but with the words tagged with their part of speech (e.g., The_DT Option_NN draws_VBZ on_IN modules_NNS from_IN all_PDT the_DT).
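Given the word_TAG format shown above, a small helper for reading the tagged files might look like this (illustrative only; this function is not part of the corpus distribution):

```python
# Split a line of word_TAG tokens, as found in the part-of-speech-tagged
# files, into (word, tag) pairs. rpartition is used so that words
# containing underscores keep everything before the last one.

def parse_tagged_line(line):
    """Parse a line of word_TAG tokens into (word, tag) pairs."""
    pairs = []
    for token in line.split():
        word, _, tag = token.rpartition("_")  # tag follows the last underscore
        pairs.append((word, tag))
    return pairs

# Example from the post:
print(parse_tagged_line("The_DT Option_NN draws_VBZ on_IN modules_NNS"))
```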
The dataset has been used in several projects. If you use the dataset, please refer to it by citing the following paper, which describes it and its use in a system that measures the semantic similarity of short text sequences.
Download the corpus from here.
March 25th, 2013, by Tim Finin, posted in Machine Learning, NLP, Semantic Web
The results of the 2013 Semantic Textual Similarity task (STS) are out. We were happy to find that our system did very well on the core task, placing first out of the 35 participating teams. The three runs we submitted were ranked first, second and third in the overall summary score.
Congratulations are in order for Lushan Han and Abhay Kashyap, the two UMBC doctoral students whose research and hard work produced a very effective system.
The STS task
The STS core task is to take two sentences and return a score between 0 and 5 representing how similar the sentences are, with a larger number meaning higher similarity. Compared with word similarity, sentence similarity is harder to define and different people may have different views.
The STS task provides a reasonable and interesting definition. More importantly, the Pearson correlation scores for human raters using Amazon Mechanical Turk on the 2012 STS gold standard datasets were about 0.90, almost the same as the inter-rater agreement level of 0.9026 on the well-known Miller-Charles word similarity dataset. This shows that human raters largely agree on the definitions used in the scale.
- 5: The sentences are completely equivalent, as they mean the same thing, e.g., “The bird is bathing in the sink” and “Birdie is washing itself in the water basin”.
- 4: The sentences are mostly equivalent, but some unimportant details differ, e.g., “In May 2010, the troops attempted to invade Kabul” and “The US army invaded Kabul on May 7th last year, 2010”.
- 3: The sentences are roughly equivalent, but some important information differs or is missing, e.g., “John said he is considered a witness but not a suspect.” and “‘He is not a suspect anymore.’ John said.”
- 2: The sentences are not equivalent, but share some details, e.g., “They flew out of the nest in groups” and “They flew into the nest together”.
- 1: The sentences are not equivalent, but are on the same topic, e.g., “The woman is playing the violin” and “The young lady enjoys listening to the guitar”.
- 0: The sentences are on different topics, e.g., “John went horse back riding at dawn with a whole group of friends” and “Sunrise at dawn is a magnificent view to take in if you wake up early enough for it”.
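Since agreement and system quality are both measured by Pearson correlation against these gold 0–5 judgments, a minimal self-contained version of that computation, using made-up toy scores, looks like this:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: gold judgments vs. a hypothetical system's scores.
gold = [5.0, 4.0, 3.0, 1.0, 0.0]
system = [4.8, 4.1, 2.5, 1.2, 0.3]
print(round(pearson(gold, system), 4))
```

A correlation near 1.0 means the system ranks and scales sentence pairs much as the human raters do.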
The STS datasets
There were 86 runs submitted from more than 35 teams. Each team could submit up to three runs over sentence pairs drawn from the following four datasets.
- Headlines (750 pairs): a collection of pairs of headlines mined from several news sources by European Media Monitor using the RSS feed, e.g., “Syrian rebels move command from Turkey to Syria” and “Free Syrian Army moves headquarters from Turkey to Syria”.
- SMT (750 pairs): a collection of sentence pairs from the DARPA GALE program, where one sentence is the output of a machine translation system and the other is a reference translation provided by a human, for example, “The statement, which appeared on a website used by Islamists, said that Al-Qaeda fighters in Islamic Maghreb had attacked three army centers in the town of Yakouren in Tizi-Ouzo” and the sentence “the pronouncement released that the mujaheddin of al qaeda in islamic maghreb countries attacked 3 stations of the apostates in city of aekorn in tizi ouzou , which was posted upon the web page used by islamists”.
- OnWN (561 pairs): a collection of sentence pairs describing word senses, one from OntoNotes and another from WordNet, e.g., “the act of advocating or promoting something” and “the act of choosing or selecting”.
- FNWN (189 pairs): a collection of pairs of sentences describing word senses, one from FrameNet and another from WordNet, for example: “there exist a number of different possible events that may happen in the future. in most cases, there is an agent involved who has to consider which of the possible events will or should occur. a salient_entity which is deeply involved in the event may also be mentioned” and “doing as one pleases or chooses;”.
Our three systems
We used a different system for each of our three allowed runs: PairingWords, Galactus and Saiyan. While they shared much of the same infrastructure, each used a different mix of ideas and features.
- PairingWords was built using hybrid word similarity features derived from LSA and WordNet. It used a simple algorithm to pair words and phrases in the two sentences and computed the average word similarity of the resulting pairs, imposing penalties on unmatched words weighted by their part of speech and log frequency. No training data was used. An online demonstration system is available for experimenting with the underlying word similarity model used by this approach.
- Galactus used unigrams, bigrams, trigrams and skip bigrams derived from the two sentences and paired them with the highest similarity based on exact string match and corpus- and WordNet-based similarity metrics. These, along with contrast scores derived from antonym pairs, were used as features to train a support vector regression model to predict the similarity scores.
- Saiyan was a fine-tuned version of Galactus that used domain-specific features and training data to train a support vector regression model to predict the similarity scores. (Scores for FNWN were taken directly from the PairingWords run.)
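The pairing-and-averaging idea behind the PairingWords run can be sketched as follows. This is a toy reconstruction: TOY_SIM is a made-up similarity table standing in for the hybrid LSA/WordNet model, and the part-of-speech and log-frequency penalties are omitted.

```python
# Toy sketch of greedy word pairing and averaging. The real system used
# a hybrid LSA + WordNet word-similarity model and also penalized
# unmatched words by part of speech and log frequency.

TOY_SIM = {
    ("bird", "birdie"): 0.9,
    ("bathing", "washing"): 0.8,
    ("sink", "basin"): 0.7,
}

def word_sim(w1, w2):
    """Made-up word similarity: exact match scores 1.0, else table lookup."""
    if w1 == w2:
        return 1.0
    return TOY_SIM.get((w1, w2), TOY_SIM.get((w2, w1), 0.0))

def sentence_sim(s1, s2):
    """Greedily pair each word in s1 with its most similar still-unused
    word in s2 and average the resulting pair similarities."""
    words1, words2 = s1.lower().split(), s2.lower().split()
    available = list(words2)
    sims = []
    for w1 in words1:
        if not available:
            sims.append(0.0)          # unmatched word gets no credit
            continue
        best = max(available, key=lambda w2: word_sim(w1, w2))
        sims.append(word_sim(w1, best))
        available.remove(best)
    return sum(sims) / len(sims)

print(sentence_sim("the bird is bathing in the sink",
                   "birdie is washing itself in the basin"))
```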
Here’s how our three runs ranked (out of 86) on each of the four different data sets and on the overall task (mean).
Over the next two weeks we will write a short system paper for *SEM 2013, the Second Joint Conference on Lexical and Computational Semantics.
Eneko Agirre, Daniel Cer, Mona Diab and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A pilot on semantic textual similarity. In Proc. 6th Int. Workshop on Semantic Evaluation (SemEval 2012), held in conjunction with the First Joint Conf. on Lexical and Computational Semantics (*SEM 2012), Montreal, Canada.
Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proc. 14th Int. Joint Conf. on Artificial Intelligence.
March 8th, 2013, by Tim Finin, posted in Google, Machine Learning, NLP, Semantic Web
Google released the Wikilinks Corpus, a collection of 40M disambiguated mentions from 10M web pages to 3M Wikipedia pages. This data can be used to train systems that do entity linking and cross-document co-reference, problems that Google researchers attacked with an earlier version of this data (see Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models).
You can download the data as ten 175MB files; some additional tools are available from UMass.
This is yet another example of the important role that Wikipedia continues to play in building a common, machine-usable semantic substrate for human conceptualizations.
March 7th, 2013, by Tim Finin, posted in AI, Google, Machine Learning, Semantic Web
Google Sets was the result of an early Google research project that ended in 2011. The idea was to recognize the similarity of a set of terms (e.g., python, lisp and fortran) and automatically identify other similar terms (e.g., c, java, php). Surprisingly (to me), the results of the project live on as an undocumented feature in Google Docs spreadsheets. Try putting a few of the seven deadly sins into a Google spreadsheet and use the feature to see what else you should not do (e.g., creating fire with alchemy, I guess).
Google, of course, continues to work on expanding their use of semantic information, currently through efforts like the Google Knowledge Graph, Freebase, Microdata and Fusion Tables. Other companies, including Microsoft, IBM and a host of startups, are also hard at work on similar projects.
February 3rd, 2013, by Tim Finin, posted in Machine Learning, Semantic Web
The popular KDnuggets news site for analytics, data mining and data science asked its visitors “What will replace ‘Big Data’ as a hot buzzword?” and the most popular choice was “smart data”. I’m not sure what people meant by that, but can only imagine that it includes data that have explicit or implicit semantics and are annotated with metadata like temporal qualifications, provenance and certainty factors. These are all capabilities of current semantic web technologies, especially when coupled with machine learning for additional inference.
(h/t Kingsley Idehen)
January 10th, 2013, by Tim Finin, posted in AI, Machine Learning, NLP, Semantic Web
Computing semantic similarity between words and phrases has important applications in natural language processing, information retrieval, and artificial intelligence. There are two prevailing approaches to computing word similarity, based either on a thesaurus (e.g., WordNet) or on statistics from a large corpus. We provide a hybrid approach combining the two methods, demonstrated on a website through two services: one that returns a similarity score for two words or phrases and another that takes a word and shows a ranked list of the most similar words.
Our statistical method is based on distributional similarity and Latent Semantic Analysis. We further complement it with semantic relations extracted from WordNet. The whole process is automatic and can be trained using different corpora. We assume the semantics of a phrase is compositional over its component words and apply an algorithm to compute the similarity between two phrases using word similarity.
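The distributional part of the approach can be illustrated with a toy example: words are represented as vectors derived from corpus co-occurrence statistics (via LSA in the actual system), and similarity is the cosine of those vectors. The three-dimensional vectors below are invented; real LSA vectors have hundreds of dimensions learned from the corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up toy "LSA" vectors for illustration only.
vectors = {
    "car":        [0.90, 0.10, 0.20],
    "automobile": [0.85, 0.15, 0.25],
    "banana":     [0.10, 0.90, 0.05],
}

print(round(cosine(vectors["car"], vectors["automobile"]), 3))  # high
print(round(cosine(vectors["car"], vectors["banana"]), 3))      # low
```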
The algorithms, implementation and data for this work were developed by Lushan Han as part of his research on developing easier ways to query linked open data collections. It was supported by grants from AFOSR (FA9550-08-1-0265), NSF (IIS-1250627) and a gift from Microsoft. Contact umbcsim at cs.umbc.edu for more information.
December 13th, 2012, by Tim Finin, posted in cloud computing, High performance computing, Machine Learning
The Center for Hybrid Multicore Productivity Research is a collaborative research center sponsored by the National Science Foundation with two university partners (UMBC and University of California San Diego), six government, and seven industry members. The Center's research is focused on addressing productivity, performance, and scalability issues in meeting the insatiable computational demands of its members' applications through the continuous evolution of multicore architectures and open source tools.
As part of its annual industrial advisory board meeting next week, the center will hold an afternoon of public tutorials from 1:00pm to 4:00pm on Monday, 17 December 2012 in room 456 of the ITE building at UMBC. The tutorials will be presented by students doing research sponsored by the Center and feature some of the underlying technologies being used and some of their applications. The tutorials are:
- GPGPUs – Tim Blattner and Fahad Zafa
- Cloud Policies – Karuna Joshi
- Human Sensors Networks – Oleg Aulov
- Machine Learning Disaster Warnings – Han Dong
- Graph 500 – Tyler Simon
- HBase – Phuong Nguyen
The tutorial talks are free and open to the public. If you plan to attend, please RSVP by email to Dr. Valerie L. Thomas, email@example.com.
September 22nd, 2011, by Tim Finin, posted in AI, Machine Learning
Genetic information for chronic disease prediction
Michael A. Grasso, MD, PhD
University of Maryland School of Medicine
1:00pm Friday 23 September 2011, 227 ITE
Type 2 diabetes and coronary artery disease are commonly occurring polygenic-multifactorial diseases, which are responsible for significant morbidity and mortality. The identification of people at risk for these conditions has historically been based on clinical factors alone. However, this resulted in prediction algorithms that are linked to symptomatic states, which have limited accuracy in asymptomatic individuals. Advances in genetics have raised the hope that genetic testing may aid in disease prediction, treatment, and prevention. Although intuitive, the addition of genetic information to increase the accuracy of disease prediction remains an unproven hypothesis. We present an overview of genetic issues involved in polygenic-multifactorial diseases, and summarize ongoing efforts to use this information for disease prediction.
Michael Grasso is an Assistant Professor of Internal Medicine and Emergency Medicine at the University of Maryland School of Medicine, and an Assistant Research Professor of Computer Science at the University of Maryland Baltimore County. He earned a medical degree from the George Washington University and a PhD in Computer Science from the University of Maryland. He is a member of the Upsilon Pi Epsilon Honor Society in the Computing Sciences, the Kane-King-Dodec Medical Honor Society, and the William Beaumont Medical Research Honor Society. He completed a residency at the University of Maryland School of Medicine, and currently works in the Department of Emergency Medicine at the University of Maryland Medical Center. He has been awarded more than $1,200,000 in grant funding from the National Institutes of Health, the National Institute of Standards and Technology, and the Department of Defense, and has authored more than 35 scholarly papers and abstracts. His research interests include clinical decision support systems, clinical data mining, clinical image processing, personalized medicine, software engineering, database engineering, and human factors. He is also a semi-professional trumpet player and is interested in the specific medical needs of performing artists, especially instrumental musicians.
Host: Yelena Yesha
September 11th, 2011, by Tim Finin, posted in Machine Learning, Semantic Web, Social media
Many Google+ users have been reporting frequent notices about new followers whom they don’t know and who appear to be attractive young women. The suspicious followers have minimal profiles and no posts. These are obviously false accounts being created for some as yet unknown purpose, but how can one prove it?
I just got a notice, for example, that Janet Smith of Philadelphia is following me. Now Janet Smith is a common name and Philadelphia is a big place — there are probably hundreds of people who live in the Philadelphia area with that name. The 990 other people she’s following seem like a pretty random bunch, though I do know many and have more than a few in my own circles. Most seem to have a fair number of followers.
So there is not much to go on other than her profile image. This is a great use for Google’s new image search. I dragged the picture into the image search query field and Google identified its best guess for the image as Indian actress Koyel Mullick. Sure enough, if you search for images with her name, the precise Janet Smith image is result number 15.
Of course, there are still some subtle issues. This is just one kind of false profile — one created for one identity but using an image from a different one. It’s common on most social media systems, including G+, for some people to use a picture of someone or something other than themselves. But it’s obvious to a human viewer that using a picture of a rabbit, Marilyn Monroe or the mighty Thor on your profile is not meant to deceive. It will be challenging to automate the process of discriminating the intent to deceive from modesty, homage or an ironic gesture.
You are currently browsing the archives for the Machine Learning category.