UMBC ebiquity
Computing Research

Archive for the 'Computing Research' Category

NIST Big Data Workshop, 13-14 June 2012

May 31st, 2012, by Tim Finin, posted in cloud computing, Conferences

NIST will hold a Big Data Workshop 13-14 June 2012 in Gaithersburg to explore key national priority topics in support of the White House Big Data Initiative. The workshop is being held in collaboration with the NSF sponsored Center for Hybrid Multicore Productivity Research, a collaboration between UMBC, Georgia Tech and UCSD.

This first workshop will discuss examples from science, health, disaster management, security, and finance as well as topics in emerging technology areas, including analytics and architectures. Two issues of special interest are (1) identifying the core technologies needed to collect, store, preserve, manage, analyze, and share big data that could be standardized, and (2) developing measurements to ensure the accuracy and robustness of big data methods.

The workshop format will be a mixture of sessions, panels, and posters. Session speakers and panel members are by invitation only but all interested parties are encouraged to submit extended abstracts and/or posters.

The workshop is being held at NIST’s Gaithersburg facility and is free, although online pre-registration is required. A preliminary agenda is available which is subject to change as the workshop date approaches.

Mid-Atlantic student colloquium on speech, language and learning

September 2nd, 2011, by Tim Finin, posted in AI, Conferences, KR, Machine Learning, NLP

The First Mid-Atlantic Student Colloquium on Speech, Language and Learning is a one-day event to be held at the Johns Hopkins University in Baltimore on Friday, 23 September 2011. Its goal is to bring together students taking computational approaches to speech, language, and learning, so that they can introduce their research to the local student community, give and receive feedback, and engage each other in collaborative discussion. Attendance is open to all and free but space is limited, so online registration is requested by September 16. The program runs from 10:00am to 5:00pm and will include oral presentations, poster sessions, and breakout sessions.

Computer Science publication culture

February 14th, 2011, by Tim Finin, posted in Computing Research, CS, Semantic Web

There has been an ongoing discussion of the publication culture within the computer science research community in CACM, carried out through a series of editorials, opinion pieces, articles and letters. It covers the usual topics — the best role of workshops, conferences and journals, reviewer responsibility, the effect of deadlines on publications, and so on. All important issues.

Jonathan Grudin has an opinion piece in the current (February) issue of CACM:

Technology, conferences, and community. J. Grudin, 2011. Comm. of the ACM, 54, 2, 41-43.

He has also made available a list of the 16 recent CACM articles (with links) on the topic. It’s a list of papers worth reading.

First Baltimore Hackathon, 19-21 Nov 2010

November 3rd, 2010, by Tim Finin, posted in Conferences, GENERAL, Technology

The First Baltimore Hackathon will take place on Friday and Saturday, November 19-20, 2010 at Beehive Baltimore, 2400 Boston St, on the 3rd floor of the Emerging Technology Center.

Come to build a hardware or software project — from idea to prototype — in a weekend either individually or as part of a team! While you are hacking, you’ll enjoy free food and coffee and be eligible to win prizes and awards! If you are interested, sign up and use the Baltimore Hackathon wiki to share ideas and build a team or to list yourself as available to join an existing team.

Check out the TechinBaltimore Google group for more information and discussion about the hackathon and related technology events in and around Baltimore.

CS conference selectivity and impact

May 29th, 2010, by Tim Finin, posted in Conferences, CS

The June 2010 CACM has an interesting article by Jilin Chen and Joseph Konstan of the University of Minnesota on Conference Paper Selectivity and Impact. The abstract gets right to the point:

“Studying the metadata of the ACM Digital Library (http://www.acm.org/dl), we found that papers in low-acceptance-rate conferences have higher impact than those in high-acceptance-rate conferences within ACM, where impact is measured by the number of citations received. We also found that highly selective conferences — those that accept 30% or less of submissions—are cited at a rate comparable to or greater than ACM Transactions and journals.”

A key paragraph later in the paper has some more detail:

“Addressing the second question— on how much impact conference papers have compared to journal papers — in Figures 3 and 4, we found that overall, journals did not outperform conferences in terms of citation count; they were, in fact, similar to conferences with acceptance rates around 30%, far behind conferences with acceptance rates below 25% (T-test, T[7603] = 24.8, p< .001). Similarly, journals published as many papers receiving no citations in the next two years as conferences accepting 35%–40% of submissions, a much higher low-impact percentage than for highly selective conferences. The same analyses over four- and eight-year periods yielded results consistent with the two-year period; journal papers received significantly fewer citations than conferences where the acceptance rate was below 25%."


Impact of CS conferences vs. journals

We have to assume that this study applies only to computer science, for which the ACM Digital Library is a very good sample, and not to other disciplines (e.g., EE) or even to narrow sub-disciplines within CS. Different disciplines have very different publication patterns. But it does confirm our own anecdotal evidence from tracking citations to papers written in our ebiquity lab over the past ten years — those published in top conferences tend to get more citations than those in journals.
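Out of curiosity, here is a rough sketch (not the authors' code) of how such a comparison could be run over a hypothetical export of ACM Digital Library metadata, assuming columns for venue type, acceptance rate, and two-year citation counts:

    # Rough sketch only, not the study's actual code. Assumes a hypothetical
    # CSV of ACM DL metadata with columns: venue_type ('conference' or
    # 'journal'), acceptance_rate (a fraction, NaN for journals), and
    # citations_2yr (citations received within two years).
    import pandas as pd
    from scipy import stats

    papers = pd.read_csv("acm_dl_metadata.csv")  # hypothetical export

    # Bucket conference papers by their venue's acceptance rate.
    conf = papers[papers.venue_type == "conference"].copy()
    conf["bucket"] = pd.cut(conf.acceptance_rate,
                            bins=[0.0, 0.25, 0.30, 0.40, 1.0],
                            labels=["<25%", "25-30%", "30-40%", ">40%"])
    print(conf.groupby("bucket", observed=True)["citations_2yr"].mean())

    # Compare journal papers against papers from the most selective
    # conferences, in the spirit of the quoted T-test.
    journals = papers.loc[papers.venue_type == "journal", "citations_2yr"]
    selective = conf.loc[conf.acceptance_rate < 0.25, "citations_2yr"]
    res = stats.ttest_ind(selective, journals, equal_var=False)
    print(f"t = {res.statistic:.1f}, p = {res.pvalue:.3g}")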

CFP: Semantics for the rest of us Workshop at 8th Int. Semantic Web Conference

July 9th, 2009, by Tim Finin, posted in Conferences, iswc, OWL, RDF, Semantic Web, Web
IMPORTANT DATES
  • Submissions: 10 Aug 09
  • Notification: 19 Aug 09
  • Final copy: 2 Sept 09
  • Workshop: 26 Oct 09

Semantics for the Rest of Us: Variants of Semantic Web Languages in the Real World is a workshop that will be held at the Eighth International Semantic Web Conference (ISWC 2009) on 26 October 2009 in Washington, DC.

The Semantic Web is a broad vision of the future of personal computing, emphasizing the use of sophisticated knowledge representation as the basis for end-user applications’ data modeling and management needs. Key to the pervasive adoption of Semantic Web technologies is a good set of fundamental “building blocks” – the most important of these are the representation languages themselves. W3C’s standard languages for the Semantic Web, RDF and OWL, have been around for several years. Instead of strict standards compliance, we see “variants” of these languages emerge in applications, often tailored to a particular application’s needs. These variants are often either subsets of OWL or supersets of RDF, typically with fragments of OWL added. Extensions based on rules, such as SWRL and N3 logic, have been developed, as well as enhancements to the SPARQL query language and protocol.
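As a concrete (and purely illustrative) example of such a variant, the snippet below mixes one OWL construct (owl:sameAs) into otherwise plain RDF and queries it with SPARQL using the Python rdflib library; the data and URIs are made up for this sketch.

    # Illustrative sketch: plain RDF with a single OWL construct mixed in,
    # i.e., a "superset of RDF with a fragment of OWL" as described above.
    from rdflib import Graph

    turtle = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix ex:   <http://example.org/> .

    ex:alice a foaf:Person ;
             foaf:name "Alice" ;
             owl:sameAs <http://example.org/staff/a42> .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")

    # A plain SPARQL query; rdflib does no OWL reasoning by default, so what
    # owl:sameAs means is left to the application, which is exactly the kind
    # of "real-world semantics" the workshop asks about.
    q = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?p a foaf:Person ; foaf:name ?name . }
    """
    for row in g.query(q):
        print(row.name)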

This workshop will explore the landscape of RDF, OWL and SPARQL variants, specifically from the standpoint of “real-world semantics”. Are there commonalities in these variants that might suggest new standards or new versions of the existing standards? We hope to identify common requirements of applications consuming Semantic Web data and understand the pros and cons of a strictly formal approach to modeling data versus a “scruffier” approach where semantics are based on application requirements and implementation restrictions.

The workshop will encourage active audience participation and discussion and will include a keynote speaker as well as a panel. Topics of interest include but are not limited to

  • Real world applications that use (variants of) RDF, OWL, and SPARQL
  • Use cases for different subsets/supersets of RDF, OWL, and SPARQL
  • Extensions of SWRL and N3Logic
  • RIF dialects
  • How well do the current SW standards meet system requirements?
  • Real world “semantic” applications using other structured representations (XML, JSON)
  • Alternatives to RDF, OWL or SPARQL
  • Are ad hoc subsets of SW languages leading to problems?
  • What level of expressive power does the Semantic Web need?
  • Does the Semantic Web require languages based on formal methods?
  • How should standard Semantic Web languages be designed?

We seek two kinds of submissions: full papers up to ten pages long and position papers up to five pages long. Format papers according to the ISWC 2009 instructions. Accepted papers will be presented at the workshop and be part of the workshop proceedings.


MIT adopts universal open access policy

March 19th, 2009, by Tim Finin, posted in Computing Research, GENERAL

Yesterday the MIT faculty approved a university-wide open access policy. The full text of the resolution, which passed unanimously, is available on Peter Suber’s Open Access News blog. Here’s an excerpt.

“Each Faculty member grants to the Massachusetts Institute of Technology nonexclusive permission to make available his or her scholarly articles and to exercise the copyright in those articles for the purpose of open dissemination. In legal terms, each Faculty member grants to MIT a nonexclusive, irrevocable, paid-up, worldwide license to exercise any and all rights under copyright relating to each of his or her scholarly articles, in any medium, provided that the articles are not sold for a profit, and to authorize others to do the same. The policy will apply to all scholarly articles written while the person is a member of the Faculty except for any articles completed before the adoption of this policy and any articles for which the Faculty member entered into an incompatible licensing or assignment agreement before the adoption of this policy. … The Provost’s Office will make the scholarly article available to the public in an open-access repository. The Office of the Provost, in consultation with the Faculty Committee on the Library System will be responsible for interpreting this policy, resolving disputes concerning its interpretation and application, and recommending changes to the Faculty.”

I have to say I am conflicted about this and wish I was more informed. As a researcher, I am 100% for the right to make papers describing our results freely available. But I also recognize that publishers and professional societies are an essential part of our research infrastructure and their business models are partially built on copyright and controlling access to content.

Just as we are seeing big changes in mainstream media, we will probably see related changes among publishers, including professional societies. We’ll have to wait and see whether they represent a phase shift to a new and better model or simply the collapse of the old one.

The analogy between the two is far from perfect. Traditional mainstream media publishers pay a professional staff to research, write, and edit stories. Journal publishers and professional societies don’t typically pay their authors, who increasingly deliver camera-ready or near camera-ready electronic copy.

NSF and science increments survive stimulus conference

February 12th, 2009, by Tim Finin, posted in Computing Research, Funding

Stimulus funding for research and science has done well in the version of the American Recovery and Reinvestment Act coming out of conference. The conference report overview identifies a category that will:

“Transform our Economy with Science and Technology: To secure America’s role as a world leader in a competitive global economy, we are renewing America’s investments in basic research and development, in training students for an innovation economy, and in deploying new technologies into the marketplace. This will help businesses in every community succeed in a global economy.”

The CRA policy blog has the details in House Numbers for Science Prevail in Stimulus Conference. Highlights of the $15B+ to be invested in scientific research include:

  • Provides $3 billion for the National Science Foundation, for basic research in fundamental science and engineering – which spurs discovery and innovation.
  • Provides $1.6 billion for the Department of Energy’s Office of Science, which funds research in such areas as climate science, biofuels, high-energy physics, nuclear physics and fusion energy sciences – areas crucial to our energy future.
  • Provides $400 million for the Advanced Research Project Agency-Energy (ARPA-E) to support high-risk, high-payoff research into energy sources and energy efficiency in collaboration with industry.
  • Provides $580 million for the National Institute of Standards and Technology, including the Technology Innovation Program and the Manufacturing Extension Partnership.
  • Provides $8.5 billion for NIH, including expanding good jobs in biomedical research to study diseases such as Alzheimer’s, Parkinson’s, cancer, and heart disease.
  • Provides $1 billion for NASA, including $400 million to put more scientists to work doing climate change research.
  • Provides $1.5 billion for NIH to renovate university research facilities and help them compete for biomedical research grants.

US House stimulus plan: NSF += $3B

January 15th, 2009, by Tim Finin, posted in Computing Research, Funding

The CRA reports that the US science and technology research community may get its own little bailout. The House Appropriations Committee released details of their American Recovery and Reinvestment economic stimulus package that includes funds for scientific research.

NSF is slated to get $3B in new money:

“including $2 billion for expanding employment opportunities in fundamental science and engineering to meet environmental challenges and to improve global economic competitiveness, $400 million to build major research facilities that perform cutting edge science, $300 million for major research equipment shared by institutions of higher education and other scientists, $200 million to repair and modernize science and engineering research facilities at the nation’s institutions of higher education and other science labs, and $100 million is also included to improve instruction in science, math and engineering”

The plan also calls for new research money for NIH, DOE, NASA, NIST and other government organizations as well as $6B for broadband deployment.

While this is not large as bailouts go, we must keep in mind it was done without a crisis brought about by the rampant use of research breakthrough default swap instruments or scholarly paper citation pyramid schemes. Maybe we should have gotten MBAs.

Update 1/16: The CRA policy blog has some more details on how the funds will be allocated within some of the agencies.

Eigenfactor.org measures and visualizes journal impact

December 19th, 2008, by Tim Finin, posted in Computing Research, Semantic Web, Social media

eigenfactor.org is a fascinating site that is exploring new ways to measure and visualize the importance of journals to scientific communities. The site is a result of work by the Bergstrom lab in the Department of Biology at the University of Washington. The project defines two metrics for scientific journals based on a PageRank-like algorithm applied to citation graphs.

“A journal’s Eigenfactor score is our measure of the journal’s total importance to the scientific community. With all else equal, a journal’s Eigenfactor score doubles when it doubles in size. Thus a very large journal such as the Journal of Biological Chemistry which publishes more than 6,000 articles annually, will have extremely high Eigenfactor scores simply based upon its size. Eigenfactor scores are scaled so that the sum of the Eigenfactor scores of all journals listed in Thomson’s Journal Citation Reports (JCR) is 100.

A journal’s Article Influence score is a measure of the average influence of each of its articles over the first five years after publication. Article Influence measures the average influence, per article, of the papers in a journal. As such, it is comparable to Thomson Scientific’s widely-used Impact Factor. Article Influence scores are normalized so that the mean article in the entire Thomson Journal Citation Reports (JCR) database has an article influence of 1.00.”
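To make the idea concrete, here is a toy sketch of the underlying PageRank-style computation; it is not eigenfactor.org's actual algorithm (which, among other things, excludes self-citations and uses a five-year citation window), and the citation counts are made up.

    # Toy illustration of an Eigenfactor-like score, not the real method.
    import numpy as np

    # cites[i, j] = citations from journal i's articles to journal j
    # (made-up numbers for three hypothetical journals; self-citations omitted).
    cites = np.array([[0, 30, 10],
                      [20, 0, 5],
                      [50, 15, 0]], dtype=float)

    # Column-stochastic matrix: M[j, i] = fraction of journal i's
    # outgoing citations that point at journal j.
    M = cites.T / cites.sum(axis=1)

    # Power iteration with a small teleportation term, PageRank-style.
    alpha, n = 0.85, M.shape[0]
    score = np.full(n, 1.0 / n)
    for _ in range(100):
        score = alpha * M @ score + (1 - alpha) / n

    # Scale like Eigenfactor scores: all journals together sum to 100.
    print(100 * score / score.sum())

An Article Influence-style score would then divide each journal's share by its share of published articles and rescale so that the database-wide mean is 1.00, matching the description quoted above.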

For example, here are the ISI-indexed journals in the AI subject category ranked by the Article Influence score for 2006.

The site makes good use of Google Docs’ motion charts to visualize changes in the metrics for top journals in a subject area. You can also interactively explore maps that show the influence of different subject categories on one another as estimated from journal citations.

Map of Science

The details of the approach and algorithms are available in various papers by Bergstrom and his colleagues, such as

M. Rosvall and C. T. Bergstrom, Maps of random walks on complex networks reveal community structure, Proceedings of the National Academy of Sciences USA. 105:1118-1123. Also arXiv physics.soc-ph/0707.0609v3 [PDF]

(spotted on Steve Hsu’s blog)

Database researchers identify hot research topics

August 25th, 2008, by Tim Finin, posted in Computing Research, Database, Ontologies, Semantic Web, Social media

Databases are a fundamental technology for most information systems, especially those based on the web. A group of senior database researchers met recently to assess the state of database research, as documented on the meeting’s site. So, where did the Semantic Web fit into their vision?

“In late May, 2008, a group of database researchers, architects, users and pundits met at the Claremont Resort in Berkeley, California to discuss the state of the research field and its impacts on practice. This was the seventh meeting of this sort in twenty years, and was distinguished by a broad consensus that we are at a turning point in the history of the field, due both to an explosion of data and usage scenarios, and to major shifts in computing hardware and platforms. Given these forces, we are at a time of opportunity for research impact, with an unusually large potential for influential results across computing, the sciences and society. This report details that discussion, and highlights the group’s consensus view of new focus areas, including new database engine architectures, declarative programming languages, the interplay of structured and unstructured data, cloud data services, and mobile and virtual worlds.”

On the site you can read the post-meeting report, view the participants’ presentations and talks on DB research directions, and discuss the report on a Google group.

It’s a good report with lots of interesting things in it and definitely worth reading, but I was disappointed to find that it makes no mention of the Semantic Web, RDF, OWL, ontologies, AI, knowledge bases, or reasoning. Here’s a word cloud of the report (generated with Wordle), which provides a 10,000-foot view of its content.


word cloud generated from The Claremont Database Research Self-Assessment Meeting report
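For anyone who wants to reproduce this kind of figure, here is a minimal sketch using the Python wordcloud package as a stand-in for the Wordle web tool; the input and output file names are hypothetical.

    # Minimal word cloud sketch; file names are hypothetical placeholders.
    from wordcloud import WordCloud

    with open("claremont_report.txt") as f:  # plain-text dump of the report
        text = f.read()

    wc = WordCloud(width=800, height=400, background_color="white").generate(text)
    wc.to_file("claremont_wordcloud.png")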

The report says that it was “surprisingly easy for the group to reach consensus on a set of research topics to highlight for investigation in coming years”. Those topics are:

  • Revisiting Database Engines
  • Declarative Programming for Emerging Platforms
  • The Interplay of Structured and Unstructured Data
  • Cloud Data Services
  • Mobile Applications and Virtual Worlds

There is clearly overlap between the database and semantic web communities in the first three topics.

More cuts to DARPA budget

July 23rd, 2008, by Tim Finin, posted in Computing Research, Funding

Wired reports more cuts to DARPA’s budget in Pentagon Slices and Dices DARPA Budget.

“The Pentagon’s storied research and development arm turned 50 years old this year, and its birthday present appears to be another $100 million in budget cuts, according to a Defense Department document provided to DANGER ROOM. The Defense Advanced Research Projects Agency (DARPA) is having a tumultuous financial year: in June, DARPA faced a $32 million cut because it was “underexecuting”, leading the agency’s director, Tony Tether, to strike back by saying the Pentagon’s “comptroller apparently does not believe in accountability.”
   Whether those comments sparked an all-out comptroller-DARPA war is open for speculation, but the latest “reprogramming,” signed on July 11, may speak for itself. The document includes a number of Pentagon-wide cash transfers, but it hits DARPA particularly hard. Cognitive computing systems, which has previously been hit by congressional cuts, will lose another $13 million, while Network Centric Technology is sliced by $19 million. Another $18 million is being diced from biological warfare defense, and a big cut is taken out of DARPA’s Electronics Technology program, which loses $26 million. The cuts also indicate that DARPA’s high power fiber laser program has apparently been canceled.”

To put a $100M cut in context, the yearly DARPA budgets have been over $3B recently. Still, many of these cuts will be painful within specific R&D communities.
