April 11th, 2018
2018 Mid-Atlantic Student Colloquium on Speech, Language and Learning
The 2018 Mid-Atlantic Student Colloquium on Speech, Language and Learning (MASC-SLL) is a student-run, one-day event on speech, language & machine learning research to be held at the University of Maryland, Baltimore County (UMBC) from 10:00am to 6:00pm on Saturday, May 12. There is no registration charge, and lunch and refreshments will be provided. Students, postdocs, faculty and researchers from universities & industry are invited to participate and network with other researchers working in related fields.
Students and postdocs are encouraged to submit abstracts describing ongoing, planned, or completed research projects, including previously published results and negative results. Research in any field applying computational methods to any aspect of human language, including speech and learning, from all areas of computer science, linguistics, engineering, neuroscience, information science, and related fields is welcome. Submissions and presentations must be made by students or postdocs. Accepted submissions will be presented as either posters or talks.
Important Dates are:
- Submission deadline (abstracts): April 16
- Decisions announced: April 21
- Registration opens: April 10
- Registration closes: May 6
- Colloquium: May 12
May 31st, 2012
NIST will hold a Big Data Workshop 13-14 June 2012 in Gaithersburg to explore key national priority topics in support of the White House Big Data Initiative. The workshop is being held in collaboration with the NSF-sponsored Center for Hybrid Multicore Productivity Research, a collaboration between UMBC, Georgia Tech and UCSD.
This first workshop will discuss examples from science, health, disaster management, security, and finance, as well as topics in emerging technology areas such as analytics and architectures. Two issues of special interest are identifying which of the core technologies needed to collect, store, preserve, manage, analyze, and share big data could be standardized, and developing measurements to ensure the accuracy and robustness of big data methods.
The workshop format will be a mixture of sessions, panels, and posters. Session speakers and panel members are by invitation only but all interested parties are encouraged to submit extended abstracts and/or posters.
The workshop is being held at NIST’s Gaithersburg facility and is free, although online pre-registration is required. A preliminary agenda is available which is subject to change as the workshop date approaches.
September 2nd, 2011
The First Mid-Atlantic Student Colloquium on Speech, Language and Learning is a one-day event to be held at the Johns Hopkins University in Baltimore on Friday, 23 September 2011. Its goal is to bring together students taking computational approaches to speech, language, and learning, so that they can introduce their research to the local student community, give and receive feedback, and engage each other in collaborative discussion. Attendance is open to all and free but space is limited, so online registration is requested by September 16. The program runs from 10:00am to 5:00pm and will include oral presentations, poster sessions, and breakout sessions.
February 14th, 2011
There has been an ongoing discussion on the publication culture within the computer science research community in CACM, carried out through a series of editorials, opinion pieces, articles and letters. It covers the usual topics: the best role of workshops, conferences and journals, reviewer responsibility, the effect of deadlines on publications, etc. All important issues.
Jonathan Grudin has an opinion piece in the current (February) CACM:
Technology, conferences, and community. J. Grudin, 2011. Comm. of the ACM, 54, 2, 41-43.
He has also made available a list of the 16 recent CACM articles (with links) on the topic. It’s a list of papers worth reading.
November 3rd, 2010
The First Baltimore Hackathon will take place on Friday and Saturday, November 19-20, 2010 at Beehive Baltimore, 2400 Boston St, on the 3rd floor of the Emerging Technology Center.
Come to build a hardware or software project — from idea to prototype — in a weekend either individually or as part of a team! While you are hacking, you’ll enjoy free food and coffee and be eligible to win prizes and awards! If you are interested, sign up and use the Baltimore Hackathon wiki to share ideas and build a team or to list yourself as available to join an existing team.
Check out the TechinBaltimore Google group for more information and discussion about the hackathon and related technology events in and around Baltimore.
May 29th, 2010
The June 2010 CACM has an interesting article by Jilin Chen and Joseph Konstan of the University of Minnesota on Conference Paper Selectivity and Impact. The abstract gets right to the point:
“Studying the metadata of the ACM Digital Library (http://www.acm.org/dl), we found that papers in low-acceptance-rate conferences have higher impact than those in high-acceptance-rate conferences within ACM, where impact is measured by the number of citations received. We also found that highly selective conferences — those that accept 30% or less of submissions—are cited at a rate comparable to or greater than ACM Transactions and journals.”
A key paragraph later in the paper has some more detail:
“Addressing the second question— on how much impact conference papers have compared to journal papers — in Figures 3 and 4, we found that overall, journals did not outperform conferences in terms of citation count; they were, in fact, similar to conferences with acceptance rates around 30%, far behind conferences with acceptance rates below 25% (T-test, T = 24.8, p< .001). Similarly, journals published as many papers receiving no citations in the next two years as conferences accepting 35%–40% of submissions, a much higher low-impact percentage than for highly selective conferences. The same analyses over four- and eight-year periods yielded results consistent with the two-year period; journal papers received significantly fewer citations than conferences where the acceptance rate was below 25%.”
Impact of CS conferences vs. journals
We have to assume that this study is only applicable to computer science, for which the ACM Digital Library is a very good sample, and not to other disciplines (e.g., EE) or even to narrow sub-disciplines within CS. Different disciplines have very different publication patterns. But it does confirm our own anecdotal evidence from tracking citations to papers written in our ebiquity lab over the past ten years: those published in top conferences tend to get more citations than those in journals.
July 9th, 2009
Important dates:
- 10 Aug 09
- 19 Aug 09
- 2 Sept 09
- 26 Oct 09 (workshop)
Semantics for the Rest of Us: Variants of Semantic Web Languages in the Real World is a workshop that will be held on 26 October 2009 in Washington, DC.
The Semantic Web is a broad vision of the future of personal computing, emphasizing the use of sophisticated knowledge representation as the basis for end-user applications’ data modeling and management needs. Key to the pervasive adoption of Semantic Web technologies is a good set of fundamental “building blocks” – the most important of these are the representation languages themselves. W3C’s standard languages for the Semantic Web, RDF and OWL, have been around for several years. Instead of strict standards compliance, we see “variants” of these languages emerge in applications, often tailored to a particular application’s needs. These variants are often either subsets of OWL or supersets of RDF, typically with fragments of OWL added. Extensions based on rules, such as SWRL and N3 logic, have been developed, as well as enhancements to the SPARQL query language and protocol.
This workshop will explore the landscape of RDF, OWL and SPARQL variants, specifically from the standpoint of “real-world semantics”. Are there commonalities in these variants that might suggest new standards or new versions of the existing standards? We hope to identify common requirements of applications consuming Semantic Web data and understand the pros and cons of a strictly formal approach to modeling data versus a “scruffier” approach where semantics are based on application requirements and implementation restrictions.
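As a rough illustration of the kind of variant described above, the sketch below shows an application that consumes plain RDF triples but implements just one OWL fragment, owl:sameAs, rather than invoking a full OWL reasoner. The data and the ex:/dbpedia: names are hypothetical, and triples are modeled as plain Python tuples instead of going through any particular RDF toolkit.

```python
# A "superset of RDF with a fragment of OWL added": plain triples,
# plus application-level semantics for owl:sameAs only.
# Triples are (subject, predicate, object) tuples; names are hypothetical.

SAME_AS = "owl:sameAs"

def same_as_classes(triples):
    """Group resources into equivalence classes under owl:sameAs
    (symmetric + transitive closure), via a small union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for s, p, o in triples:
        if p == SAME_AS:
            union(s, o)
    return find

def materialize(triples):
    """Rewrite every triple onto a canonical representative of its
    sameAs class, so queries see the merged data."""
    find = same_as_classes(triples)
    return {(find(s), p, find(o)) for s, p, o in triples if p != SAME_AS}

triples = [
    ("ex:TimBL", "ex:worksAt", "ex:W3C"),
    ("ex:TimBL", SAME_AS, "dbpedia:Tim_Berners-Lee"),
    ("dbpedia:Tim_Berners-Lee", "ex:bornIn", "ex:London"),
]
merged = materialize(triples)
```

After merging, both facts attach to a single canonical resource: the application gets the one piece of OWL semantics it actually needs without committing to a full reasoner, which is the "scruffier" pattern the workshop asks about.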
The workshop will encourage active audience participation and discussion and will include a keynote speaker as well as a panel. Topics of interest include but are not limited to
- Real world applications that use (variants of) RDF, OWL, and SPARQL
- Use cases for different subsets/supersets of RDF, OWL, and SPARQL
- Extensions of SWRL and N3Logic
- RIF dialects
- How well do the current SW standards meet system requirements?
- Real world “semantic” applications using other structured representations (XML, JSON)
- Alternatives to RDF, OWL or SPARQL
- Are ad hoc subsets of SW languages leading to problems?
- What level of expressive power does the Semantic Web need?
- Does the Semantic Web require languages based on formal methods?
- How should standard Semantic Web languages be designed?
We seek two kinds of submissions: full papers up to ten pages long and position papers up to five pages long. Format papers according to the ISWC 2009 instructions. Accepted papers will be presented at the workshop and be part of the workshop proceedings.
March 19th, 2009
Yesterday the MIT faculty approved a university-wide open access policy. The full text of the resolution, which passed unanimously, is available on Peter Suber’s Open Access News blog. Here’s an excerpt.
“Each Faculty member grants to the Massachusetts Institute of Technology nonexclusive permission to make available his or her scholarly articles and to exercise the copyright in those articles for the purpose of open dissemination. In legal terms, each Faculty member grants to MIT a nonexclusive, irrevocable, paid-up, worldwide license to exercise any and all rights under copyright relating to each of his or her scholarly articles, in any medium, provided that the articles are not sold for a profit, and to authorize others to do the same. The policy will apply to all scholarly articles written while the person is a member of the Faculty except for any articles completed before the adoption of this policy and any articles for which the Faculty member entered into an incompatible licensing or assignment agreement before the adoption of this policy. … The Provost’s Office will make the scholarly article available to the public in an open-access repository. The Office of the Provost, in consultation with the Faculty Committee on the Library System, will be responsible for interpreting this policy, resolving disputes concerning its interpretation and application, and recommending changes to the Faculty.”
I have to say I am conflicted about this and wish I was more informed. As a researcher, I am 100% for the right to make papers describing our results freely available. But I also recognize that publishers and professional societies are an essential part of our research infrastructure and their business models are partially built on copyright and controlling access to content.
Just as we are seeing big changes in mainstream media, we will probably see related changes in publishers, including professional societies. We’ll have to wait and see if they represent a phase shift to a new and better model or simply the collapse of the old one.
The analogy between the two is far from perfect. Traditional mainstream media publishers pay a professional staff to research, write and edit stories. Journal publishers and professional societies don’t typically pay their authors, who increasingly deliver camera-ready or near-camera-ready electronic copy.
February 12th, 2009
Stimulus funding for research and science has done well in the version of the American Recovery and Reinvestment Act coming out of conference. The conference report overview identifies a category that will:
“Transform our Economy with Science and Technology: To secure America’s role as a world leader in a competitive global economy, we are renewing America’s investments in basic research and development, in training students for an innovation economy, and in deploying new technologies into the marketplace. This will help businesses in every community succeed in a global economy.”
The CRA policy blog has the details in House Numbers for Science Prevail in Stimulus Conference. Highlights of the $15B+ to be invested in scientific research include:
- Provides $3 billion for the National Science Foundation, for basic research in fundamental science and engineering – which spurs discovery and innovation.
- Provides $1.6 billion for the Department of Energy’s Office of Science, which funds research in such areas as climate science, biofuels, high-energy physics, nuclear physics and fusion energy sciences – areas crucial to our energy future.
- Provides $400 million for the Advanced Research Projects Agency-Energy (ARPA-E) to support high-risk, high-payoff research into energy sources and energy efficiency in collaboration with industry.
- Provides $580 million for the National Institute of Standards and Technology, including the Technology Innovation Program and the Manufacturing Extension Partnership.
- Provides $8.5 billion for NIH, including expanding good jobs in biomedical research to study diseases such as Alzheimer’s, Parkinson’s, cancer, and heart disease.
- Provides $1 billion for NASA, including $400 million to put more scientists to work doing climate change research.
- Provides $1.5 billion for NIH to renovate university research facilities and help them compete for biomedical research grants.
January 15th, 2009
The CRA reports that the US science and technology research community may get its own little bailout. The House Appropriations Committee released details of its American Recovery and Reinvestment economic stimulus package that includes funds for scientific research.
NSF is slated to get $3B in new money:
“including $2 billion for expanding employment opportunities in fundamental science and engineering to meet environmental challenges and to improve global economic competitiveness, $400 million to build major research facilities that perform cutting edge science, $300 million for major research equipment shared by institutions of higher education and other scientists, $200 million to repair and modernize science and engineering research facilities at the nation’s institutions of higher education and other science labs, and $100 million is also included to improve instruction in science, math and engineering”
The plan also calls for new research money for NIH, DOE, NASA, NIST and other government organizations as well as $6B for broadband deployment.
While this is not large as bailouts go, we must keep in mind it was done without a crisis brought about by the rampant use of research breakthrough default swap instruments or scholarly paper citation pyramid schemes. Maybe we should have gotten MBAs.
Update 1/16: The CRA policy blog has some more details on how the funds will be allocated within some of the agencies.