2015 Ontology Summit: Internet of Things: Toward Smart Networked Systems and Societies

January 14th, 2015

The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure.

The theme of the 2015 Ontology Summit is Internet of Things: Toward Smart Networked Systems and Societies. The Ontology Summit is an annual series of events (first started by Ontolog and NIST in 2006) that involve the ontology community and communities related to each year’s theme.

The 2015 Summit will hold a virtual discourse over the next three months via mailing lists and online panel sessions held as augmented conference calls. The Summit will culminate in a two-day face-to-face workshop on 13-14 April 2015 in Arlington, VA. The Summit’s goal is to explore how ontologies can play a significant role in the realization of smart networked systems and societies in the Internet of Things.

The Summit’s initial launch session will take place from 12:30pm to 2:00pm EST on Thursday, January 15th and will include overview presentations from each of the four technical tracks. See the 2015 Ontology Summit for more information, the schedule and details on how to participate in these free and open events.

True Knowledge launches Evi question answering mobile app

January 29th, 2012

UK semantic technology company True Knowledge has released Evi, a mobile app that competes with Siri.

The mobile app is available on the Android Market and on iTunes. You can pose queries to either by speaking or typing. The Android app uses Google’s ASR speech technology and the iTunes app uses Nuance.

True Knowledge has been developing a natural language question answering system since 2007. You can query True Knowledge online via a Web interface. Try the following links for some examples:

The Evi app has a number of additional features beyond the Web-based True Knowledge QA system, and these will probably be expanded on in the months to come.

See the Technology Review story, New Virtual Helper Challenges Siri, for more information.

Google lobbies Nevada to allow self-driving cars

May 11th, 2011

A story in yesterday’s NYT, Google Lobbies Nevada To Allow Self-Driving Cars, reports that Google has hired a Nevada lobbyist to promote two bills related to autonomous vehicles that are expected to be voted on this summer.

“Google hired David Goldwater, a lobbyist based in Las Vegas, to promote the two measures, which are expected to come to a vote before the Legislature’s session ends in June. One is an amendment to an electric-vehicle bill providing for the licensing and testing of autonomous vehicles, and the other is the exemption that would permit texting.”

Arguments the lobbyist offered included that “the autonomous technology would be safer than human drivers, offer more fuel-efficient cars and promote economic development.”

I’d add that the Google Bot has a clean driving record, exhibits an excellent sense of direction, will obey any laws inserted into a state’s robots.txt, and does not drink. However, the Google Bot’s current cars are all Toyotas and an Audi. Maybe the Nevada legislature should find a way to encourage it to support the US auto industry and buy some American cars.

I liked project leader Sebastian Thrun’s example of a potential benefit of autonomous vehicles.

“In frequent public statements, he has said robotic vehicles would increase energy efficiency while reducing road injuries and deaths. And he has called for sophisticated systems for car sharing that, he says, could cut the number of cars in the United States in half. “What if I could take out my phone and say, ‘Zipcar, come here,’ ” he asked an industry conference last year, “and a moment later the Zipcar came around the corner?””

AAAI-11 Workshop on Activity Context Representation: Techniques and Languages

March 14th, 2011

Mobile devices can provide better services if they can model, recognize and adapt to their users' context.

Pervasive, context-aware computing technologies can significantly enhance and improve the coming generation of devices and applications for consumer electronics as well as devices for work places, schools and hospitals. Context-aware cognitive support requires activity and context information to be captured, reasoned with and shared across devices — efficiently, securely, adhering to privacy policies, and with multidevice interoperability.

The AAAI-11 conference will host a two-day workshop on Activity Context Representation: Techniques and Languages focused on techniques and systems that allow mobile devices to model and recognize the activities and context of people and groups and then exploit those models to provide better services. The workshop will be held on August 7th and 8th in San Francisco as part of AAAI-11, the Twenty-Fifth Conference on Artificial Intelligence. Submissions of research papers and position statements are due by 22 April 2011.

The workshop intends to lay the groundwork for techniques to represent context within activity models using a synthesis of HCI/CSCW and AI approaches to reduce demands on people, such as the cognitive load inherent in activity/context switching, and to enhance human and device performance. It will explore activity and context modeling issues of capture, representation, standardization and interoperability for creating context-aware and activity-based assistive cognition tools, with topics including, but not limited to, the following:

  • Activity modeling, representation, detection
  • Context representation within activities
  • Semantic activity reasoning, search
  • Security and privacy
  • Information integration from multiple sources, ontologies
  • Context capture

There are three intended end results of the workshop: (1) Develop two to three key themes for research with specific opportunities for collaborative work. (2) Create a core research group forming an international academic and industrial consortium to significantly augment existing standards/drafts/proposals and create fresh initiatives to enable capture, transfer, and recall of activity context across multiple devices and platforms used by people individually and collectively. (3) Review and revise an initial draft of the structure of an activity context exchange language (ACEL), including identification of use cases, domain-specific instantiations needed, and drafts of initial reasoning schemes and algorithms.
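
Since ACEL exists only as a draft goal, here is a purely hypothetical sketch of what a shareable activity-context record might contain; every field, name and the privacy rule below are my own invention, not part of any draft standard.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityContext:
    """Hypothetical activity-context record in the spirit of the
    proposed ACEL; all fields here are illustrative assumptions."""
    activity: str                          # e.g. "commute", "status-meeting"
    actors: list                           # people or agents involved
    devices: list                          # devices currently sharing the context
    location: str = "unknown"
    shareable_with: list = field(default_factory=list)  # toy privacy policy

    def may_share(self, device_id):
        """Interoperability hook: may this context move to another device?"""
        return device_id in self.devices or device_id in self.shareable_with

ctx = ActivityContext(
    activity="commute",
    actors=["alice"],
    devices=["alice-phone"],
    location="train",
    shareable_with=["alice-laptop"],
)
print(ctx.may_share("alice-laptop"), ctx.may_share("bob-phone"))
```

The point of making such a record explicit is that capture, transfer and privacy checks become operations on a shared data structure rather than ad hoc device behavior.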

For more information, see the workshop call for papers.

Lisp bots win Planet Wars Google AI Challenge

December 2nd, 2010

top programming languages in Planet Wars
The Google-supported Planet Wars AI Challenge had over 4000 entries that used AI and game theory to compete against one another. C at the R-Chart blog analyzed the programming languages used by the contestants, with some interesting results.

The usual suspects were the most popular languages used: Java, C++, Python, C# and PHP. The winner, Hungarian Gábor Melis, was just one of 33 contestants who used Lisp. Even less common were entries in C, but the 18 “C hippies” did remarkably well.

Blogger C wonders if Lisp was the special sauce:

Paul Graham has stated that Java was designed for “average” programmers while other languages (like Lisp) are for good programmers. The fact that the winner of the competition wrote in Lisp seems to support this assertion. Or should we see Mr. Melis as an anomaly who happened to use Lisp for this task?

Google robot-controlled car frees users to text

October 9th, 2010

No, this is not an article from The Onion: Google really is working on a computer-controlled car. Two articles in tomorrow’s New York Times describe a research project at Google on developing an autonomous vehicle. Here is a picture of the prototype.

Google autonomous vehicle

In the science section, John Markoff has a story, Google Cars Drive Themselves, in Traffic.

“Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving. The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.”

A companion article, also by Markoff, has some additional material, including this interesting note on the current approach.

“One main technique used by the Google team is known as SLAM, or simultaneous localization and mapping, which builds and updates a map of a vehicle’s surroundings while keeping the vehicle located within the map. To make a SLAM map, the car is first driven manually along a route while its sensors capture location, feature and obstacle data. Then a group of software engineers annotates the maps, making certain that road signs, crosswalks, street lights and unusual features are all embedded. The cars then drive autonomously over the mapped routes, recording changes as they occur and updating the map. The researchers said they were surprised to find how frequently the roads their robots drove on had changed.”
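
Real SLAM rests on probabilistic state estimation, but the map-then-update workflow Markoff describes can be caricatured in a few lines. All names, coordinates and the change-detection rule below are my own illustration, not Google's system.

```python
def update_map(road_map, observations, tolerance=0.5):
    """Merge one autonomous pass's observations into a prior map.

    road_map and observations map landmark ids (e.g. 'stop_sign_3') to
    (x, y) positions; returns the ids of landmarks that moved or are
    newly seen, mirroring the 'record changes as they occur' step.
    """
    changed = []
    for lid, (x, y) in observations.items():
        old = road_map.get(lid)
        if old is None or abs(old[0] - x) + abs(old[1] - y) > tolerance:
            changed.append(lid)
            road_map[lid] = (x, y)         # update the shared map in place
    return changed

# Map built on the initial manual, human-annotated drive:
prior = {"stop_sign_3": (12.0, 4.0), "crosswalk_7": (30.0, 4.5)}
# A later autonomous pass sees the stop sign moved and a new traffic light:
seen = {"stop_sign_3": (14.0, 4.0), "crosswalk_7": (30.1, 4.4), "light_1": (45.0, 5.0)}
print(update_map(prior, seen))
```

The researchers' surprise at how often roads changed corresponds here to the `changed` list rarely being empty between passes.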

The project was the idea of Stanford computer science professor Sebastian Thrun, who is also a Principal Engineer at Google, where he helped invent the Street View mapping service. Thrun led the Stanford team that developed the Stanley robot car, which won the 2005 DARPA Grand Challenge, a competition focused on developing autonomous vehicle technology.

It’s not clear what the business case for this Google research project is. But Google has the cash and the intellectual capital to actually develop something in this space that can make money.

In a Google blog post from earlier today, What we’re driving at, Thrun gives one motivation.

“Larry and Sergey founded Google because they wanted to help solve really big problems using technology. And one of the big problems we’re working on today is car safety and efficiency. Our goal is to help prevent traffic accidents, free up people’s time and reduce carbon emissions by fundamentally changing car use.

So we have developed technology for cars that can drive themselves. Our automated cars, manned by trained operators, just drove from our Mountain View campus to our Santa Monica office and on to Hollywood Boulevard. They’ve driven down Lombard Street, crossed the Golden Gate bridge, navigated the Pacific Coast Highway, and even made it all the way around Lake Tahoe. All in all, our self-driving cars have logged over 140,000 miles. We think this is a first in robotics research.”

update: Techcrunch has an article speculating on the possible business applications, World-Changing Awesome Aside, How Will The Self-Driving Google Car Make Money?.

An agent-based model of the peer-review process

September 19th, 2010

The peer review process is central to most research disciplines and is used in the selection of papers for publication and research proposals for funding.

A new paper by Stefan Thurner and Rudolf Hanel develops an agent-based model of the scientific peer review process, Peer-review in a world with rational scientists: Toward selection of the average.

“… we are interested in the effects of rational referees, who might not have any incentive to see high quality work other than their own published or promoted. We find that a small fraction of incorrect (selfish or rational) referees can drastically reduce the quality of the published (accepted) scientific standard. We quantify the fraction for which peer review will no longer select better than pure chance. Decline of quality of accepted scientific work is shown as a function of the fraction of rational and unqualified referees. We show how a simple quality-increasing policy of e.g. a journal can lead to a loss in overall scientific quality, and how mutual support-networks of authors and referees deteriorate the system.”

Their agent model has several referee types:

  • The correct: Accepts good and rejects bad papers.
  • The stupid: This referee cannot judge the quality of a paper (e.g., because of incompetence or lack of time) and makes a random decision on it.
  • The rational: The rational referee knows that work better than his/her own might draw attention away from his/her own work. For him there is no incentive to accept anything better than one’s own work, while it might be fine to accept worse quality.
  • The altruist: Accepts all papers.
  • The misanthropist: Rejects all papers.
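
Out of curiosity, these five referee behaviors can be dropped into a toy acceptance simulation. This is my own sketch, not Thurner and Hanel's actual model; the uniform quality scale, the 0.5 threshold and the two-referee unanimity rule are simplifying assumptions.

```python
import random

def review(rng, ref_type, paper_q, own_q, threshold=0.5):
    """True if a referee of the given type accepts a paper of quality paper_q."""
    if ref_type == "correct":
        return paper_q >= threshold          # accepts good, rejects bad
    if ref_type == "stupid":
        return rng.random() < 0.5            # random decision
    if ref_type == "rational":
        return paper_q <= own_q              # nothing better than own work
    if ref_type == "altruist":
        return True
    return False                             # misanthropist rejects everything

def mean_accepted_quality(pool, n_papers=20000, seed=1):
    """Average quality of papers that two randomly drawn referees both accept."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_papers):
        q = rng.random()                     # paper quality, uniform in [0, 1]
        if all(review(rng, rng.choice(pool), q, own_q=rng.random())
               for _ in range(2)):
            accepted.append(q)
    return sum(accepted) / len(accepted) if accepted else 0.0

all_correct = mean_accepted_quality(["correct"] * 10)
mixed = mean_accepted_quality(["correct"] * 6 + ["rational"] * 2 + ["stupid"] * 2)
print(round(all_correct, 2), round(mixed, 2))  # the mixed panel accepts lower-quality work
```

Even this crude version reproduces the paper's qualitative point: seeding the pool with a few rational or stupid referees visibly drags down the quality of what gets accepted.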

I’ve known them all, as I am sure many of us have. As an editor or program chair I’ve met a few other types, including these:

  • The Bartleby: His or her response to an invitation is always “I would prefer not to.”
  • The Black Hole: Messages go in and nothing ever comes out.
  • The Gary Cooper: A person of few words, even when many are called for.
  • The Perseverator: Sees all sides of any decision and keeps all carefully in balance. Usually recommends “major revision”.

I am sure I’ve overlooked some — suggest your own via a comment.

(h/t Shlomo Argamon)

Call for bids to host AAMAS-2013

September 16th, 2010

This is a call for bids to host the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS) in 2013. Bids will be considered from all geographical regions; however, for the 2013 conference, we particularly encourage bids from the Americas.

Bids are sought from volunteers from the scientific community, though they may be supported by paid meeting professionals.

All correspondence regarding bids should be directed by email to the IFAAMAS Conference Committee Chair (Munindar P. Singh, singh@ncsu.edu) and Chair Elect (Onn Shehory, ONN@il.ibm.com).

Bids should be made by individuals or small groups, with the backing of a host institution, typically a university or research center. Groups or individuals who are planning to submit a bid should notify Drs. Singh and Shehory of their intention as soon as possible.

  • Now: Expression of interest and queries
  • November 17, 2010: Submission of final bid
  • November 18, 2010-February 28, 2011: Potential discussions with bidders; internal discussions in the IFAAMAS Board
  • March 1, 2011: Decision

See the full AAMAS-2013 call for bids for more information.

Prisoners Dilemma and the Golden Balls game show

October 25th, 2009

Golden Balls is a UK game show with a final round, Split or Steal, that is similar to the prisoner’s dilemma. The two contestants have to simultaneously choose to split the prize or try to steal it. If both choose split, they each get half. If one chooses split and the other steal, then the stealer gets it all. If they both choose steal, neither gets anything. While the payoff matrix is not exactly that for the PD, it has a similar effect on the strategy. Check out this video of a Split or Steal round for £100,000. (Spotted on Hacker News)
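
The Split or Steal payoff rule just described is compact enough to write down directly; the function name and the £100,000 default are my own choices for the sketch.

```python
def payoff(a, b, prize=100_000):
    """Payoffs (to a, to b) in Split or Steal; choices are 'split' or 'steal'."""
    if a == "split" and b == "split":
        return prize / 2, prize / 2        # both split: share the prize
    if a == "steal" and b == "split":
        return prize, 0                    # stealer takes everything
    if a == "split" and b == "steal":
        return 0, prize
    return 0, 0                            # both steal: nobody wins

# Unlike the classic PD, stealing is only *weakly* dominant here:
# against a stealer, splitting and stealing both pay zero.
for me in ("split", "steal"):
    for other in ("split", "steal"):
        print(me, other, payoff(me, other))
```

That weak (rather than strict) dominance is exactly why the game feels PD-like without having the PD's payoff matrix.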

RAEng report on Social, legal and ethical issues of autonomous systems

August 21st, 2009

The Royal Academy of Engineering has released a report on the social, legal and ethical issues involving autonomous systems — systems that are adaptive, learn and can make decisions without the intervention or supervision of a human.

The report, Autonomous Systems: Social, Legal and Ethical Issues (pdf), was based on a roundtable discussion “from a wide range of experts, looking at the areas where autonomous systems are most likely to emerge first, and discussing the broad ethical issues surrounding their uptake.”

While autonomous systems have broad applicability, the report focuses on two areas: transportation (e.g. autonomous road vehicles) and personal care (e.g., smart homes).

“Autonomous systems, such as fully robotic vehicles that are “driverless” or artificial companions that can provide practical and emotional support to isolated people, have a level of self-determination and decision making ability with the capacity to learn from past performance. Autonomous systems do not experience emotional reactions and can therefore perform better than humans in tasks that are dull, risky or stressful. However they bring with them a new set of ethical problems. What if unpredicted behaviour causes harm? If an unmanned vehicle is involved in an accident, who is responsible – the driver or the systems engineer? Autonomous vehicles could provide benefits for road transport with reduced congestion and safety improvements but there is a lack of a suitable legal framework to address issues such as insurance and driver responsibility.

The technologies for smart homes and patient monitoring are already in existence and provide many benefits to older people, such as allowing them to remain in their own home when recovering from an illness, but they could also lead to isolation from family and friends. Some users may be unfamiliar with the technologies and be unable to give consent to their use.”

The RAEng report recommends “engaging early in public consultation” and working to establish “appropriate regulation and governance so that controls are put in place to guide the development of these systems”.

rdf:SeeAlso Autonomous tech ‘requires debate’; Scientists ponder rules and ethics of robo helpers; Robot cats could care for older Britons.

(via Mike Wooldridge)

Often in error, rarely in doubt: confidence trumps expertise

June 14th, 2009

New Scientist reports on a recent paper by CMU psychologist Don Moore that shows that people prefer advice from confident sources even when they have a poor track record.

Moore argues that in competitive situations, this can drive those offering advice to increasingly exaggerate how sure they are. And it spells bad news for scientists who try to be honest about gaps in their knowledge.

In Moore’s experiment, volunteers were given cash for correctly guessing the weight of people from their photographs. In each of the eight rounds of the study, the guessers bought advice from one of four other volunteers. The guessers could see in advance how confident each of these advisers was (see table), but not which weights they had opted for.

Describing his work at an Association for Psychological Science meeting in San Francisco last month, Moore said that following the advice of the most confident person often makes sense, as there is evidence that precision and expertise do tend to go hand in hand. For example, people give a narrower range of answers when asked about subjects with which they are more familiar.

Why aren’t we better at recognizing over-confidence? There must be some evolutionary fitness in this, at least for humans. There can be a big penalty for indecision or vacillation. I wonder if we will see the same phenomenon in systems of cooperating autonomous agents?

Here’s the paper:

Joseph R. Radzevick and Don A. Moore, Competing To Be Certain (But Wrong): Social Pressure and Overprecision in Judgment, 21st Annual Convention of the Association for Psychological Science, May 2009.

Overprecision in judgment is both the most robust and the least understood form of overconfidence. Overly precise judgments claim more certainty than is objectively warranted. In this paper, we investigate whether the competitive social pressure of a market contributes to overprecision among those competing for influence. We find evidence that markets do indeed exacerbate overprecision. This evidence comes from two experiments in which advisors attempt to sell their advice. In the first experiment, advisors must compete with other advice sellers. In the second, advisors and decision makers are paired. Overprecision exists in both studies, and it helps advisors sell their advice. However, the market also exacerbates overprecision. We discuss the strategic implications of these results.

Google Wave as a new communication model

May 28th, 2009

Google Wave looks interesting. Google describes it as “a new tool for communication and collaboration on the web” and it’s a funny mix of email, instant messaging, wikis, and Facebook wall interactions. Or maybe IRC for the new century. This is from a post, Went Walkabout. Brought back Google Wave, on the Google blog.

“A “wave” is equal parts conversation and document, where people can communicate and work together with richly formatted text, photos, videos, maps, and more. Here’s how it works: In Google Wave you create a wave and add people to it. Everyone on your wave can use richly formatted text, photos, gadgets, and even feeds from other sources on the web. They can insert a reply or edit the wave directly. It’s concurrent rich-text editing, where you see on your screen nearly instantly what your fellow collaborators are typing in your wave. That means Google Wave is just as well suited for quick messages as for persistent content — it allows for both collaboration and communication. You can also use “playback” to rewind the wave and see how it evolved.”

Google Wave is not available yet, but you can sign up to be notified when it’s launched.

Here’s a random thought. Our models for communication in multiagent systems (e.g., KQML and FIPA) were informed by, if not based on, email and, to a lesser degree, IM. If Wave is a useful new communication model for humans, does it have a counterpart for software agents? If so, I suspect that ideas from the Semantic Web will be useful to provide “rich content” for agents.
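
To make that speculation a bit more concrete, here is a toy sketch of a wave-like shared conversation object for agents: instead of point-to-point performative messages, agents append to one persistent artifact. The class, the performative labels and the playback method are my own illustration, not KQML, FIPA, or the actual Wave protocol.

```python
class Wave:
    """Toy shared-conversation object: participating agents edit one
    persistent artifact rather than exchanging point-to-point messages."""

    def __init__(self, participants):
        self.participants = set(participants)
        self.blips = []                    # ordered history of contributions

    def append(self, sender, performative, content):
        """Add a contribution; only participants may edit the wave."""
        assert sender in self.participants, "must join the wave first"
        self.blips.append((sender, performative, content))

    def playback(self):
        """Replay the wave's full history, as in Wave's playback feature."""
        return list(self.blips)

w = Wave({"broker", "seller"})
w.append("broker", "ask", "price of widget?")
w.append("seller", "tell", "price is 10")
print(len(w.playback()))
```

The interesting shift is that the conversation itself becomes a first-class, replayable object that late-joining agents could inspect, which point-to-point messaging does not give you for free.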

For more views, see posts by o’reilly, techcrunch, BusinessWeek and Gabor Cselle.