UMBC ebiquity
Mobile Computing

Archive for the 'Mobile Computing' Category

Memoto lifelogging camera

March 9th, 2013, by Tim Finin, posted in Mobile Computing, Privacy

Memoto is a $279 lifelogging camera that takes a geotagged photo every 30 seconds, holds 6K photos, and runs for several days without recharging. Memoto is made by a Swedish company, initially funded via Kickstarter, which expects to start shipping the wearable camera in April 2013. The company will also offer “safe and secure infinite photo storage at a flat monthly fee, which will always be a lot more affordable than hard drives.”

The lifelogging idea has been around for many years but has yet to become popular. One reason is privacy concerns. DARPA’s IPTO office, for example, started a LifeLog program that was canceled in 2004, almost immediately after criticism from civil libertarians concerning the privacy implications of the system.

How many #ifihadglass posts were there?

February 28th, 2013, by Tim Finin, posted in Gadgets, Mobile Computing, Wearable Computing

UMBC CSEE department members submitted a number of #ifihadglass posts hoping to get an invitation to pre-order a Google Glass device. Several came from the UMBC Ebiquity Lab including this one that builds on our work with context-aware mobile phones.

Reports are that as many as 8,000 of the submitted ideas will be invited to the first round of pre-orders. To get a rough idea of our odds, I tried using Google and Bing searches to estimate the number of submissions. A general search for pages with the #ifihadglass tag returned 249K hits on Google. Of these, 21K were from Twitter and fewer than 4K from Google+. I’m not sure which of the Twitter and Google+ posts get indexed or how long indexing takes, but I do know that our entry above did not show up in the results. Bing reported 171K results for a search on the hashtag, but our post was not among them either. I tried the native search services on both Twitter and Google+, but these are oriented toward delivering a stream of new results and neither gives an estimate of the total number of results. I suppose one could do this for Twitter using its search API, but even then I am not sure how accurately one could estimate the total number of matching tweets.

Can anyone suggest an easy way to estimate the number of #ifihadglass posts on Twitter and Google+?

True Knowledge launches Evi question answering mobile app

January 29th, 2012, by Tim Finin, posted in Agents, AI, Mobile Computing, NLP, Semantic Web

UK semantic technology company True Knowledge has released Evi, a mobile app that competes with Siri.

The mobile app is available on the Android Market and on iTunes. You can pose queries to either version by speaking or typing. The Android app uses Google’s speech recognition technology and the iTunes app uses Nuance’s.

True Knowledge has been developing a natural language question answering system since 2007. You can query True Knowledge online via a Web interface. Try the following links for some examples:

The Evi app has a number of additional features beyond the Web-based True Knowledge QA system, and these will probably be expanded in the months to come.

See the Technology Review story, New Virtual Helper Challenges Siri, for more information.

NFC and Google’s mobile wallet

October 7th, 2011, by Tim Finin, posted in Gadgets, Mobile Computing

Yesterday I made a purchase at the CVS store on Edmondson Avenue in Catonsville using Google Wallet on a Nexus S 4G phone with NFC.

NFC is near field communication, an RFID-related technology that allows communication and data exchange between two devices in close proximity, e.g., within a few inches.

Several current smartphones have NFC chips, including Samsung's Google-branded Nexus S 4G, and more are expected to include them in the coming months and years.

The first, and perhaps most significant, use of NFC will be enabling mobile phones to serve as "virtual credit cards", especially for small amounts that don't require a signature. The range of potential applications is much greater and will no doubt evolve as mobile NFC-enabled devices become ubiquitous.

Buying something at the CVS (OK, … it was candy) this way was fun. My phone made satisfying noises as it talked to CVS's payment station, and the clerk, who had never seen anyone use an NFC device, was properly mystified. Using it was only marginally easier than swiping a credit card, but maybe even a small amount of increased convenience is worth it for such an everyday transaction.

One limitation of Google Wallet is that it currently only works with Sprint on a Nexus S 4G and with either a Citi® MasterCard® card or a Google Prepaid Card. You can load money into the latter with most any credit card and Google will get you started by adding $10 to it as an incentive.

By the way, for what it’s worth, I only recently realized that the robots in Philip K. Dick’s novel “Do Androids Dream of Electric Sheep?” were called androids and that the dangerously independent new model was the Nexus-6, designed by the Tyrell Corporation.

AAAI-11 Workshop on Activity Context Representation: Techniques and Languages

March 14th, 2011, by Tim Finin, posted in Agents, AI, KR, Mobile Computing, Pervasive Computing, Semantic Web

Mobile devices can provide better services if they can model, recognize and adapt to their users' context.

Pervasive, context-aware computing technologies can significantly enhance and improve the coming generation of devices and applications for consumer electronics as well as devices for work places, schools and hospitals. Context-aware cognitive support requires activity and context information to be captured, reasoned with and shared across devices — efficiently, securely, adhering to privacy policies, and with multidevice interoperability.

The AAAI-11 conference will host a two-day workshop on Activity Context Representation: Techniques and Languages, focused on techniques and systems that allow mobile devices to model and recognize the activities and context of people and groups and then exploit those models to provide better services. The workshop will be held on August 7th and 8th in San Francisco as part of AAAI-11, the Twenty-Fifth Conference on Artificial Intelligence. Submissions of research papers and position statements are due by 22 April 2011.

The workshop intends to lay the groundwork for techniques to represent context within activity models, using a synthesis of HCI/CSCW and AI approaches, to reduce demands on people (such as the cognitive load inherent in activity/context switching) and to enhance human and device performance. It will explore activity and context modeling issues of capture, representation, standardization and interoperability for creating context-aware and activity-based assistive cognition tools, with topics including, but not limited to, the following:

  • Activity modeling, representation, detection
  • Context representation within activities
  • Semantic activity reasoning, search
  • Security and privacy
  • Information integration from multiple sources, ontologies
  • Context capture

There are three intended end results of the workshop: (1) develop two to three key themes for research with specific opportunities for collaborative work; (2) create a core research group forming an international academic and industrial consortium to significantly augment existing standards, drafts and proposals and to create fresh initiatives enabling the capture, transfer, and recall of activity context across the multiple devices and platforms people use individually and collectively; and (3) review and revise an initial draft of the structure of an activity context exchange language (ACEL), including identification of use cases, needed domain-specific instantiations, and drafts of initial reasoning schemes and algorithms.

For more information, see the workshop call for papers.

Journal of Web Semantics special issues on context and mobility

March 6th, 2011, by Tim Finin, posted in Mobile Computing, Semantic Web

The Journal of Web Semantics has announced two new special issues to be published in 2012.

An issue on Reasoning with context in the Semantic Web seeks papers by June 15, 2011 and will be published in the Spring of 2012. The special issue will be edited by Alan Bundy and Jos Lehmann of the University of Edinburgh and Ivan Varzinczak of the Meraka Institute.

An issue on The Semantic Web in a Mobile World will accept submissions until October 1, 2011 and will be published in September 2012. The special issue will be edited by Ansgar Scherp of the University of Koblenz-Landau and Anupam Joshi of the University of Maryland, Baltimore County.

Android to support near field communication

November 15th, 2010, by Tim Finin, posted in Google, Mobile Computing, Pervasive Computing, RFID

As TechCrunch and others report, Google’s Eric Schmidt announced that the next version of Android (Gingerbread 2.3) will support near field communication. What?

Wikipedia explains that NFC refers to RFID and RFID-like technology commonly used for contactless smart cards, mobile ticketing, and mobile payment systems.

“Near Field Communication, or NFC, is a short-range, high-frequency wireless communication technology which enables the exchange of data between devices over about a 10 centimeter (around 4 inch) distance.”

The next iPhone is rumored to have something similar.

Support for NFC in popular smart phones could unleash lots of interesting applications, many of which have already been explored in research prototypes in labs around the world. One interesting possibility is that this could be used to allow Android devices to share RDF queries and data with other devices.

Smart phones to absorb credit cards with RFID?

October 5th, 2010, by Tim Finin, posted in Apple, Mobile Computing, RFID

Fast Company has an article, Credit Cards Will Go Electronic, Then Disappear Into iPhone 5, predicting the merger of RFID-enabled credit cards and smart phones.

“Nokia plans to add antennas and RFID communications chips into its phones soon, and Apple has been patenting the heck out of the idea, but both companies were probably going to rely on an in-phone antenna loop. It seems increasingly certain Apple is going to bring RFID into common usage with the iPhone for 2011 (the iPhone 5) because there’s a new patent that shows just how far Apple has gone with design thinking for RFID. The patent shows how an RFID loop, powerful enough to act as both RFID tag or a tag-reader, can actually be built right into the complex layered circuitry of the iPhone (or iPod Touch) screen. We know Apple is fond of highly-polished design and integration, and this innovation is no exception. The screen has to be exposed by its very nature, which is good for RFID purposes — the wireless signal is unobstructed by other bulk in the smartphone, and it frees up Apple to do what it likes with the rest of the phone’s design.”

Maybe building RFID into smart phones will finally unleash the potential the technology offers for cool people-oriented applications, as opposed to boring inventory management tasks. However, I don’t like the idea of not being able to use my credit card because my phone has run out of power.

Taintdroid catches Android apps that leak private user data

September 30th, 2010, by Tim Finin, posted in Mobile Computing, Privacy, Security, Social

Ars Technica has an article on bad Android apps, Some Android apps caught covertly sending GPS data to advertisers.

“The results of a study conducted by researchers from Duke University, Penn State University, and Intel Labs have revealed that a significant number of popular Android applications transmit private user data to advertising networks without explicitly asking or informing the user. The researchers developed a piece of software called TaintDroid that uses dynamic taint analysis to detect and report when applications are sending potentially sensitive information to remote servers.

They used TaintDroid to test 30 popular free Android applications selected at random from the Android market and found that half were sending private information to advertising servers, including the user’s location and phone number. In some cases, they found that applications were relaying GPS coordinates to remote advertising network servers as frequently as every 30 seconds, even when not displaying advertisements. These findings raise concern about the extent to which mobile platforms can insulate users from unwanted invasions of privacy.”
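The dynamic taint analysis the study describes can be caricatured in a few lines of Python. This is a toy sketch of the general idea, not TaintDroid's implementation, and all names here are invented for illustration: values derived from a sensitive source carry taint labels, labels propagate through operations, and a sink check flags tainted data before it leaves the device.

```python
# Toy sketch of dynamic taint tracking (illustrative only, not TaintDroid).

class Tainted(str):
    """A string value carrying taint labels from sensitive sources."""
    def __new__(cls, value, labels):
        obj = super().__new__(cls, value)
        obj.labels = set(labels)
        return obj

def concat(a, b):
    """Propagate taint through a string operation."""
    labels = getattr(a, "labels", set()) | getattr(b, "labels", set())
    result = str(a) + str(b)
    return Tainted(result, labels) if labels else result

def network_send(data):
    """Sink check: block transmission of tainted data off the device."""
    labels = getattr(data, "labels", set())
    if labels:
        raise PermissionError(f"blocked: data tainted by {sorted(labels)}")
    return "sent"

gps = Tainted("39.25,-76.71", {"GPS"})   # sensitive source
msg = concat("loc=", gps)                # taint propagates through the copy
try:
    network_send(msg)
except PermissionError as e:
    print(e)
```

A real system does this at the VM level so every variable, file and message inherits labels automatically; the principle is the same.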

TaintDroid is an experimental system that “analyses how private information is obtained and released by applications ‘downloaded’ to consumer phones”. A paper on the system will be presented at the 2010 USENIX Symposium on Operating Systems Design and Implementation next month.

TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones, William Enck, Peter Gilbert, Byung-gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, and Anmol N. Sheth, OSDI, October 2010.

The project, Realtime Privacy Monitoring on Smartphones, has a good overview site with a FAQ and demo.

This is just one example of a rich and complex area full of trade-offs. We want our systems and devices to be smarter and to really understand us — our preferences, context, activities, interests, intentions, and pretty much everything short of our hopes and dreams. We then want them to use this knowledge to better serve us — selecting music, turning the ringer on and off, alerting us to relevant news, etc. Developing this technology is neither easy nor cheap and the developers have to profit from creating it. Extracting personal information that can be used or sold is one model — just as Google and others do to provide better ad placement on the Web.

Here’s a quote from the Ars Technica article that resonated with me.

“As Google says in its list of best practices that developers should adopt for data collection, providing users with easy access to a clear and unambiguous privacy policy is really important.”

We, and many others, are trying to prepare for the next step — when users can define their own privacy policies and these will be understood and enforced by their devices.

Smart phones recognize users' gait

September 16th, 2010, by Tim Finin, posted in Mobile Computing

Technology Review has a short article on new work on doing gait analysis with the accelerometers built into many smart phones, Smart Phones that Know Their Users by How They Walk. The results are from the following paper:

Mohammad O. Derawi, Claudia Nickel, Patrick Bours and Christoph Busch, Unobtrusive User-Authentication on Mobile Phones using Biometric Gait Recognition, The Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, 15-17 October 2010.

Abstract: The need for more security on mobile devices is increasing with new functionalities and features made available. To improve the device security we propose gait recognition as a protection mechanism. Unlike previous work on gait recognition, which was based on the use of video sources, floor sensors or dedicated high-grade accelerometers, this paper reports the performance when the data is collected with a commercially available mobile device containing low-grade accelerometers. To be more specific, the used mobile device is the Google G1 phone containing the AK8976A embedded accelerometer sensor. The mobile device was placed at the hip on each volunteer to collect gait data. Preprocessing, cycle detection and recognition-analysis were applied to the acceleration signal. The performance of the system was evaluated having 51 volunteers and resulted in an equal error rate (EER) of 20%.
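The equal error rate the abstract reports is the point where the false accept rate (impostors wrongly accepted) equals the false reject rate (genuine users wrongly rejected) as the decision threshold varies. A minimal sketch of how it is computed from match scores; the scores below are made up for illustration, not the paper's data.

```python
# Sketch: equal error rate (EER) from genuine and impostor match scores.
# Higher score = stronger claim that the walker is the enrolled user.

def eer(genuine, impostor):
    """Sweep thresholds; return the rate where FAR and FRR are closest."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuines rejected
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # same-user comparisons (invented)
impostor = [0.7, 0.5, 0.4, 0.3, 0.2]    # different-user comparisons (invented)
print(f"EER ~ {eer(genuine, impostor):.2f}")  # EER ~ 0.20
```

An EER of 20%, as in the paper, means that at the balanced operating point one in five impostors would pass and one in five genuine walks would be rejected.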

The potential application is that a phone could recognize that it may have been stolen if it is being carried by a person with a different gait. I guess it would then phone home with its location, not unlike the golden harp in some versions of Jack and the Beanstalk.

The accuracy would have to be improved to make this practical, of course, and it might not be a killer app, but it is a good example of how passive sensing by smart phones can acquire useful context information.

Google Open Spot Android app finds parking

July 9th, 2010, by Tim Finin, posted in Google, Mobile Computing, Semantic Web, Social media

Google’s Open Spot Android app lets people leaving parking spots share that information with others searching for parking nearby. Running the app shows you parking spots within 1.5 km. New parking spots are assumed to be gone after 20 minutes and are removed from the system.
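The two rules just described, a 1.5 km radius and a 20-minute expiry, are easy to sketch. This is a hypothetical reconstruction, not Google's code; the function and field names are invented.

```python
# Sketch of Open Spot's visibility rules: distance filter + freshness filter.
import math
import time

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_spots(spots, here, now, radius_km=1.5, max_age_s=20 * 60):
    """Keep only spots that are both fresh and within range."""
    lat, lon = here
    return [s for s in spots
            if now - s["reported_at"] <= max_age_s
            and haversine_km(lat, lon, s["lat"], s["lon"]) <= radius_km]

now = time.time()
spots = [
    {"lat": 39.2556, "lon": -76.7110, "reported_at": now - 60},       # fresh, near
    {"lat": 39.2557, "lon": -76.7111, "reported_at": now - 30 * 60},  # expired
    {"lat": 39.4000, "lon": -76.6000, "reported_at": now - 60},       # too far
]
print(len(visible_spots(spots, (39.2555, -76.7112), now)))  # prints 1
```

Expiring spots by age rather than waiting for a "spot taken" report is a sensible design when most users never confirm that a spot was filled.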

People who announce open spots gain karma points, while those who report false spots, known as griefers, are on notice:

“We’re watching for behavior that looks like a griefer spoofing parking spots. We have a couple of mechanisms available to make sure someone can’t leave a bunch of fake parking spots. If we see this happening we will take steps to fix it.”

This is a simple example of a context-aware mobile app that could further benefit from also knowing that you are driving, as opposed to riding, in your car and are likely to want a parking spot, as opposed to doing 70 mph on I-95 as it passes through Baltimore. Moreover, context could also tell the app that you are probably leaving a public parking spot and mark it automatically. Such a feature, however, would have to be smart enough to avoid being tagged by Google as a griefer and finding out what punishment Google has in store for you.

USCYBERCOM secret revealed

July 8th, 2010, by Tim Finin, posted in GENERAL, Mobile Computing, Security, Semantic Web

The secret message embedded in the USCYBERCOM logo is what the md5sum function returns when applied to the string that is USCYBERCOM’s official mission statement. Here’s a demonstration of this fact done on a Mac. On Linux, use the md5sum command instead of md5.

~> echo -n "USCYBERCOM plans, coordinates, integrates, \
synchronizes and conducts activities to: direct the \
operations and defense of specified Department of \
Defense information networks and; prepare to, and when \
directed, conduct full spectrum military cyberspace \
operations in order to enable actions in all domains, \
ensure US/Allied freedom of action in cyberspace and \
deny the same to our adversaries." | md5

md5sum is a standard Unix command that computes a 128-bit “fingerprint” of a string of any length. It is a well-designed hashing function with the property that it is very unlikely that any two non-identical strings encountered in practice will have the same md5sum value. Such functions have many uses in cryptography.

Thanks to Ian Soboroff for spotting the answer on Slashdot and forwarding it.

Someone familiar with md5 would recognize that the secret string has the same length and character mix as an md5 value: 32 hexadecimal characters. Each possible hex character (0123456789abcdef) represents four bits, so 32 of them is a way to represent 128 bits.
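The same check is easy to make with Python's standard hashlib module, which produces the same kind of 128-bit MD5 fingerprint as the md5/md5sum commands (the input string here is just a placeholder, not the mission statement).

```python
# MD5 hex digests are always 32 hex characters = 32 * 4 = 128 bits.
import hashlib

digest = hashlib.md5(b"hello world").hexdigest()
print(digest)           # a 32-character hex string
print(len(digest))      # 32
print(len(digest) * 4)  # 128
```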

We’ll leave it as an exercise for the reader to compute the 128-bit sequence that our secret code corresponds to.
