Archive for the 'Wearable Computing' Category
March 27th, 2014, by Prajit Kumar Das, posted in Ebiquity, Google, Mobile Computing, Policy, Semantic Web, Social, Wearable Computing
If you are a Google Glass user, you might have been greeted with concerned looks or raised eyebrows in public places. There has been a lot of chatter on the “interweb” about the loss of privacy that results from people taking your picture with Glass without notice. Google Glass has simplified photography, but, as often happens with revolutionary technology, people are worried about its potential misuse.
FaceBlock helps protect the privacy of the people around you by allowing them to specify whether or not to be included in your pictures. This new application, developed through a collaboration between researchers from the Ebiquity Research Group at the University of Maryland, Baltimore County and the Distributed Information Systems (DIS) group at the University of Zaragoza (Spain), selectively obscures the faces of people in pictures taken by Google Glass.
Comfort at the cost of Privacy?
As the saying goes, “The best camera is the one that’s with you.” Google Glass suits this description, as it is always available and can take a picture with a simple voice command (“Okay Glass, take a picture”). This allows users to capture spontaneous life moments effortlessly. On the flip side, it raises significant privacy concerns, as pictures can be taken without one’s consent. If one does not use this device responsibly, one risks being labelled a “Glasshole”. Quite recently, a Google Glass user was assaulted by bar patrons who objected to her wearing the device. The list of establishments that have banned Google Glass from their premises is growing day by day. The dos and don’ts for Glass users released by Google are a good first step, but they don’t solve the problem of privacy violation.
Privacy-Aware pictures to the rescue
FaceBlock takes regular pictures from your smartphone or Google Glass as input and converts them into privacy-aware pictures. This output is generated using a combination of face detection and face recognition algorithms. Using FaceBlock, a user can take a picture of herself and specify a policy or rule regarding pictures taken by others (in this case, “obscure my face in pictures from strangers”). The application automatically generates a face identifier for this picture; the identifier is a mathematical representation of the image. To learn more about how FaceBlock works, watch the following video.
Using Bluetooth, FaceBlock can automatically detect and share this policy with nearby Glass users. After receiving a face identifier from a nearby user, Glass performs the post-processing steps shown in the images.
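The final post-processing step — obscuring the faces of people who have opted out — can be sketched as follows. This is a minimal illustration, not FaceBlock’s actual code: it assumes the face detection and recognition stages have already run and produced bounding boxes paired with a per-face policy decision, and it simply pixelates the opted-out regions.

```python
import numpy as np

def apply_policies(image, faces, block=8):
    """Pixelate each detected face whose owner's policy says
    'obscure my face in pictures from strangers'.

    `faces` is a list of ((x, y, w, h), obscure) pairs; the bounding
    box and the boolean policy decision are assumed to come from the
    face detection / recognition stage."""
    out = image.copy()
    for (x, y, w, h), obscure in faces:
        if not obscure:
            continue
        region = out[y:y + h, x:x + w]
        # Downsample, then repeat each coarse pixel to pixelate the face.
        coarse = region[::block, ::block]
        out[y:y + h, x:x + w] = np.repeat(
            np.repeat(coarse, block, axis=0), block, axis=1)[:h, :w]
    return out
```

Pixelation (rather than cropping) keeps the picture’s composition intact while making the opted-out faces unrecognizable.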
What promises does it hold?
FaceBlock is a proof-of-concept implementation of a system that creates privacy-aware pictures using smart devices. Widespread adoption of privacy-aware pictures would be a step in the right direction toward balancing privacy needs with the comfort afforded by technology. Thus, we can get the best out of wearable technology without being oblivious to the privacy of those around us.
FaceBlock is part of the efforts of Ebiquity and DIS in building systems that preserve user privacy on mobile devices. For more details, visit http://face-block.me
February 28th, 2013, by Tim Finin, posted in Gadgets, Mobile Computing, Wearable Computing
UMBC CSEE department members submitted a number of #ifihadglass posts hoping to get an invitation to pre-order a Google Glass device. Several came from the UMBC Ebiquity Lab including this one that builds on our work with context-aware mobile phones.
Reports are that as many as 8,000 of the submitted ideas will be invited to the first round of pre-orders. To get a rough idea of our odds, I tried using Google and Bing searches to estimate the number of submissions. A general search for pages with the #ifihadglass tag returned 249K hits on Google. Of these, 21K were from Twitter and fewer than 4K from Google+. I’m not sure which of the Twitter and Google+ posts get indexed, or how long that takes, but I do know that our entry above did not show up in the results. Bing reported 171K results for a search on the hashtag, but our post was not among them. I tried the native search services on both Twitter and Google+, but these are oriented toward delivering a stream of new results, and neither gives an estimate of the total number of results. I suppose one could do this for Twitter using their custom search API, but even then I am not sure how accurately one could estimate the total number of matching tweets.
Can anyone suggest how to easily estimate the number of #ifihadglass posts on Twitter and Google+?
December 18th, 2007, by Anupam Joshi, posted in Ebiquity, Google, Mobile Computing, Pervasive Computing, Wearable Computing
I recently bought a GPS (Garmin Mobile 10) that works with my WM5 Smartphone. In the process of trying to install the Garmin Mobile XT application (which was very problematic and a huge pain, but I digress…), I ended up uninstalling Google Maps.
When I went to download and reinstall it, though, I noticed that they have a new beta feature (My Location) that shows you where you are. It can use either a GPS or cell tower information. Basically, it sees which cell tower your phone is signed up to (and what signals it is seeing from others), and uses this to estimate where you are to within 1000 meters.
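One simple way to turn cell tower observations into a position estimate — not necessarily what Google actually does — is a signal-strength-weighted centroid of the known tower locations: the stronger the signal, the closer that tower probably is. The tower database and cell IDs below are made up for illustration.

```python
# Hypothetical database mapping cell IDs to tower locations (lat, lon);
# in practice, a provider would maintain this mapping.
TOWERS = {
    "310-410-1234": (39.2556, -76.7110),
    "310-410-1240": (39.2601, -76.7032),
    "310-410-1251": (39.2493, -76.7155),
}

def estimate_location(observations):
    """Estimate the phone's position as the centroid of the towers it
    can hear, weighted by a linear signal-strength value per tower."""
    lat = lon = total = 0.0
    for cell_id, strength in observations.items():
        if cell_id not in TOWERS:
            continue  # tower not in our database; ignore it
        t_lat, t_lon = TOWERS[cell_id]
        lat += strength * t_lat
        lon += strength * t_lon
        total += strength
    if total == 0:
        raise ValueError("no known towers observed")
    return lat / total, lon / total
```

With only the serving tower visible, this degenerates to “you are near that tower,” which matches the roughly 1000-meter accuracy described above.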
This is interesting, because we did it the same way back when there were AMPS/CDPD networks and Palm IIIs and Vs with cellular modems. Our project was called Agents2Go, and we published a paper about this at the M-Commerce workshop at MobiCom in ’01. I remember that Muthu et al. from AT&T had a similar paper in MobiDE that year as well.
The problem at that time was that there was no publicly accessible database of all cell tower locations. Also, we heard informally from at least one telco that while doing this for research was OK, if anyone ever tried to make money from it they would want to be part of the loop. I guess Google has found a way to work with the various telcos? Or maybe in the interim cell tower IDs and locations have become public knowledge?
Of course, Google Maps also works with GPS, except that it refuses to work with my Garmin. I’ve tried all the tricks that a Google search will reveal (mainly, setting the serial port used by Bluetooth to talk to the GPS), but to no avail.
January 28th, 2006, by Tim Finin, posted in AI, Gadgets, Machine Learning, Mobile Computing, Wearable Computing
A group of UMBC students working with Professor Zary Segall have built a prototype music player that senses its user’s emotional state and level of activity and picks appropriate music. The prototype system uses BodyMedia’s SenseWear, which continuously collects data from the wearer’s skin and wirelessly transmits the data stream to the XPod prototype. The physiological data includes energy expenditure (calories burned), duration of physical activity, number of steps taken, and sleep/wake states. A neural network is used to learn associations between these biometric parameters and the user’s preferences for music, and the resulting model is then used to dynamically construct the XPod’s playlist. Read more about the XPod prototype in this recent paper:
XPod: a human activity and emotion aware mobile music player, Sandor Dornbush, Kevin Fisher, Kyle McKay, Alex Prikhodko and Zary Segall.
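To make the idea concrete, here is a toy sketch — not the XPod code — of a small neural network that maps a SenseWear-style biometric feature vector to a distribution over music genres. The feature ordering, genre labels, layer sizes, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vector, normalized to roughly [0, 1]:
# [energy expenditure, activity duration, step count, awake (0/1)]
GENRES = ["ambient", "rock", "electronic"]  # illustrative labels

# One hidden layer; sizes are arbitrary choices for the sketch.
W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, len(GENRES))); b2 = np.zeros(len(GENRES))

def predict(x):
    """Return a probability distribution over genres for features x."""
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max())          # stable softmax
    return e / e.sum()

def train_step(x, target_idx, lr=0.05):
    """One gradient-descent step on the cross-entropy loss for a single
    (features, preferred-genre) example; returns the loss before the step."""
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max()); p /= p.sum()
    dz = p.copy(); dz[target_idx] -= 1.0     # dL/dz = p - one_hot(target)
    dW2 = np.outer(h, dz)
    dh = W2 @ dz
    dzh = (1 - h ** 2) * dh                  # back through tanh
    dW1 = np.outer(x, dzh)
    W2 -= lr * dW2; b2 -= lr * dz
    W1 -= lr * dW1; b1 -= lr * dzh
    return -np.log(p[target_idx])
```

Repeated calls to `train_step` with the user’s observed (biometrics, chosen genre) pairs would gradually bias `predict` toward the genres the user actually picks in each physiological state — which is the association-learning loop the post describes.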
January 16th, 2006, by Tim Finin, posted in GENERAL, Humor, Mobile Computing, RFID, Wearable Computing
January 10th, 2006, by Tim Finin, posted in Gadgets, Mobile Computing, Wearable Computing
XPod is a prototype portable music player that can sense a user’s context — what she is doing, her level of activity, her mood, etc. — and use that information to refine its playlist. The device monitors several external variables from a streaming version of the BodyMedia SenseWear to model the user’s context and predict the most appropriate music genre via a neural network.
November 27th, 2005, by Harry Chen, posted in Computing Research, GENERAL, Pervasive Computing, RFID, Technology, Technology Impact, Wearable Computing
Here is what a smart doorknob can do.
“When you approach the door and you’re carrying groceries, it opens and lets you in. This doorknob is so smart, it can let the dog out but it won’t let six dogs come back in.
It will take FedEx packages and automatically sign for you when you’re not there. If you’re standing by the door and a phone call comes in, the doorknob can tell you, ‘you’ve got a phone call from your son that I think you should take.’”
This smart doorknob is part of an MIT research project called the “Internet of Things” (see IHT). An interesting thing about this system is that it relies on extensive use of RFID tags. When it comes to RFID technology, some people are very worried, and others are very excited.
November 17th, 2005, by Tim Finin, posted in GENERAL, Mobile Computing, Pervasive Computing, RFID, Semantic Web, Wearable Computing
The Internet of Things is the seventh in the series of “ITU Internet Reports” published since 1997 by the UN’s International Telecommunication Union. The report will be available in mid-November and will include chapters on enabling technologies, the shaping of the market, emerging challenges, and implications for the developing world, as well as comprehensive statistical tables covering over 200 economies. Here’s an AP story about today’s announcement at the World Summit on the Information Society in Tunis.
Machines and objects to overtake humans on the Internet: ITU, AP, Nov 17
Machines will take over from humans as the biggest users of the Internet in a brave new world of electronic sensors, smart homes, and tags that track users’ movements and habits, the UN’s telecommunications agency predicted.
In a report entitled “Internet of Things”, the International Telecommunication Union (ITU) outlined the expected next stage in the technological revolution where humans, electronic devices, inanimate objects and databases are linked by a radically transformed Internet.
“It would seem that science fiction is slowly turning into science fact in an ‘Internet of Things’ based on ubiquitous network connectivity,” the report said Thursday, saying objects would take on human characteristics thanks to technological innovation.
August 11th, 2005, by Tim Finin, posted in Mobile Computing, Wearable Computing
IBM researchers have prototyped SoulPad, which uses an auto-configuring operating system along with a hibernated virtual machine on a USB disk to let a user suspend their personal computing state on one PC and resume it on another. The USB disk essentially carries the soul of the user’s PC, while the host PCs provide environments where the soul can come alive. For more information, see this paper, which received the best paper award at the 3rd International Conference on Mobile Systems, Applications, and Services:
Reincarnating PCs with Portable SoulPads, Ramon Caceres, Casey Carter, Chandra Narayanaswami, M. T. Raghunath, Proc of ACM/USENIX MobiSys 2005, pp. 65-78.
The ability to walk up to any computer, personalize it, and use it as one’s own has long been a goal of mobile computing research. We present SoulPad, a new approach based on carrying an auto-configuring operating system along with a suspended virtual machine on a small portable device. With this approach, the computer boots from the device and resumes the virtual machine, thus giving the user access to his personal environment, including previously running computations. SoulPad has minimal infrastructure requirements and is therefore applicable to a wide range of conditions, particularly in developing countries. We report our experience implementing SoulPad and using it on a variety of hardware configurations. We address challenges common to systems similar to SoulPad, and show that the SoulPad model has significant potential as a mobility solution.
June 28th, 2005, by Harry Chen, posted in GENERAL, Mobile Computing, Wearable Computing
Researchers in Singapore have developed a human version of the classic arcade game Pacman, superimposing the virtual 3D game world on to city streets and buildings.
Merging different technologies such as GPS, Bluetooth, virtual reality, wi-fi, infrared and sensing mechanisms, the augmented reality game allows gamers to play in a digitally-enhanced maze-like version of the real world.
It has been selected as one of the world’s top 100 high-impact and visionary technologies and will be showcased at Wired NextFest 2005 in Chicago, US, which runs from June 24 to 26.
March 21st, 2005, by Anand, posted in GENERAL, Mobile Computing, Pervasive Computing, Wearable Computing
IBM Zurich comes out with miniature data storage with a density of 1 terabit per square inch –
“Given the rapidly increasing data volumes that are downloaded onto mobile devices such as cell phones and PDAs, there is a growing demand for suitable storage media with more and more capacity. At CeBIT, IBM for the first time shows the prototype of the MEMS*- assembly of a nanomechanical storage system known internally as the “millipede” project. Using revolutionary nanotechnology, scientists at the IBM Zurich Research Laboratory, Switzerland, have made it to the millionths of a millimeter range, achieving data storage densities of more than one terabit (1000 gigabit) per square inch, equivalent to storing the content of 25 DVDs on an area the size of a postage stamp.”
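The “25 DVDs on a postage stamp” figure checks out with a bit of arithmetic, assuming a roughly one-square-inch stamp and 4.7 GB single-layer DVDs:

```python
# Sanity-check the storage claim: 1 terabit per square inch vs.
# "the content of 25 DVDs on an area the size of a postage stamp".
terabit_in_bytes = 1e12 / 8          # 1 Tbit = 125 GB
dvd_bytes = 4.7e9                    # single-layer DVD capacity
dvds_per_square_inch = terabit_in_bytes / dvd_bytes
print(round(dvds_per_square_inch))   # about 27, in line with "25 DVDs"
```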
January 26th, 2005, by Anubhav, posted in Pervasive Computing, Security, Wearable Computing
This is another scary technology story…
Lexus cars may be vulnerable to viruses that infect them via mobile phones. Landcruiser 100 models LX470 and LS430 have been discovered with infected operating systems, with the infection reportedly transferring within a range of 15 feet.