UMBC ebiquity

Archive for the 'Wearable Computing' Category

Microsoft HoloLens: Was it imagined in the past?

January 27th, 2015, by Prajit Kumar Das, posted in Microsoft, Pervasive Computing, Privacy, Technology, Technology Impact, Wearable Computing

In this post we discuss some of the User Interface (UI) advances we are observing at the moment. One such development was revealed at a recent Microsoft media event, where the company announced the Microsoft HoloLens, a computing platform that aims for a seamless connection between the digital and the physical world, quite similar to experiences depicted in certain movies of the past.

It is interesting to note how similar the design of the HoloLens device looks to devices we have seen before.

Even the vision of holographic computing and of users interacting with such interfaces isn’t new. The 2002 movie “The First $20 Million Is Always the Hardest” was possibly the first time we saw what such a futuristic technology might look like.

How did we get here? A brief discussion of UIs…

User interfaces have always been an important aspect of computers. In their early days, computers had a monochromatic (or at most duo-chromatic) screen. A user would type commands at a prompt and the computer would execute them. Since commands were entered as a single line or a series of lines, this interface was called the Command-Line Interface (CLI).

Command Line based UI

Such an interface was not particularly intuitive, as you had to know the list of commands that would accomplish a given task. That said, a certain group of individuals, i.e., geeks and some computer programmers like me, prefer such an interface owing to its clean and distraction-free nature. However, owing to the learning curve of CLIs, researchers at the Stanford Research Institute and the Xerox PARC research center invented a new user interface called the Graphical User Interface (GUI). There were a few variations of the GUI, for example the point-and-click type, also known as the WIMP (windows, icons, menus, pointer) UI, created at Xerox PARC and made popular by Apple through its Macintosh operating system

Apple’s Macintosh UI

and also adopted by Microsoft in its Windows operating systems.

Microsoft’s Windows UI

Some early versions even included a textual user interface, with programs whose menus could be navigated using a keyboard instead of a mouse.

Early textual menu based UI

Eventually, new avenues opened up for UI research, moving from textual interfaces to WIMP interfaces to the World Wide Web, where objects on the web became entities accessible through a Uniform Resource Identifier (URI). Such entities could also have semantics associated with them (as envisioned by the Semantic Web). With the advent of mobile smartphones, however, we saw a completely different class of user interfaces: touch-based interfaces and their more evolved cousins, multi-touch systems, which allowed gesture-based interactions.

Touch and gesture based UI

This was the first time in computing history that humans were able to directly interact with an object on their device with their hands instead of using an input device. The experience was immersive, and yet these objects had not entered the real world. We were on the precipice of a revolution in computing.

This revolution was the mainstream launch of wearable technology, virtual/augmented reality, and optical head-mounted display devices, with the creation of devices like the Oculus Rift, Google Glass, and EyeTap, among others. These devices allowed voice input and created a virtual or augmented reality world for their users. Microsoft, too, was working on gesture-based interactions with the Kinect device and on research in the Natural User Interface (NUI) field. A couple of interesting works from this revolution that are worth a look are listed below.

This talk by John Underkoffler demos a UI like the one we saw in the movie Minority Report. He talks about the spatial aspect of how humans interact with their world, and how computers might be able to help us better if we could interact with them in the same way.

Here Pranav Mistry, currently the Head of the Think Tank Team and Director of Research at Samsung Research America, speaks about SixthSense, a new paradigm in computing that allows interaction between the real world and the digital world. All these works were knocking on the door of a computer like the one in the 2002 movie mentioned earlier, a real-life holographic computer. Enter the Microsoft HoloLens!

What is Microsoft HoloLens?

Microsoft HoloLens

Microsoft HoloLens is an augmented reality computing platform. As per the review from Forbes.com, the device goes a step beyond current work by adding virtual holograms to the world around its user rather than placing the user in a completely virtual environment. The device has launched a new platform for software development, i.e., holographic apps, and has also created scope for hardware research and development, since it requires new components like the Holographic Processing Unit (HPU). Visualization, sharing of ideas, and interaction with the real world can now be done as envisioned in the TED talk by Pranav Mistry. A more natural way of interacting with digital content, as envisioned in the works above, is now a reality. The device tracks its user’s movements in an environment, detects what the person is looking at, and transforms the visual field by overlaying 3D objects on top of it.

What kind of applications can we expect to be developed for HoloLens?

When the touch UI became a reality, developers had to change the way they built software: direct object interactions, as shown above, had to be programmed into their applications. Apps for HoloLens will similarly need to handle interactions involving voice commands and gesture recognition (a minimal sketch of what such event handling might look like follows the list). The common ideas, and their corresponding research implications, that come to mind include:

  • Looking up a grocery list when you enter the grocery store (context-aware computing)

    HoloLens Environment overlaid with lists

  • Recording important events automatically (context aware computing)
  • Recognizing people in a party (social media and privacy)
  • Taking down notes, writing emails using voice commands (natural language understanding)
  • Searching for “stuff” around us (NLP, data analytics, semantic web, context-aware computing)
  • Playing 3D games (animation and graphics)

    HoloLens Environment overlaid with 3D Games

  • Making sure your battery doesn’t run out (systems, hardware)
  • Virtual work environments (systems) 

    Virtual Work Environments through HoloLens

  • Teaching virtual classrooms (systems)
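Since the HoloLens SDK had not been released at the time of writing, the following is a purely hypothetical sketch of the kind of event handling holographic apps will need: mapping recognized voice phrases and gestures to application callbacks. All names here (Event, InteractionDispatcher, the example commands) are our own inventions, not a real HoloLens API.

    # Hypothetical sketch: dispatch recognized voice and gesture events to
    # application callbacks. These names are illustrative, not a HoloLens API.
    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    @dataclass(frozen=True)
    class Event:
        kind: str     # "voice" or "gesture"
        payload: str  # e.g. a recognized phrase or a gesture name

    class InteractionDispatcher:
        def __init__(self) -> None:
            self._handlers: Dict[Tuple[str, str], Callable[[Event], None]] = {}

        def register(self, kind: str, payload: str,
                     handler: Callable[[Event], None]) -> None:
            # Map a (kind, payload) pair, e.g. ("voice", "take a picture").
            self._handlers[(kind, payload)] = handler

        def dispatch(self, event: Event) -> None:
            handler = self._handlers.get((event.kind, event.payload))
            if handler is not None:
                handler(event)
            else:
                print(f"unhandled {event.kind} event: {event.payload}")

    # Wire up one voice command and one gesture, then simulate recognition.
    dispatcher = InteractionDispatcher()
    dispatcher.register("voice", "show grocery list",
                        lambda e: print("overlaying grocery list"))
    dispatcher.register("gesture", "air tap",
                        lambda e: print("selecting hologram"))
    dispatcher.dispatch(Event("voice", "show grocery list"))
    dispatcher.dispatch(Event("gesture", "air tap"))

Whatever the real API turns out to be, the hard research problems sit behind the callbacks: robust speech and gesture recognition, and deciding which holograms to render in response.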

Why or how could it fail?

Are there any obvious pitfalls that we are not thinking about? We can rest assured that researchers are already looking at the ways this venture could fail, and for Microsoft’s own good we can be certain they have their own list of risks and are surely working on fixing any flaws. However, as researchers in the mobile field with a bit of experience with the Google Glass, we can try to list some of the possible pitfalls of an AR/VR device. The HoloLens, being a tetherless augmented/virtual reality (AVR) device, could suffer from some of these pitfalls too. The reader should understand that we are not claiming any of the following to be scientifically proven; these are merely empirical observations.

  • The first thing that worried us while using the Google Glass was that it would sometimes cause headaches after a couple of hours of use. We have not researched the device’s effects on other users, so this is an observation from personal experience. Still, one concern is the health impact of prolonged use of an AVR device.
  • The second thing we noticed with the Google Glass was how quickly the device heated up. We know from experience that computers get hot, for example when playing games or running complex computations. An AVR device used for playing games will most probably get hot too; the Google Glass certainly did after recording a video. Here we are concerned about heat dissipation and its health impact on the user.
  • The third observation we made was that the Google Glass showed significant sluggishness when it tried to accomplish computation-heavy tasks. Will the HoloLens be able to keep up with all the computation needed for, say, playing a 3D game?
  • The fourth concern is battery capacity. The HoloLens is advertised as a device with no wires, cords or tethers. Anyone who has ever used a smartphone knows the problem of the battery running out within a day or even half a day. Will the HoloLens be able to hold a charge for long, or will it require constant charging?
  • The fifth concern is privacy. The Google Glass has faced quite a few privacy concerns because it can readily take pictures using a simple voice command, or even a non-verbal command like a ‘wink’. We have worked on this issue as part of our research project FaceBlock. Will the HoloLens create similar concerns, since it too has front-facing cameras that capture the user’s environment in order to project an augmented virtual world?

The above lists of possible issues and probable application areas are not exhaustive in any way. There will be numerous other scenarios and ways to work with this new computing platform, and probably a multitude of issues with such a new and revolutionary platform. However, the hybrid of augmented and virtual reality is just taking its first small steps. With the invention of devices like the Microsoft HoloLens, Google Glass, Oculus Rift, EyeTap, etc., we can look forward to an exciting period in the future of computing for augmented virtual reality.

Rafiki: A Semantic and Collaborative Approach to Community Health-Care in Underserved Areas

September 19th, 2014, by Tim Finin, posted in Mobile Computing, OWL, RDF, Semantic Web, Wearable Computing

Rafiki

Primal Pappachan, Roberto Yus, Anupam Joshi and Tim Finin, Rafiki: A Semantic and Collaborative Approach to Community Health-Care in Underserved Areas, 10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, 22-25 October 2014, Miami.

Community Health Workers (CHWs) act as liaisons between health-care providers and patients in underserved or un-served areas. However, the lack of information sharing and training support impedes the effectiveness of CHWs and their ability to correctly diagnose patients. In this paper, we propose and describe a system for mobile and wearable computing devices called Rafiki which assists CHWs in decision making and facilitates collaboration among them. Rafiki can infer possible diseases and treatments by representing the diseases, their symptoms, and patient context in OWL ontologies and by reasoning over this model. The use of semantic representation of data makes it easier to share knowledge related to disease, symptom, diagnosis guidelines, and patient demography, between various personnel involved in health-care (e.g., CHWs, patients, health-care providers). We describe the Rafiki system with the help of a motivating community health-care scenario and present an Android prototype for smart phones and Google Glass.
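To make the representation idea concrete, here is a minimal sketch of encoding disease-symptom knowledge as semantic triples and querying it, using the Python rdflib library. The vocabulary (ex:Malaria, ex:hasSymptom, and so on) is invented for illustration and is not the actual Rafiki ontology, which is written in OWL and supports richer reasoning over patient context.

    # Minimal sketch (not Rafiki's actual ontology) of representing
    # disease-symptom knowledge as triples and querying it with SPARQL.
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/health#")
    g = Graph()

    # Assert shareable disease-symptom knowledge.
    g.add((EX.Malaria, RDF.type, EX.Disease))
    g.add((EX.Malaria, EX.hasSymptom, EX.Fever))
    g.add((EX.Malaria, EX.hasSymptom, EX.Chills))
    g.add((EX.Typhoid, RDF.type, EX.Disease))
    g.add((EX.Typhoid, EX.hasSymptom, EX.Fever))

    # Given an observed symptom, retrieve candidate diseases for the CHW.
    results = g.query("""
        PREFIX ex: <http://example.org/health#>
        SELECT ?disease WHERE { ?disease ex:hasSymptom ex:Fever . }
    """)
    for row in results:
        print(row.disease)

In the real system, reasoning over the OWL ontologies also brings patient context into the diagnosis, as the abstract describes.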

Do not be a Gl***hole, use Face-Block.me!

March 27th, 2014, by Prajit Kumar Das, posted in Ebiquity, Google, Mobile Computing, Policy, Semantic Web, Social, Wearable Computing

If you are a Google Glass user, you might have been greeted with concerned looks or raised eyebrows in public places. There has been a lot of chatter on the “interweb” about the loss of privacy that results from people taking your picture with Glass without notice. Google Glass has simplified photography, but, as happens with revolutionary technology, people are worried about its potential misuse.

FaceBlock helps protect the privacy of the people around you by allowing them to specify whether or not they want to be included in your pictures. This new application, developed through a collaboration between researchers from the Ebiquity Research Group at the University of Maryland, Baltimore County and the Distributed Information Systems (DIS) group at the University of Zaragoza (Spain), selectively obscures the faces of people in pictures taken by Google Glass.

Comfort at the cost of Privacy?

As the saying goes, “The best camera is the one that’s with you.” Google Glass fits this description, as it is always available and can take a picture with a simple voice command (“Okay Glass, take a picture”). This allows users to capture spontaneous life moments effortlessly. On the flip side, it raises significant privacy concerns, as pictures can be taken without one’s consent. If one does not use the device responsibly, one risks being labelled a “Glasshole.” Quite recently, a Google Glass user was assaulted by patrons who objected to her wearing the device inside a bar. The list of establishments that have banned Google Glass from their premises is growing day by day. The dos and don’ts for Glass users released by Google are a good first step, but they don’t solve the problem of privacy violation.

FaceBlock and Google Glass

Privacy-Aware pictures to the rescue

FaceBlock takes regular pictures taken by your smartphone or Google Glass as input and converts them into privacy-aware pictures. This output is generated using a combination of face detection and face recognition algorithms. Using FaceBlock, a user can take a picture of herself and specify her policy or rule regarding pictures taken by others (in this case, ‘obscure my face in pictures from strangers’). The application automatically generates a face identifier for this picture; the identifier is a mathematical representation of the image. To learn more about how FaceBlock works, watch the following video.

Using Bluetooth, FaceBlock can automatically detect nearby Glass users and share this policy with them. After receiving the face identifier from a nearby user, the following post-processing steps happen on Glass, as shown in the images.

Eigenface matching (unchecked, then checked) and the resulting blurred picture
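For the curious, here is a minimal sketch of the detect-and-obscure step using OpenCV. It is not FaceBlock’s actual code: the recognition step, which in FaceBlock compares detected faces against eigenface-style identifiers shared over Bluetooth, is reduced here to a placeholder that simply blurs every detected face.

    # Minimal sketch (not FaceBlock's actual code) of producing a
    # privacy-aware picture: detect faces, then blur the matching ones.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def matches_shared_identifier(face_region) -> bool:
        # Placeholder for FaceBlock's recognition step, which compares a
        # detected face against identifiers shared by nearby users.
        return True

    def make_privacy_aware(path_in: str, path_out: str) -> None:
        img = cv2.imread(path_in)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        for (x, y, w, h) in faces:
            if matches_shared_identifier(img[y:y+h, x:x+w]):
                # Obscure the matched face with a heavy Gaussian blur.
                img[y:y+h, x:x+w] = cv2.GaussianBlur(
                    img[y:y+h, x:x+w], (51, 51), 0)
        cv2.imwrite(path_out, img)

    make_privacy_aware("glass_photo.jpg", "glass_photo_private.jpg")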

What promises does it hold?

FaceBlock is a proof-of-concept implementation of a system that can create privacy-aware pictures using smart devices. The pervasiveness of privacy-aware pictures would be a step in the right direction toward balancing privacy needs with the comfort afforded by technology. Thus, we can get the best out of wearable technology without being oblivious to the privacy of those around us.

FaceBlock is part of the efforts of Ebiquity and DIS in building systems for preserving user privacy on mobile devices. For more details, visit http://face-block.me

How many #ifihadglass posts were there?

February 28th, 2013, by Tim Finin, posted in Gadgets, Mobile Computing, Wearable Computing

UMBC CSEE department members submitted a number of #ifihadglass posts hoping to get an invitation to pre-order a Google Glass device. Several came from the UMBC Ebiquity Lab, including this one, which builds on our work with context-aware mobile phones.

Reports are that as many as 8,000 of the submitted ideas will be invited to the first round of pre-orders. To get a rough idea of our odds, I tried using Google and Bing searches to estimate the number of submissions. A general search for pages with the #ifihadglass tag returned 249K hits on Google. Of these, 21K were from Twitter and fewer than 4K from Google+. I’m not sure which Twitter and Google+ posts get indexed or how long indexing takes, but I do know that our entry above did not show up in the results. Bing reported 171K results for a search on the hashtag, but our post was not among them. I tried the native search services on both Twitter and Google+, but these are oriented toward delivering a stream of new results, and neither gives an estimate of the total number of results. I suppose one could do this for Twitter using their search API, but even then I am not sure how accurately one could estimate the total number of matching tweets.
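One brute-force approach, sketched below on the assumption that one has an app-only bearer token for Twitter’s v1.1 search API (which pages backwards through results via a max_id parameter), is simply to page through all matching tweets and count them. Twitter’s search index only reaches back about a week, so this would still undercount.

    # Rough sketch: count recent tweets matching a hashtag by paging through
    # Twitter's v1.1 search API. Assumes a valid app-only bearer token; the
    # search index only covers roughly a week, so older posts are missed.
    import requests

    SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"
    BEARER_TOKEN = "..."  # obtain via Twitter's application-only auth flow

    def count_hashtag_tweets(tag: str) -> int:
        headers = {"Authorization": "Bearer " + BEARER_TOKEN}
        params = {"q": tag, "count": 100}
        total = 0
        while True:
            resp = requests.get(SEARCH_URL, headers=headers, params=params)
            resp.raise_for_status()
            statuses = resp.json().get("statuses", [])
            if not statuses:
                return total
            total += len(statuses)
            # Page backwards: request tweets older than the oldest seen.
            params["max_id"] = min(s["id"] for s in statuses) - 1

    print(count_hashtag_tweets("#ifihadglass"))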

Can anyone suggest how to easily estimate the number of #ifihadglass posts on Twitter and Google+?

Google Maps adds location information

December 18th, 2007, by Anupam Joshi, posted in Ebiquity, Google, Mobile Computing, Pervasive Computing, Wearable Computing

I recently bought a GPS (Garmin Mobile 10) that works with my WM5 Smartphone. In the process of trying to install the Garmin Mobile XT application (which was very problematic and a huge pain, but I digress ….), I ended up uninstalling Google Maps.

When I went to download and reinstall it, though, I noticed a new beta feature (My Location) that shows you where you are. It can either use a GPS or use cell tower information. Basically, it sees which cell tower your phone is registered with (and what signals it is seeing from other towers), and uses this to estimate where you are to within 1000 meters.
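Google hasn’t published the details, but a simple version of this idea can be sketched as a signal-strength-weighted centroid of the known tower locations. The tower coordinates and strengths below are made up for illustration.

    # Illustrative sketch (not Google's actual algorithm): estimate a coarse
    # position as a signal-strength-weighted centroid of nearby cell towers.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Tower:
        lat: float
        lon: float
        strength: float  # linear signal strength (higher = closer)

    def estimate_position(towers: List[Tower]) -> Tuple[float, float]:
        total = sum(t.strength for t in towers)
        lat = sum(t.lat * t.strength for t in towers) / total
        lon = sum(t.lon * t.strength for t in towers) / total
        return lat, lon

    # Hypothetical towers near UMBC; coordinates and strengths are made up.
    towers = [Tower(39.255, -76.711, 0.9),
              Tower(39.260, -76.705, 0.4),
              Tower(39.250, -76.720, 0.2)]
    print(estimate_position(towers))  # a coarse fix, within ~1000 meters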

This is interesting, because we did it the same way back when there were AMPS / CDPD networks and Palm IIIs and Vs with cellular modems. Our project was called Agents2Go, and we published a paper about it in the M-Commerce workshop at MobiCom in 2001. I remember that Muthu et al. from AT&T had a similar paper in MobiDE that year as well.

The problem at that time was that there was no publicly accessible database of all cell tower locations. Also, we heard informally from at least one telco that while doing this for research was OK, if anyone ever tried to make money from it they would want to be part of the loop. I guess Google has found a way to work with the various telcos? Or maybe, in the interim, cell tower ids and locations have become public knowledge?

Of course, Google Maps also works with GPS, except that it refuses to work with my Garmin. I’ve tried all the tricks that a Google search will reveal (mainly, setting the serial port used by Bluetooth to talk to the GPS), but to no avail :-(

xpod senses what music you’d like to hear

January 28th, 2006, by Tim Finin, posted in AI, Gadgets, Machine Learning, Mobile Computing, Wearable Computing

A group of UMBC students working with Professor Zary Segall has built a prototype music player that senses its user’s emotional state and level of activity and picks appropriate music. The prototype uses BodyMedia’s SenseWear, which collects continuous data from the wearer’s skin and wirelessly transmits the data stream to the xpod prototype. The physiological data includes energy expenditure (calories burned), duration of physical activity, number of steps taken, and sleep/wake states. A neural network is used to learn associations between these biometric parameters and the user’s music preferences, and the resulting model is then used to dynamically construct the xpod’s playlist. Read more about the xpod prototype in this recent paper:

XPod: a human activity and emotion aware mobile music player, Sandor Dornbush, Kevin Fisher, Kyle McKay, Alex Prikhodko and Zary Segall.
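As a rough illustration of the learning step (the feature encoding and network shape below are our guesses, not the actual XPod implementation), one could train a small neural network to map biometric readings to a preferred genre:

    # Illustrative sketch (not XPod's actual code): learn a mapping from
    # biometric features to a preferred music genre with a small neural net.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # One row per listening moment: [energy expenditure, steps per minute,
    # minutes of activity, awake (1) or asleep (0)] -- invented features.
    X = np.array([[2.1,  10,  5, 1],   # resting
                  [6.5, 160, 30, 1],   # running
                  [3.0,  40, 10, 1],   # walking
                  [1.8,   0,  0, 0]])  # drowsy
    y = ["ambient", "rock", "pop", "ambient"]  # user-labeled preferences

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=0)
    model.fit(X, y)

    # At playback time, predict a genre from the current SenseWear reading
    # and use it to pick the next playlist entry.
    print(model.predict([[5.9, 150, 20, 1]]))  # e.g. ["rock"]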

Gimme that RFID implant

January 16th, 2006, by Tim Finin, posted in GENERAL, Humor, Mobile Computing, RFID, Wearable Computing

Context aware ipod knows what to play

January 10th, 2006, by Tim Finin, posted in Gadgets, Mobile Computing, Wearable Computing

XPOD is a prototype portable music player that can sense a user’s context (what she is doing, her level of activity, her mood, etc.) and use that to refine its playlist. The device monitors several external variables from a streaming version of the BodyMedia SenseWear to model the user’s context and predicts the most appropriate music genre via a neural network.

Smart doorknob: an exciting RFID application

November 27th, 2005, by Harry Chen, posted in Computing Research, GENERAL, Pervasive Computing, RFID, Technology, Technology Impact, Wearable Computing

Here is what a smart doorknob can do.

“When you approach the door and you’re carrying groceries, it opens and lets you in. This doorknob is so smart, it can let the dog out but it won’t let six dogs come back in.

It will take FedEx packages and automatically sign for you when you’re not there. If you’re standing by the door, and a phone call comes in, the doorknob can tell you that ‘you’ve got a phone call from your son that I think you should take.’”

This smart doorknob is part of an MIT research project called “Internet of Things” (see IHT). An interesting aspect of the system is its extensive use of RFID tags. When it comes to RFID technology, some people are very worried, and others are very excited.

UN foresees an Internet of things

November 17th, 2005, by Tim Finin, posted in GENERAL, Mobile Computing, Pervasive Computing, RFID, Semantic Web, Wearable Computing

The Internet of Things is the seventh in the series of “ITU Internet Reports” published since 1997 by the UN’s International Telecommunication Union. The report will be available in mid-November and will include chapters on enabling technologies, the shaping of the market, emerging challenges, and implications for the developing world, as well as comprehensive statistical tables covering over 200 economies. Here’s an AP story about today’s announcement at the World Summit on the Information Society in Tunis.

Machines and objects to overtake humans on the Internet: ITU, AP, Nov 17

Machines will take over from humans as the biggest users of the Internet in a brave new world of electronic sensors, smart homes, and tags that track users’ movements and habits, the UN’s telecommunications agency predicted.

In a report entitled “Internet of Things”, the International Telecommunication Union (ITU) outlined the expected next stage in the technological revolution where humans, electronic devices, inanimate objects and databases are linked by a radically transformed Internet.

“It would seem that science fiction is slowly turning into science fact in an ‘Internet of Things’ based on ubiquitous network connectivity,” the report said Thursday, adding that objects would take on human characteristics thanks to technological innovation.

Computer souls and reincarnation

August 11th, 2005, by Tim Finin, posted in Mobile Computing, Wearable Computing

IBM researchers have prototyped SoulPad, which uses an auto-configuring operating system along with a hibernated virtual machine on a USB disk to let a user suspend their personal computing state on one PC and resume it on another. The USB disk essentially carries the soul of the user’s PC, while host PCs provide environments where that soul can come alive. For more information, see this paper, which received the best paper award at the 3rd International Conference on Mobile Systems, Applications, and Services:

Reincarnating PCs with Portable SoulPads, Ramon Caceres, Casey Carter, Chandra Narayanaswami, M. T. Raghunath, Proc of ACM/USENIX MobiSys 2005, pp. 65-78.

The ability to walk up to any computer, personalize it, and use it as one’s own has long been a goal of mobile computing research. We present SoulPad, a new approach based on carrying an auto-configuring operating system along with a suspended virtual machine on a small portable device. With this approach, the computer boots from the device and resumes the virtual machine, thus giving the user access to his personal environment, including previously running computations. SoulPad has minimal infrastructure requirements and is therefore applicable to a wide range of conditions, particularly in developing countries. We report our experience implementing SoulPad and using it on a variety of hardware configurations. We address challenges common to systems similar to SoulPad, and show that the SoulPad model has significant potential as a mobility solution.

Pacman comes to life virtually

June 28th, 2005, by Harry Chen, posted in GENERAL, Mobile Computing, Wearable Computing

Researchers in Singapore have developed a human version of the classic arcade game Pacman, superimposing the virtual 3D game world on to city streets and buildings.

Merging different technologies such as GPS, Bluetooth, virtual reality, Wi-Fi, infrared and sensing mechanisms, the augmented reality game allows gamers to play in a digitally enhanced, maze-like version of the real world.

It has been selected as one of the world’s top 100 high-impact and visionary technologies and will be showcased at Wired NextFest 2005 in Chicago, US, which runs from June 24 to 26.
