UMBC ebiquity

Archive for the 'Pervasive Computing' Category

Microsoft HoloLens: Was it imagined in the past?

January 27th, 2015, by Prajit Kumar Das, posted in Microsoft, Pervasive Computing, Privacy, Technology, Technology Impact, Wearable Computing

In this post we discuss some recent advances in User Interface (UI) technology. One such development was revealed at a recent Microsoft media event announcing the Microsoft HoloLens, a computing platform that aims to seamlessly connect the digital and the physical worlds, much like the experiences depicted in certain movies of the past.

It is interesting to note that the design of the HoloLens device looks so similar to something we have seen before.

Even the vision of holographic computing and users interacting with such interfaces isn’t a new one. The 2002 movie “The First $20 Million Is Always the Hardest” was possibly the first time we saw what such a futuristic technology might look like.

How did we get here? A brief discussion of UIs…

User interfaces have always been an important aspect of computing. In their early days, computers had a monochromatic screen (or at most a duo-chromatic one). A user would type commands at a prompt and the computer would execute them. Since commands were entered as a single line or a series of lines, this interface was called the Command-Line Interface (CLI).

Command Line based UI

Such an interface was not particularly intuitive, as you had to know the commands that would accomplish a given task. Still, a certain group of individuals, i.e. geeks and some computer programmers like me, prefer such an interface owing to its clean and distraction-free nature. However, owing to the learning curve of CLIs, researchers at the Stanford Research Institute and Xerox PARC invented a new kind of user interface, the Graphical User Interface (GUI). There were a few variations of the GUI, for example the point-and-click type, also known as the WIMP (windows, icons, menus, pointer) UI, created at Xerox PARC and made popular by Apple through its Macintosh operating systems

Apple’s Macintosh UI

And also adopted by Microsoft in its Windows operating systems

Microsoft’s Windows UI

Some early systems even included a textual user interface, with programs whose menus could be navigated using a keyboard instead of a mouse.

Early textual menu based UI

Eventually, new avenues opened up for UI research, moving from textual interfaces to WIMP interfaces to the World Wide Web, where objects became entities accessible through a Uniform Resource Identifier (URI). Such entities could also have semantics associated with them (as envisioned by the Semantic Web). With the advent of mobile smartphones, however, we saw a completely different class of user interfaces: touch-based interfaces and their more evolved cousins, multi-touch systems, which allowed gesture-based interactions.

Touch and gesture based UI

This was the first time in computing history that humans were able to interact directly with an object on their device with their hands instead of using an input device. The experience was immersive, yet these objects had not entered the real world. We were on the precipice of a revolution in computing.

This revolution was the mainstream launch of wearable technology, virtual/augmented reality, and optical head-mounted display devices, with the creation of devices like the Oculus Rift, Google Glass and EyeTap among others. These devices allowed voice input and created a virtual or an augmented reality world for their users. Microsoft too was working on gesture-based interactions with the Kinect device and on research in the Natural User Interface (NUI) field. A couple of interesting works from this revolution worth a look are listed below.

This talk by John Underkoffler demos a UI like the one we saw in the movie Minority Report. He talks about the spatial way in which humans interact with their world and how computers might serve us better if we could interact with them in the same way.

Here Pranav Mistry, currently Head of the Think Tank Team and Director of Research at Samsung Research America, speaks about SixthSense, a new paradigm in computing that allows interaction between the real world and the digital world. All of these works were knocking on the door of the computer we saw in the 2002 movie mentioned earlier: a real-life holographic computer. Enter Microsoft HoloLens!

What is Microsoft HoloLens?

Microsoft HoloLens

Microsoft HoloLens is an augmented reality computing platform. Reviews suggest the device has taken a step beyond current work by adding virtual holograms to the world around its user, rather than putting the user in a completely virtual environment. The device launches a new software development platform, holographic apps, and also creates scope for hardware research and development, since it requires new components like the Holographic Processing Unit (HPU). Visualization, the sharing of ideas, and interaction with the real world can now be done as envisioned in the TED talk by Pranav Mistry. A more natural way of interacting with digital content, as envisioned in the works above, is now a reality. The device tracks its user’s movements in an environment, detects what the person is looking at, and transforms the visual field by overlaying 3D objects on top of it.

What kind of applications can we expect to be developed for HoloLens?

When the touch UI became a reality, developers had to change the way they built software: direct object interactions, as shown above, had to be programmed into their applications. Apps for HoloLens will similarly need to handle interactions involving voice commands and gesture recognition (a minimal sketch of such a handler follows the list below). The common ideas, and their corresponding research implications, that come to mind include:

  • Looking up a grocery list when you enter the grocery store (context aware)

    HoloLens Environment overlaid with lists

  • Recording important events automatically (context aware computing)
  • Recognizing people in a party (social media and privacy)
  • Taking down notes, writing emails using voice commands (natural language understanding)
  • Searching for “stuff” around us (nlp, data analytics, semantic web, context aware computing)
  • Playing 3D games (animation and graphics)

    HoloLens Environment overlaid with 3D Games

  • Making sure your battery doesn’t run out (systems, hardware)
  • Virtual work environments (systems) 

    Virtual Work Environments through HoloLens

  • Teaching virtual classrooms (systems)
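As a rough illustration of what handling such multi-modal input might look like, here is a minimal Python sketch of a dispatcher that routes voice commands and gestures to application actions. The event names, handler functions and the dispatch helper are all assumptions made for illustration; they are not part of the actual HoloLens development APIs.

    # Hypothetical sketch: routing multi-modal input (voice + gesture) to app actions.
    # None of these names come from the real HoloLens SDK; they are illustrative only.

    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    @dataclass
    class InputEvent:
        modality: str   # "voice" or "gesture"
        name: str       # e.g. "show grocery list", "air_tap"
        context: dict   # e.g. {"location": "grocery_store", "gaze_target": "shelf"}

    # Registry mapping (modality, name) -> handler function
    handlers: Dict[Tuple[str, str], Callable[[InputEvent], None]] = {}

    def on(modality: str, name: str):
        """Decorator that registers a handler for a given input event."""
        def register(fn):
            handlers[(modality, name)] = fn
            return fn
        return register

    @on("voice", "show grocery list")
    def show_grocery_list(event: InputEvent):
        # Context-aware: only overlay the list when the user is in a grocery store.
        if event.context.get("location") == "grocery_store":
            print("Overlaying grocery list hologram")

    @on("gesture", "air_tap")
    def select_gazed_object(event: InputEvent):
        print(f"Selecting {event.context.get('gaze_target')}")

    def dispatch(event: InputEvent):
        handler = handlers.get((event.modality, event.name))
        if handler:
            handler(event)

    # Events as they might arrive from speech and gesture recognizers.
    dispatch(InputEvent("voice", "show grocery list", {"location": "grocery_store"}))
    dispatch(InputEvent("gesture", "air_tap", {"gaze_target": "cereal box"}))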

Why or how could it fail?

Are there any obvious pitfalls that we are not thinking about? We can rest assured that researchers are already looking at ways this venture could fail, and for Microsoft’s own good we can be certain they have thought through how this might go and are working to fix any flaws. However, as researchers in the mobile field with a bit of experience with Google Glass, we can try to list some possible pitfalls of an AR/VR device. The HoloLens, being a tetherless Augmented Virtual Reality (AVR) device, could suffer from some of these pitfalls too. The reader should understand that we are not claiming any of the following to be scientifically established; these are merely empirical observations.

  • The first thing that worried us while using Google Glass was that it would sometimes cause headaches after a couple of hours of use. We have not studied how the device affects other people, so this is an observation from personal experience. One concern, therefore, is the health impact of prolonged use of an AVR device on a human being.
  • The second thing we noticed with Google Glass was how quickly the device heated up. We know from experience that computers get hot, for example when playing games or running complex computations. An AVR device used for playing games will most probably get hot too; Google Glass certainly did after recording a video. Here the concern is heat dissipation and its health impact on the user.
  • The third observation was that Google Glass showed significant sluggishness on computation-heavy tasks. Will the HoloLens be able to keep up with all the computation needed for, say, playing a 3D game?
  • The fourth concern is battery capacity. The HoloLens is advertised as a device with no wires, cords or tethers. Anyone who has used a smartphone knows how a battery can run out within a day or even half a day. Will the HoloLens hold a charge for long, or will it require constant charging?
  • The fifth concern is privacy. Google Glass has faced quite a few privacy complaints because it can readily take pictures via a simple voice command or even a non-verbal command like a ‘wink’. We have worked on this issue as part of our research project FaceBlock (a rough sketch of the idea appears after this list). Will the HoloLens raise similar concerns, since it too has front-facing cameras that capture the user’s environment while projecting an augmented virtual world to the user?
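To make the FaceBlock idea mentioned above concrete, here is a minimal Python sketch of the kind of policy check involved: given the people detected in a captured frame and their stated sharing preferences, the faces of those who opted out are obscured before the image is used. The Person record, the policy field and the blur helper are hypothetical names for illustration, not the actual FaceBlock implementation.

    # Hypothetical sketch of a FaceBlock-style privacy policy check (illustrative names only).

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Person:
        name: str
        allows_capture: bool   # the person's stated photo-sharing policy
        face_region: tuple     # (x, y, width, height) in the captured frame

    def blur(frame, region):
        """Placeholder for an image-processing step that obscures one face region."""
        x, y, w, h = region
        print(f"Blurring a {w}x{h} region at ({x}, {y})")
        return frame

    def apply_policies(frame, people: List[Person]):
        """Obscure the faces of everyone whose policy forbids being captured."""
        for person in people:
            if not person.allows_capture:
                frame = blur(frame, person.face_region)
        return frame

    # Example: two bystanders detected in a head-mounted camera frame.
    bystanders = [
        Person("Alice", allows_capture=True, face_region=(40, 60, 80, 80)),
        Person("Bob", allows_capture=False, face_region=(200, 50, 85, 85)),
    ]
    apply_policies(frame=None, people=bystanders)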

The above lists of possible issues and probable application areas are not exhaustive in any way. There will be numerous other scenarios and ways to work with this new computing platform, and probably a multitude of issues with such a new and revolutionary platform. However, the hybrid of augmented and virtual reality is only just taking its first small steps. With the invention of devices like the Microsoft HoloLens, Google Glass, Oculus Rift and EyeTap, we can look forward to an exciting period in the future of computing for Augmented Virtual Reality.

AAAI-11 Workshop on Activity Context Representation: Techniques and Languages

March 14th, 2011, by Tim Finin, posted in Agents, AI, KR, Mobile Computing, Pervasive Computing, Semantic Web

Mobile devices can provide better services if they can model, recognize and adapt to their users' context.

Pervasive, context-aware computing technologies can significantly enhance and improve the coming generation of devices and applications for consumer electronics as well as devices for work places, schools and hospitals. Context-aware cognitive support requires activity and context information to be captured, reasoned with and shared across devices — efficiently, securely, adhering to privacy policies, and with multidevice interoperability.

The AAAI-11 conference will host a two-day workshop on Activity Context Representation: Techniques and Languages, focused on techniques and systems that allow mobile devices to model and recognize the activities and context of people and groups and then exploit those models to provide better services. The workshop will be held on August 7th and 8th in San Francisco as part of AAAI-11, the Twenty-Fifth Conference on Artificial Intelligence. Submissions of research papers and position statements are due by 22 April 2011.

The workshop intends to lay the groundwork for techniques to represent context within activity models, using a synthesis of HCI/CSCW and AI approaches, to reduce demands on people, such as the cognitive load inherent in activity/context switching, and to enhance human and device performance. It will explore activity and context modeling issues of capture, representation, standardization and interoperability for creating context-aware and activity-based assistive cognition tools, with topics including, but not limited to, the following:

  • Activity modeling, representation, detection
  • Context representation within activities
  • Semantic activity reasoning, search
  • Security and privacy
  • Information integration from multiple sources, ontologies
  • Context capture

There are three intended end results of the workshop: (1) Develop two to three key themes for research with specific opportunities for collaborative work. (2) Create a core research group forming an international academic and industrial consortium to significantly augment existing standards, drafts and proposals and to create fresh initiatives enabling the capture, transfer and recall of activity context across the multiple devices and platforms that people use individually and collectively. (3) Review and revise an initial draft of the structure of an activity context exchange language (ACEL), including identification of use cases, domain-specific instantiations needed, and drafts of initial reasoning schemes and algorithms.
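As a purely illustrative example of the kind of information such a representation might capture (this is not a draft of ACEL, whose design is precisely what the workshop aims to produce), an activity-context record could bundle an activity, the people and devices involved, contextual attributes and a simple sharing policy. All field names below are assumptions.

    # Hypothetical activity-context record; field names are illustrative assumptions,
    # not part of any ACEL draft or existing standard.

    activity_context = {
        "activity": "weekly-project-meeting",
        "participants": ["alice@example.org", "bob@example.org"],
        "devices": ["alice-phone", "meeting-room-display"],
        "context": {
            "location": "ITE building, UMBC",
            "time": "2011-08-07T14:00:00-04:00",
            "status": "in-progress",
        },
        "policies": {
            "share_with": ["project-members"],   # privacy constraint on cross-device sharing
        },
    }

    def can_share(record, requester_group):
        """Check a simple sharing policy before exchanging context across devices."""
        return requester_group in record["policies"]["share_with"]

    print(can_share(activity_context, "project-members"))  # True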

For more information, see the workshop call for papers.

Android to support near field communication

November 15th, 2010, by Tim Finin, posted in Google, Mobile Computing, Pervasive Computing, RFID

As TechCrunch and others report, Google’s Eric Schmidt announced that the next version of Android (Gingerbread 2.3) will support near field communication. What?

Wikipedia explains that NFC refers to RFID and RFID-like technology commonly used for contactless smart cards, mobile ticketing, and mobile payment systems.

“Near Field Communication, or NFC, is a short-range, high-frequency wireless communication technology which enables the exchange of data between devices over about a 10 centimeter (around 4 inch) distance.”

The next iPhone is rumored to have something similar.

Support for NFC in popular smartphones could unleash lots of interesting applications, many of which have already been explored in research prototypes in labs around the world. One interesting possibility is that this could be used to allow Android devices to share RDF queries and data with other devices.
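As a rough sketch of that possibility, the snippet below packs a small SPARQL query into a MIME-typed payload of the sort that could be carried in an NFC NDEF record. The nfc_send helper is a made-up stand-in for whatever NFC push API the platform ends up exposing, not a real Android call.

    # Hypothetical sketch: packaging a SPARQL query as an NFC payload.
    # nfc_send() is a made-up stand-in for a platform NFC push API.

    def nfc_send(mime_type: str, payload: bytes):
        print(f"Would push {len(payload)} bytes as {mime_type} over NFC")

    query = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10
    """

    # A registered MIME type lets the receiving device route the payload to an RDF-aware app.
    nfc_send("application/sparql-query", query.encode("utf-8"))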

Ratsimor PhD: Bartering for goods and services in pervasive environments

January 5th, 2009, by Tim Finin, posted in Agents, Mobile Computing, Pervasive Computing

We’ve been working to get the dissertations of our recent PhD graduates online. The latest is Olga Ratsimor’s 2007 dissertation on bartering for goods and services in mobile and pervasive environments. Here is the citation and abstract; you can click through on the title to get a PDF copy of the dissertation.

Olga Vladi Ratsimor, Opportunistic Bartering of Digital Goods and Services in Pervasive Environments, Ph.D. Dissertation in Computer Science, University of Maryland, Baltimore County, August 2007.

The vision of mobile personal devices querying peers in their environment for information, such as local restaurant recommendations, directions to the closest gas station, or traffic and weather updates, has long been a goal of the pervasive research community. However, considering the diversity and the personal nature of devices participating in pervasive environments, it is not feasible to assume that these interactions and collaborations will take place without economically driven motivating incentives.

This dissertation presents a novel bartering communication model that provides an underlying framework for incentives for collaborations in mobile pervasive environments by supporting opportunistic serendipitous peer-to-peer bartering for digital goods such as ring tones, MP3’s and podcasts.

To demonstrate viability and advantages of this innovative bartering approach, we compare and contrast the performances of two conventional, frequently employed, peer-to-peer interaction approaches namely Altruists and FreeRiders against two collaborative strategies that employ the Double Coincidence of Wants paradigm from the domain of barter exchanges. In particular, we present our communication framework that represents these collaborative strategies through a set of interaction policies that reflect these strategies. Furthermore, we present a set of results from our in-depth simulation studies that compare these strategies.

We examine the operation of the nodes employing our framework and executing these four distinct strategies and specifically, we compare the performances of the nodes executing these strategies in homogeneous and heterogeneous networks of mobile devices. We also examine the effects of adding InfoStations to these networks. For each of the strategies, we observe levels of gains and losses that nodes experience as result of collaborative digital good exchanges. We also evaluate communication overhead that nodes incur while looking for possible collaborative exchange. Furthermore, this dissertation offers an in-depth study of the swarm-like inter-strategy dynamics in heterogeneous networks populated with diverse nodes displaying varying levels of collaborative interaction attitudes. Further, the bartering framework is extended by incorporating value-sensitive bartering models that incorporate digital goods and content valuations into the bartering exchange process. In addition, the bartering model is extended by integration of socially influenced collaborative interaction that exploit role based social relationships between mobile peers that populate dynamic mobile environments.

Taken as a whole, the novel research work presented in this dissertation offers the first comprehensive effort that employs and models opportunistic bartering-based collaborative methodology in the context of serendipitous encounters in dynamic mobile peer-to-peer pervasive environments where mobile entities negotiate and exchange digital goods and content.
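As a small illustration of the double coincidence of wants idea that the dissertation builds on (this sketch is ours, written in Python for illustration, and is not code from the dissertation), two encountering peers can barter only when each holds a digital good the other wants:

    # Illustrative sketch (not from the dissertation): a double coincidence of wants
    # check between two encountering peers with inventories and wish lists.

    def find_trade(peer_a, peer_b):
        """Return a (give, receive) pair for peer_a if each peer wants something the other has."""
        a_gives = set(peer_a["has"]) & set(peer_b["wants"])
        b_gives = set(peer_b["has"]) & set(peer_a["wants"])
        if a_gives and b_gives:
            return next(iter(a_gives)), next(iter(b_gives))
        return None   # no mutually beneficial exchange on this encounter

    alice = {"has": {"ringtone-1", "podcast-7"}, "wants": {"mp3-42"}}
    bob = {"has": {"mp3-42"}, "wants": {"podcast-7"}}

    print(find_trade(alice, bob))  # ('podcast-7', 'mp3-42')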

New US RFID pass card raises privacy and security concerns

January 1st, 2008, by Tim Finin, posted in GENERAL, Pervasive Computing, Privacy, RFID, Security

Today’s Washington Post has a story, Electronic Passports Raise Privacy Issues, on the new passport card that’s part of the DOS/DHS Western Hemisphere Travel Initiative. The program is controversial since the cards use “vicinity read” radio frequency identification (RFID) technology that can be read from a distance of 20 or even 40 feet. This is in contrast to the ‘proximity read’ RFID tags in new US passports that require that the reader be within inches. The cards will be available to US citizens to speed their processing as they cross the borders in North America.

“The goal of the passport card, an alternative to the traditional passport, is to reduce the wait at land and sea border checkpoints by using an electronic device that can simultaneously read multiple cards’ radio frequency identification (RFID) signals from a distance, checking travelers against terrorist and criminal watchlists while they wait. “As people are approaching a port of inspection, they can show the card to the reader, and by the time they get to the inspector, all the information will have been verified and they can be waved on through,” said Ann Barrett, deputy assistant secretary of state for passport services, commenting on the final rule on passport cards published yesterday in the Federal Register.” src

As described in the ruling published in the Federal Register, the Government feels that privacy concerns have been addressed.

“The government said that to protect the data against copying or theft, the chip will contain a unique identifying number linked to information in a secure government database but not to names, Social Security numbers or other personal information. It will also come with a protective sleeve to guard against hackers trying to skim data wirelessly, Barrett said.” src

Of course, if you carry the card in your purse or wallet, your movements can still be tracked by the unique ID on the card. There are also security concerns since the tag’s ID may be cloned.

“Randy Vanderhoof, executive director of the Smart Card Alliance, represents technology firms that make another kind of RFID chip, one that can only be read up close, and he is critical of the passport card’s technology. It offers no way to check whether the card is valid or a duplicate, he said, so a hacker could alter the number on the chip using the same techniques used in cloning. “Because there’s no security in the numbering system, a person who obtains a passport card and is later placed on a watchlist could easily alter the number on the passport card to someone else’s who’s not on the watchlist,” Vanderhoof said.” src

Google Maps adds location Information

December 18th, 2007, by Anupam Joshi, posted in Ebiquity, Google, Mobile Computing, Pervasive Computing, Wearable Computing

I recently bought a GPS (Garmin Mobile 10) that works with my WM5 Smartphone. In the process of trying to install the Garmin Mobile XT application (which was very problematic and a huge pain, but I digress ….), I ended up uninstalling Google Maps.

When I went to download and reinstall it though, I noticed that they have a new beta feature (My Location) that shows you where you are. It can use either a GPS or cell tower information. Basically, it sees which cell tower your phone is signed up to (and what signals it is seeing from others) and uses this to estimate where you are to within about 1,000 meters.
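As a rough sketch of the kind of estimate involved (our illustration, not Google's actual method), one simple approach is a signal-strength-weighted centroid of the known locations of the towers the phone can hear:

    # Illustrative sketch: estimating a coarse position from visible cell towers
    # using a signal-strength-weighted centroid (not Google's actual algorithm).

    def estimate_position(towers):
        """towers: list of (latitude, longitude, signal_strength) with strength > 0."""
        total = sum(strength for _, _, strength in towers)
        lat = sum(t_lat * s for t_lat, _, s in towers) / total
        lon = sum(t_lon * s for _, t_lon, s in towers) / total
        return lat, lon

    visible = [
        (39.2551, -76.7110, 0.8),   # tower the phone is registered to (strongest signal)
        (39.2600, -76.7200, 0.3),
        (39.2490, -76.7050, 0.2),
    ]
    print(estimate_position(visible))  # a point good to within roughly a kilometer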

This is interesting, because we did it the same way back when there used to be AMPS/CDPD and Palm IIIs and Vs with cellular modems. Our project was called Agents2Go, and we published a paper about it in the MCommerce workshop at MobiCom in '01. I remember that Muthu et al. from AT&T had a similar paper in MobiDE that year as well.

The problem at that time was that there was no publicly accessible database of all cell tower locations. Also, we heard informally from at least one telco that while doing this for research was OK, if anyone ever tried to make money from it they would want to be part of the loop. I guess Google has found a way to work with the various telcos? Or maybe in the interim cell tower IDs and locations have become public knowledge?

Of course Google Maps also works with GPS, except that it refuses to work with my Garmin. I’ve tried all the tricks that a search on Google will reveal (mainly, setting the serial port used by Bluetooth to talk to the GPS), but to no avail :-(

FON to provide a P2P wifi sharing network

February 8th, 2006, by Tim Finin, posted in Mobile Computing, Pervasive Computing

FON (Wikipedia article) is “a global community of people who share WiFi.” The idea is intriguing and has potential, so much so that the Madrid-based startup behind it just raised $22M from investors including Google, Skype and eBay. Here’s how it is supposed to work.

“In order to become a Fonero, you go to FON, to download software that you install in your router, you place your antenna by a window and you share bandwidth with other Foneros from anywhere in the world. You can also buy the FON Ready router from our web site, plug and play. FON creates a free WiFi roaming environment for those who contribute WiFi signals, namely those who have already signed up with a local ISP and downloaded our software into their WiFi routers.”

FON currently provides software for the Linksys WRT54G/GL/GS routers. Since launching three months ago, they have added 3,000 Foneros to the network, but US coverage is still quite sparse (and nothing in the Baltimore DC area!).

Like all VC-funded startups, there has to be a business plan, so what is it? If you are not a Fonero, you pay to use a hotspot, probably with some kind of prepaid scheme like Skype’s. Foneros come in two varieties: Linuses, who benefit by getting free access via any FON node, and Bills, who don’t get free access but do get half of the payment from the users who go through their routers.

It remains to be seen how ISPs will react to this if it catches on. Most ISPs prohibit bandwidth sharing in their service agreements. Speakeasy is the only ISP listed as welcoming FON.

FIPA’s P2P Nomadic Agent standards

February 5th, 2006, by Tim Finin, posted in Agents, AI, Mobile Computing, Pervasive Computing

FIPA is an IEEE Computer Society standards organization that promotes agent-based technology and the interoperability of its standards with other technologies. Jim Odell reports that FIPA’s P2P Nomadic Agent Working Group has released a draft of its specification. The group describes its focus as:

“The objective is to define a specification for P2P Nomadic Agents, capable of running on small or embedded devices, and to support distributed implementation of applications for consumer devices, cellular communications and robots, etc. over a pure P2P network. This specification will leverage presence and search mechanisms of underlying P2P infrastructures such as JXTA, Chord, Bluetooth, etc. In addition, this working group will propose the minimal required modifications of existing FIPA specifications to extend their reach to P2P Nomadic Agents. Potential application fields for P2P Nomadic Agents are healthcare, industry, offices, home, entertainment, transport/traffic.”

There is also a document from the Review of FIPA Specification Study Group that reviews and critiques the current inventory of 25 specifications.

Bluetooth spy rocks replace pumpkins

January 28th, 2006, by Tim Finin, posted in Gadgets, Humor, Mobile Computing, Pervasive Computing

Anand mentioned the (alleged) British spy rock as a good example of an advance that pervasive computing technology has wrought.

Russia’s state security service has accused British diplomats of spying in Moscow using electronic rocks. It’s an obvious hack, when you think about it: a Bluetooth-enabled PDA in a hollowed-out rock could be used to drop off or pick up heavily encrypted documents from spies as they stroll by. The only problem would be power. Such a Bluetooth rock would be much better than Alger Hiss’s pumpkin patch.

In an infamous spy case from the early days of the Cold War, US State Department official Alger Hiss was accused (by a young Richard Nixon!) of passing documents via rolls of microfilm secreted in a hollowed-out pumpkin on a Maryland farm. But technology marches on, with wireless rocks replacing pumpkins.

The March of Progress
In 1948 Alger Hiss was accused of transferring secrets using microfilm in a hollowed-out pumpkin. In 2006 the British were accused of transferring secrets using a wireless-enabled PDA in a hollowed-out rock.

                   Pumpkin (1948)               Rock (2006)
  cost             low                          medium
  encryption       no                           yes
  durability       low                          high
  models           Jack-o’-lantern, squash      igneous, sedimentary
  vulnerable to    rodents, fungus, kids        bluejacking, spyware
  pluses           organic, biodegradable       tetris, plays mp3s
  negatives        decay, rot                   heavy

Smart Car Knows How to Park Itself and More

December 25th, 2005, by Harry Chen, posted in AI, Pervasive Computing, Technology

German engineers are working on a new smart car that knows how to find empty parking spaces and park itself.

Parkmate, which is expected to be available from 2008, is part of a battery of technology being developed by Siemens VDO, one of the world’s major suppliers of in-car electronics.

Smart doorknob: an exciting RFID application

November 27th, 2005, by Harry Chen, posted in Computing Research, GENERAL, Pervasive Computing, RFID, Technology, Technology Impact, Wearable Computing

Here is what a smart doorknob can do.

“When you approach the door and you’re carrying groceries, it opens and lets you in. This doorknob is so smart, it can let the dog out but it won’t let six dogs come back in.

“It will take FedEx packages and automatically sign for you when you’re not there. If you’re standing by the door, and a phone call comes in, the doorknob can tell you that ‘you’ve got a phone call from your son that I think you should take.’”

This smart doorknob is part of an MIT research project called “Internet of Things” (see IHT). An interesting thing about this system is that it relies on the extensive use of RFID tags. When it comes to RFID technology, some people are very worried, and others are very excited.

UN foresees an Internet of things

November 17th, 2005, by Tim Finin, posted in GENERAL, Mobile Computing, Pervasive Computing, RFID, Semantic Web, Wearable Computing

The Internet of Things is the seventh in the series of “ITU Internet Reports” published since 1997 by the UN’s International Telecommunication Union. The report will be available in mid-November and will include chapters on enabling technologies, the shaping of the market, emerging challenges and implications for the developing world, as well as comprehensive statistical tables covering over 200 economies. Here’s an AP story about today’s announcement at the World Summit on the Information Society in Tunis.

Machines and objects to overtake humans on the Internet: ITU, AP, Nov 17

Machines will take over from humans as the biggest users of the Internet in a brave new world of electronic sensors, smart homes, and tags that track users’ movements and habits, the UN’s telecommunications agency predicted.

In a report entitled “Internet of Things”, the International Telecommunication Union (ITU) outlined the expected next stage in the technological revolution where humans, electronic devices, inanimate objects and databases are linked by a radically transformed Internet.

“It would seem that science fiction is slowly turning into science fact in an ‘Internet of Things’ based on ubiquitous network connectivity,” the report said Thursday, saying objects would take on human characteristics thanks to technological innovation.
