UMBC ebiquity

Archive for the 'Technology Impact' Category

Microsoft HoloLens: Was it imagined in the past?

January 27th, 2015, by Prajit Kumar Das, posted in Microsoft, Pervasive Computing, Privacy, Technology, Technology Impact, Wearable Computing

In this post we discuss some of the user interface (UI) advances we are seeing at the moment. One such development was revealed at a recent Microsoft media event announcing the Microsoft HoloLens, a computing platform that aims to connect the digital and physical worlds seamlessly, much like experiences depicted in certain movies of the past.

It is interesting to note that the design of the HoloLens device looks so similar to something we have seen before.

Even the vision of holographic computing and users interacting with such interfaces isn’t a new one. The 2002 movie “The First $20 Million Is Always the Hardest” was possibly the first time we saw what such a futuristic technology might look like.

How did we get here? A brief discussion of UIs…

User interfaces have always been an important aspect of computers. In their early days, computers had monochromatic (or at most duo-chromatic) screens. A user would type commands at a prompt and the computer would execute them. Since commands were entered as a single line or a series of lines, this interface was called the command-line interface (CLI).

Command Line based UI

Such an interface was not particularly intuitive, since you had to know which commands would accomplish a given task. Admittedly, a certain group of individuals, i.e. geeks and some computer programmers like me, prefer such an interface owing to its clean and distraction-free nature. However, owing to the learning curve of CLIs, researchers at the Stanford Research Institute and the Xerox PARC research center invented a new kind of user interface, the graphical user interface (GUI). There were a few variations of the GUI, for example the point-and-click type, also known as the WIMP (windows, icons, menus, pointer) UI, created at Xerox PARC and made popular by Apple through its Macintosh operating systems,

Apple’s Macintosh UI

and also adopted by Microsoft in its Windows operating systems.

Microsoft’s Windows UI

Some early systems even offered textual user interfaces, with programs whose menus could be navigated using a keyboard instead of a mouse.

Early textual menu based UI

Eventually, new avenues were created for UI research, continuing onwards from textual interfaces to WIMP interfaces to the World Wide Web, where objects on the web became entities accessible through a Uniform Resource Identifier (URI). Such entities could even have semantics associated with them (as envisioned by the Semantic Web). However, with the advent of mobile smartphones we saw a completely different class of user interfaces: touch-based user interfaces and their more evolved cousins, multi-touch systems, which allowed gesture-based interaction.

Touch and gesture based UI

This was the first time in computing history that humans were able to interact directly with an object on their device using their hands instead of a separate input device. The experience was immersive, yet these objects had not entered the real world. We were on the precipice of a revolution in computing.

This revolution was the mainstream launch of wearable technology, virtual/augmented reality, and optical head-mounted display devices, with the creation of the Oculus Rift, Google Glass, and EyeTap, among others. These devices allowed voice input and created a virtual or augmented reality world for their users. Microsoft, too, was working on gesture-based interaction with the Kinect device and on research in the natural user interface (NUI) field. A couple of interesting works from this revolution are worth a look and are listed below.

This talk by John Underkoffler demos a UI like the one we saw in the movie Minority Report. He talks about the spatial nature of how humans interact with their world, and how computers might serve us better if we could interact with them in the same spatial way.

Here Pranav Mistry, currently Head of the Think Tank Team and Director of Research at Samsung Research America, speaks about SixthSense, a new paradigm in computing that allows interaction between the real world and the digital world. All of these works were knocking on the door of the computer we saw in the 2002 movie mentioned earlier: a real-life holographic computer. Enter Microsoft HoloLens!

What is Microsoft HoloLens?

Microsoft HoloLens

Microsoft HoloLens is an augmented reality computing platform. According to early reviews, the device takes a step beyond current work by adding virtual holograms to the world around its user, rather than placing the user in a completely virtual environment. The device launches a new software development platform, holographic apps, and creates scope for hardware research and development, since it requires new components like the Holographic Processing Unit (HPU). Visualization, sharing of ideas, and interaction with the real world can now be done as envisioned in Pranav Mistry's TED talk; a more natural way of interacting with digital content, as envisioned in the works above, is now a reality. The device tracks its user's movements in an environment, detects what the person is looking at, and transforms the visual field by overlaying 3D objects on top of it.

What kind of applications can we expect to be developed for HoloLens?

When the touch UI became a reality, developers had to change the way they built software: direct object interactions like those shown above had to be programmed into their applications. Apps for HoloLens will similarly need to handle interactions involving voice commands and gesture recognition (a hypothetical sketch of such interaction handling appears after the list below). The common ideas, and their corresponding research implications, that come to mind include:

  • Looking up a grocery list when you enter the grocery store (context aware)

    HoloLens Environment overlaid with lists

  • Recording important events automatically (context aware computing)
  • Recognizing people in a party (social media and privacy)
  • Taking down notes, writing emails using voice commands (natural language understanding)
  • Searching for “stuff” around us (nlp, data analytics, semantic web, context aware computing)
  • Playing 3D games (animation and graphics)

    HoloLens Environment overlaid with 3D Games

  • Making sure your battery doesn’t run out (systems, hardware)
  • Virtual work environments (systems) 

    Virtual Work Environments through HoloLens

  • Teaching virtual classrooms (systems)
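
The HoloLens SDK was not yet public when this was written, so the following is only a hypothetical sketch, written in Python for illustration, of how an app might route recognized voice commands together with the user's current context; every class, function, and string here is made up rather than taken from any real HoloLens API.

    # Hypothetical sketch only: routing voice commands with context, not a real HoloLens API.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Context:
        location: str      # e.g. "grocery store", as detected by the device
        gaze_target: str   # whatever the user is currently looking at

    class HoloApp:
        def __init__(self) -> None:
            self.voice_handlers: Dict[str, Callable[[Context], str]] = {}

        def on_voice(self, command: str, handler: Callable[[Context], str]) -> None:
            # Register a handler for a recognized voice command.
            self.voice_handlers[command] = handler

        def dispatch(self, command: str, ctx: Context) -> str:
            handler = self.voice_handlers.get(command)
            return handler(ctx) if handler else "Sorry, I did not understand that."

    def show_grocery_list(ctx: Context) -> str:
        # Context-aware behavior: only overlay the list inside a grocery store.
        if ctx.location == "grocery store":
            return "Overlaying grocery list hologram"
        return "You are not in a grocery store"

    app = HoloApp()
    app.on_voice("show my list", show_grocery_list)
    print(app.dispatch("show my list", Context("grocery store", "shelf")))

The point is simply that, as with touch, developers will have to treat voice, gaze, and context as first-class inputs in their application logic.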

Why or how could it fail?

Are there any obvious pitfalls we are not thinking about? We can rest assured that researchers are already looking at ways this venture could fail, and for Microsoft's own good we can be certain they have a list of ways they think this might go wrong and are working on fixing any flaws they find. However, as researchers in the mobile field with a bit of experience with Google Glass, we can try to list some possible pitfalls of an AR/VR device. The HoloLens, being a tetherless augmented virtual reality (AVR) device, could suffer from some of these pitfalls too. The reader should understand that we are not claiming any of the following is scientifically proven; these are merely empirical observations.

  • The first thing that worried us while using Google Glass was that it would sometimes cause headaches after a couple of hours of use. We have not studied its effects on other users, so this is an observation from personal experience. One concern, therefore, is the health impact of prolonged use of an AVR device.
  • The second thing we noticed with Google Glass was how quickly the device heated up. We know from experience that computers get hot, for example when we play a game or run a lot of complex computations. An AVR device used for playing games will most probably get hot too; Google Glass certainly did after recording a video. Here we are concerned about heat dissipation and its health impact on the user.
  • The third observation was that Google Glass showed significant sluggishness when performing computation-heavy tasks. Will the HoloLens be able to keep up with all the computation needed for, say, playing a 3D game?
  • The fourth concern is battery capacity. The HoloLens is advertised as a device with no wires, cords, or tethers. Anyone who has ever used a smartphone knows the problem of a battery running out within a day, or even half a day. Will the HoloLens hold a charge for long, or will it require constant charging?
  • The fifth concern is privacy. Google Glass faced quite a few privacy concerns because it can readily take pictures using a simple voice command or even a non-verbal command like a ‘wink’; we have worked on this issue as part of our research project FaceBlock. Will the HoloLens raise similar concerns, since it too has front-facing cameras that capture the user's environment while projecting an augmented virtual world to the user?

The above lists of possible issues and probable application areas are not exhaustive in any way. There will be numerous other scenarios and ways to work with this new computing platform, and probably a multitude of issues with such a new and revolutionary one. The hybrid of augmented and virtual reality is only now taking its first small steps. With the invention of devices like the Microsoft HoloLens, Google Glass, Oculus Rift, and EyeTap, we can look forward to an exciting period in the future of computing for augmented virtual reality.

Researchers install PAC-MAN on Sequoia voting machine w/o breaking seals

August 23rd, 2010, by Tim Finin, posted in Games, Security, Social media, Technology Impact

Here’s a new one for the DIY movement.

Security researchers J. Alex Halderman and Ariel Feldman demonstrated PAC-MAN running on a Sequoia voting machine last week at the EVT/WOTE Workshop held at the USENIX Security conference in DC.

Amazingly, they were able to install the game on a Sequoia AVC Edge touch-screen DRE (direct-recording electronic) voting machine without breaking the original tamper-evident seals.

Here’s how they describe what they did on Halderman’s web site:

What is the Sequoia AVC Edge?

It’s a touch-screen DRE (direct-recording electronic) voting machine. Like all DREs, it stores votes in a computer memory. In 2008, the AVC Edge was used in 161 jurisdictions with almost 9 million registered voters, including large parts of Louisiana, Missouri, Nevada, and Virginia, according to Verified Voting.

What’s inside the AVC Edge?

It has a 486 SLE processor and 32 MB of RAM—similar specs to a 20-year-old PC. The election software is stored on an internal CompactFlash memory card. Modifying it is as simple as removing the card and inserting it into a PC.

Wouldn’t seals expose any tampering?

We received the machine with the original tamper-evident seals intact. The software can be replaced without breaking any of these seals, simply by removing screws and opening the case.

How did you reprogram the machine?

The original election software used the psOS+ embedded operating system. We reformatted the memory card to boot DOS instead. (Update: Yes, it can also run Linux.) Challenges included remembering how to write a config.sys file and getting software to run without logical block addressing or a math coprocessor. The entire process took three afternoons.

You can find out more in the presentation slides from the EVT workshop, “Practical AVC-Edge CompactFlash Modifications can Amuse Nerds.” They sum up their study with the following conclusion:

“In conclusion, we feel our work represents the future of DREs. Now that we know how bad their security is, thousands of DREs will be decommissioned and sold by states over the next several years. Filling our landfills with these machines would be a terrible waste. Fortunately, they can be recycled as arcade machines, providing countless hours of amusement in the basements of the nations’ nerds.”

RAEng report on Social, legal and ethical issues of autonomous systems

August 21st, 2009, by Tim Finin, posted in Agents, AI, Semantic Web, Social media, Technology Impact


The Royal Academy of Engineering has released a report on the social, legal and ethical issues involving autonomous systems — systems that are adaptive, learn and can make decisions without the intervention or supervision of a human.

The report, Autonomous Systems: Social, Legal and Ethical Issues (pdf), was based on a roundtable discussion “from a wide range of experts, looking at the areas where autonomous systems are most likely to emerge first, and discussing the broad ethical issues surrounding their uptake.”

While autonomous systems have broad applicability, the report focuses on two areas: transportation (e.g. autonomous road vehicles) and personal care (e.g., smart homes).

“Autonomous systems, such as fully robotic vehicles that are “driverless” or artificial companions that can provide practical and emotional support to isolated people, have a level of self-determination and decision making ability with the capacity to learn from past performance. Autonomous systems do not experience emotional reactions and can therefore perform better than humans in tasks that are dull, risky or stressful. However they bring with them a new set of ethical problems. What if unpredicted behaviour causes harm? If an unmanned vehicle is involved in an accident, who is responsible – the driver or the systems engineer? Autonomous vehicles could provide benefits for road transport with reduced congestion and safety improvements but there is a lack of a suitable legal framework to address issues such as insurance and driver responsibility.

The technologies for smart homes and patient monitoring are already in existence and provide many benefits to older people, such as allowing them to remain in their own home when recovering from an illness, but they could also lead to isolation from family and friends. Some users may be unfamiliar with the technologies and be unable to give consent to their use.”

The RAEng report recommends “engaging early in public consultation” and working to establish “appropriate regulation and governance so that controls are put in place to guide the development of these systems”.

rdf:SeeAlso Autonomous tech ‘requires debate’; Scientists ponder rules and ethics of robo helpers; Robot cats could care for older Britons.

(via Mike Wooldridge)

I want the iPhone NG, but …

June 12th, 2008, by Anupam Joshi, posted in Apple, Gadgets, Mobile Computing, Technology Impact

I admit — I was following along on Engadget’s liveblog of Jobs’ WWDC keynote, looking for iPhone news. Most of what he said, though, was fairly old news to those who had been reading the tech blogs for the last month or so — 3G and aGPS, besides of course the already announced software upgrades. The big thing was the $199 price, which seemed to come out of the blue. I figured I would go out and get one pretty much as soon as they were available, without having to stand in a line. The teeny voice in my head, however, was expressing skepticism, and it was eventually proven correct. The $199 cost factors in a subsidy from AT&T, and the phone now apparently needs to be activated when bought. No more buying it without AT&T service and then getting it unlocked.

I wonder why that is, though. The big claim is that the revenue model has changed, and so Apple no longer gets an ongoing cut of the revenue from AT&T. If so, why not also sell unlocked versions of the phone sans subsidy, like every other manufacturer? How will this work in other countries where handset subsidies are not common? Apparently AirTel in India is the preferred partner and will launch this phone “soon”. So will AirTel sell it for more than $199, but unlocked? Maybe I can get one from them? Or wait for the Xperia X1? Or for the TouchPro?

BusinessWeek ranks 50 most innovative companies

April 19th, 2008, by Tim Finin, posted in Computing Research, Technology Impact

BusinessWeek magazine has a special set of articles on innovation in business in its April 28 issue. As in the past, they identified and ranked the 50 most innovative companies worldwide. The companies, in ranked order, are as follows:

01. Apple
02. Google
03. Toyota Motor
04. General Electric
05. Microsoft
06. Tata Group
07. Nintendo
08. Procter & Gamble
09. Sony
10. Nokia
11. Amazon.com
12. IBM
13. Research In Motion
14. BMW
15. Hewlett-Packard
16. Honda Motor
17. Walt Disney
18. General Motors
19. Reliance Industries
20. Boeing
21. Goldman Sachs Group
22. 3M
23. Wal-Mart Stores
24. Target
25. Facebook
26. Samsung Electronics
27. AT&T
28. Virgin Group
29. Audi
30. McDonald’s
31. Daimler
32. Starbucks
33. eBay
34. Verizon Communications
35. Cisco Systems
36. ING Groep
37. Singapore Airlines
38. Siemens
39. Costco Wholesale
40. HSBC
41. Bank Of America
42. Exxon Mobil
43. News Corp.
44. BP
45. Nike
46. Dell
47. Vodafone Group
48. Intel
49. Southwest Airlines
50. American Express

It’s gratifying to see how many of these companies are based on computing and/or communications, or have a business that largely depends on exploiting the latest computing and communications technologies. I think it is appropriate to look at IT and communications as a group, even though they are traditionally viewed as different business sectors, because the innovations in each tend to be in areas where they overlap.

The distribution of the countries in which these 50 companies are based is interesting. Of course, many of these are truly multinational corporations.

Countries where the 50 innovative companies are based

Software-Defined Radio Could Unify Wireless World

February 5th, 2006, by Amit, posted in Mobile Computing, Technology Impact

Technicians in Ireland are testing a device capable of skipping between incompatible wireless standards by tweaking its underlying code. A report from New Scientist states:

The device can impersonate a multitude of different wireless devices since it uses reconfigurable software to carry out the tasks normally performed by static hardware… The technology promises to let future gadgets jump between frequencies and standards that currently conflict. A cellphone could, for example, automatically detect and jump to a much faster Wi-Fi network when in a local hotspot.

Korea’s Preschoolers Use Internet Daily

February 3rd, 2006, by Tim Finin, posted in GENERAL, Technology, Technology Impact, Web

I don’t know if this report is good news or bad news and, if either, who it is good or bad for.

“The Information and Communication Ministry conducted the survey together with the National Internet Development Agency of Korea. It found that Internet use among five-year-olds surveyed was 64 percent, among four-year-olds 47 percent and among three-year-olds 34 percent. Young children on average started using the Internet at 3.2 years of age and spent on average 4.8 hours a week online. Some 93 percent of the diminutive respondents used the Internet to play games or access music, but 39 percent used the web for “study,” the survey finds.”

Maybe it’s bad for Korean preschoolers who should be playing with each other or with their Legos. Or maybe it’s bad for slothful preschoolers everywhere else who will end up working for the Koreans when they grow up. I guess it’s good for the Internet, unless the preschoolers all start blogs.

[spotted on Smart Mobs]

Ping-O-Matic temporarily down

January 13th, 2006, by Pranam Kolari, posted in Blogging, Technology, Technology Impact, Web

Ping-O-Matic, a great tool and arguably the most popular update ping service, is currently down. Matt blogs about a complete revamp. Apparently their current system was accepting pings on just one box! Technorati is helping them out.

Most of us don’t even bother to check which update ping services our blog software notifies automatically. Now, is this good enough motivation to notify additional update ping services? If yes, who stands to gain? Given the recent valuations in this space, even a short downtime of Ping-O-Matic might well create another multi-million dollar asset.
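
For readers who have never looked under the hood, an update ping is just a tiny XML-RPC call from the blog to the ping service. Below is a minimal Python sketch; the weblogUpdates.ping method and the rpc.pingomatic.com endpoint are the ones commonly documented for Ping-O-Matic, and the blog name and URL are placeholders.

    # Minimal sketch of an update ping; blog name and URL are illustrative placeholders.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/")
    # weblogUpdates.ping(blog_name, blog_url) is the standard update-ping call.
    response = server.weblogUpdates.ping("Example Blog", "http://example.org/blog/")
    print(response)  # typically a struct such as {'flerror': False, 'message': '...'}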

See also: “Attention WordPress users!!!” from Nick Starr, “Ping-o-Matic is offline” from Jeff Smith, and “Pingomatic is gone” from Alan Fraser.

Smart doorknob: an exciting RFID application

November 27th, 2005, by Harry Chen, posted in Computing Research, GENERAL, Pervasive Computing, RFID, Technology, Technology Impact, Wearable Computing

Here is what a smart doorknob can do.

“When you approach the door and you’re carrying groceries, it opens and lets you in. This doorknob is so smart, it can let the dog out but it won’t let six dogs come back in.

It will take FedEx packages and automatically sign for you when you’re not there. If you’re standing by the door, and a phone call comes in, the doorknob can tell you that ‘you’ve got a phone call from your son that I think you should take.’”

This smart doorknob is part of an MIT research project called the “Internet of Things” (see IHT). An interesting thing about this system is that it relies on extensive use of RFID tags. When it comes to RFID technology, some people are very worried, while others are very excited.

The next Big Thing, or is it Web 2.0?

November 3rd, 2005, by Anand, posted in GENERAL, Semantic Web, Technology Impact, Technology Policy, Web

Open source software has grown steadily in popularity and dominance, challenging the likes of Microsoft, Oracle, and IBM. Both industry and academia have adopted open source software like Linux, OpenBSD, Apache, MySQL, and OpenOffice to replace or supplant commercial products such as Windows XP, WebSphere, Oracle, DB2, and MS Office. This dominance will likely continue to grow in the coming years.

Giants like Google, Amazon, eTrade, and eBay use open source software to run their web businesses and services. Instead of paying royalties or license fees, they get the source code itself, which they closely scrutinize and safety test before deploying. Thus, these companies no longer depend on licensed proprietary solutions.

Google Ads and the roaring profits Google made in its last quarter pushed Google’s stock up by around 50 dollars in less than a month. Online targeted advertising has proven more effective, and more companies are now investing in online advertising like Google Ads.

Open source software projects and their “profitability” have often been questioned and even dismissed as a fool’s errand. However, big shots like Microsoft, IBM, and Oracle, amongst others, now seem to have formulated strategies to cope with open source. Venture capitalists have been pouring money into open source projects — a sign that this is seen as the next big thing. Companies dismissing open source, or failing to adapt to it, risk losing their user base and hurting their long-term survivability.

Microsoft: Shared Source, Windows Live, Office Live

IBM: Open Source Acquisitions, Adoption of Open Source (support model)

Oracle: Free version of the Oracle database

Everyone wants a piece of the online-advertising pie. With the growth of high-speed Internet access, people are coming to expect free services on the Internet. The success of Xbox Live is a sign of things to come.

The availability of cheap or free replacements for most popular commercial products will drive further declines in revenue for commercial products.

Software companies seem to be realizing that in the coming decade, online software services — search, ads, trading, gaming, and so on — will be a major source of revenue. “Free” Internet browsers will be the gateways to this online world, while locally installed PC programs will play a declining role.

Want Microsoft Source code?

October 21st, 2005, by Anand, posted in GENERAL, Programming, Technology Impact, Technology Policy

Microsoft Shared Source Initiative

Microsoft has announced three new source code licenses under its Shared Source Initiative, which it says represent a broad spectrum of approaches needed to facilitate an ever-growing, rich set of technologies for release.

The three licenses are:

• Microsoft Permissive License (Ms-PL) — The Ms-PL is the least restrictive of the Microsoft source code licenses. It allows you to view, modify, and redistribute the source code for either commercial or non-commercial purposes. Under the Ms-PL, you may change the source code and share it with others. You may also charge a licensing fee for your modified work if you wish. This license is most commonly used for developer tools, applications, and components.

• Microsoft Community License (Ms-CL) — The Ms-CL is a license that is best used for collaborative development projects. This type of license is commonly referred to as a reciprocal source code license and carries specific requirements if you choose to combine Ms-CL code with your own code. The Ms-CL allows for both non-commercial and commercial modification and redistribution of licensed software and carries a per-file reciprocal term.

• Microsoft Reference License (Ms-RL) — The Ms-RL is a reference-only license that allows licensees to view source code in order to gain a deeper understanding of the inner workings of a Microsoft technology. It does not allow for modification or redistribution. This license is used primarily for technologies such as development libraries.

New RDF & OWL Editor from the Maker of XMLSpy

October 6th, 2005, by Harry Chen, posted in Ontologies, Semantic Web, Technology Impact

Altova SemanticWorks 2006
Altova, the maker of the popular XML editor XMLSpy, announced the release of Altova SemanticWorks.

Altova SemanticWorks™ 2006 is the ground-breaking visual RDF/OWL editor from the creators of XMLSpy. Visually design Semantic Web instance documents, vocabularies, and ontologies then output them in either RDF/XML or N-triples formats. SemanticWorks™ 2006 makes the job easy with tabs for instances, properties, classes, etc., context-sensitive entry helpers, and automatic format checking. It is the sensible way to put the Semantic Web to work for you.

This is a good sign for the Semantic Web research and development community: semantics is getting commercial attention. I remember seeing a similar pattern back in the old days, when XML was a new term that not everyone knew. Altova released XMLSpy at a time when many people were skeptical about the use of XML. Could this mean that one or two years from now, RDF and OWL will be the key languages for building smart applications? I surely hope so.
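
For readers who have not seen the output formats mentioned in the announcement, here is a minimal sketch using the open-source Python library rdflib (not SemanticWorks itself): it builds a two-triple graph and serializes it as both RDF/XML and N-Triples. The example.org names are placeholders.

    # Minimal rdflib sketch; the vocabulary and resource names are made up for illustration.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")

    g = Graph()
    tool = URIRef("http://example.org/SemanticWorks")
    g.add((tool, RDF.type, EX.OntologyEditor))
    g.add((tool, EX.vendor, Literal("Altova")))

    # The same graph written out in the two formats mentioned above.
    print(g.serialize(format="xml"))  # RDF/XML
    print(g.serialize(format="nt"))   # N-Triples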
