Companies rushing to build online virtual worlds for kids

December 31st, 2007

Today’s New York Times has a story, Web Playgrounds of the Very Young, on the growth of online virtual worlds for young children. Our children live in the same environment we do and learn mostly by watching what we do. So it’s not surprising that any significant new use of the Internet and Web can be adapted into a form that kids will take to.

“Trying to duplicate the success of blockbuster Web sites like Club Penguin and Webkinz, children’s entertainment companies are greatly accelerating efforts to build virtual worlds for children. Media conglomerates in particular think these sites — part online role-playing game and part social scene — can deliver quick growth, help keep movie franchises alive and instill brand loyalty in a generation of new customers.

“Get ready for total inundation,” said Debra Aho Williamson, an analyst at the research firm eMarketer, who estimates that 20 million children will be members of a virtual world by 2011, up from 8.2 million today. (src)

The story gives an example, Disney’s Pixie Hollow, that is online in a rudimentary form and set to launch next summer.

“Behind the virtual world gravy train are fraying traditional business models. As growth engines like television syndication and movie DVD sales sputter or plateau — and the Internet disrupts entertainment distribution in general — Disney, Warner Brothers and Viacom see online games and social networking as a way to keep profits growing.

Still, the long-term appetite for the youth-oriented sites is unclear. Fads have always whipsawed the children’s toy market, and Web sites are no different, analysts warn. Parents could tire of paying the fees, while intense competition threatens to undercut the novelty. There are now at least 10 virtual worlds that involve caring for virtual pets. (src)

There are many concerns, of course — privacy and safety, exploitation of our children, promoting consumerism, raising couch potatoes, etc.

One Laptop Per Child could make Computer Science more relevant

December 29th, 2007

Tomorrow’s (!) Washington Post has a good article, In Peru, a Pint-Size Ticket to Learning on how the project is working out in Arahuay, Peru.

“Doubts about whether poor, rural children really can benefit from quirky little computers evaporate as quickly as the morning dew in this hilltop Andean village, where 50 primary school children got machines from the One Laptop Per Child project six months ago.
These offspring of peasant families whose monthly earnings rarely exceed the cost of one of the $188 laptops — people who can ill afford pencil and paper much less books — can’t get enough of their XO devices.

At breakfast, they’re already powering up the combination library/videocamera/audio recorder/musicmaker/drawing kits. At night, they’re dozing off in front of them — if they’ve managed to keep older siblings from waylaying the coveted machines.” (src)

Computer Science departments in North America and Europe are struggling to increase their enrollments after the decline that started when the dotcom bubble deflated. Most are trying to engage students by showing them that the field is both interesting and socially relevant. I think that the OLPC project and others like it can help do this. It will be motivating for many of our students to target software to this device and produce something for the good of humanity. For example, we can get some of the many students interested in game development to port or write educational games for the XO. The XO laptop is custom hardware running a stripped-down Red Hat Linux with a custom user interface, and XO emulators are available. Since not much is standard, there will probably be a big need for writing device drivers and porting many common open-source packages. Developing software for the XO could be a good project in many core computer science, computer engineering and information systems courses.

We have one of these in the department now and I hope that we can get more.

2007 Pan American Intercollegiate Team Chess Championship update

December 29th, 2007

Chess is UMBC’s most successful sport and many of the top players on the UMBC chess team have been IT majors. This year our department is represented by computer science senior Katerina Rohonyan, who holds a key seat on the UMBC A team. Two of the other three A team seats are held by Information Systems majors. UMBC CS Professor Alan Sherman sent this update from Miami on the 2007 Pan American Intercollegiate Team Chess Championship.

“The Pan-Am Intercollegiate Team Chess Championship is underway at Miami Dade College, with 28 teams from 18 schools. There are six rounds, ending 12-27. In Rd 1, UMBC-A beat Northwestern 4-0. In Rd 2, UMBC-A beat NYU 3-1. This evening, UMBC-A is playing Miami Dade in Rd 3.

From the initial lineups, the five strongest teams are UTD-A, UMBC-A, UTD-B, UT-Brownsville, Miami Dade. Other competitors include Yale, Stanford, U of Toronto, JHU, U of West Indies. After Rd 3, there will be at most four teams with perfect 3-0 scores, possibly UTD-A, UMBC-A, UTD-B, Stanford. It is likely that UMBC-A will play UTD-A tomorrow. With two strong teams, UTD has two chances to beat us.”

You can see the team standings and the games online.

UPDATE 12/29: “UMBC-A just beat UTD-B 3-1 in Rd 4, with wins by IS major Bruci Lopez and CS major Katerina Rohonyan, and draws by GMs Sergey Erenburg and Pawel Blehm. We face UTD-A at 5pm today for the championship, though Rd 6 tomorrow (12-30) still must be played.” — Alan Sherman

UPDATE 12/30: Final Standings: (1) UTD-A 5.5; (2) UMBC-A 4.5; (3) UTD-B 4.5; (4) NYU 4.5.

UPDATE 12/31: Chess team heads to the ‘Final Four’ (Baltimore Sun)

How YouTube scales MySQL for its large databases

December 28th, 2007

Like most research labs, we rely on MySQL whenever we need a database. And like most (I’m guessing here), we often overhear something like the following in our lab — “We really need to replace MySQL with Oracle or DB2 in X so it can handle the load.” But we never get around to it.

Maybe we don’t have to. Check out Scaling MySQL at YouTube, a keynote talk given by YouTube DBA Paul Tuckfield at the 2007 MySQL Conference and now available online.

“In mid 2006, YouTube served approximately 100 million videos in a single day. To maintain a website of that scale, one would imagine YouTube has hundreds of DBAs. But in fact, there are just three people that make it all work. Paul Tuckfield, the MySQL DBA at YouTube shares horror stories about scalability at YouTube and how he coped with them to keep the show going everyday, while learning important lessons along the way. … According to him, the three important reasons for YouTube’s scalability are Python, Memcache and MySQL replication, the last having the most impact. Most people think that the answer to scalability is in upgrading hardware and CPU power. Adding CPUs doesn’t work on its own; wisdom is in getting the maximum amount of RAM for the CPU and then fine tuning.” (src)
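Tuckfield’s recipe — Python front ends, memcache, and MySQL replication — is essentially the classic cache-aside read path. Here is a minimal sketch of that pattern in plain Python, with dicts standing in for the memcache client and a MySQL read replica; all names are illustrative, not YouTube’s actual code.

```python
# Cache-aside sketch: a dict stands in for a memcache client and another
# dict for a MySQL read replica. Purely illustrative names and data.

cache = {}                                      # stand-in for memcache
replica_db = {42: "video metadata for id 42"}   # stand-in for a read replica

def get_video(video_id):
    # 1. Try the cache first, keeping read load off MySQL entirely.
    if video_id in cache:
        return cache[video_id]
    # 2. On a miss, read from a replica (writes would go to the master).
    row = replica_db.get(video_id)
    # 3. Populate the cache so subsequent reads are served from RAM.
    if row is not None:
        cache[video_id] = row
    return row

print(get_video(42))  # first call reads the "replica" and fills the cache
print(get_video(42))  # second call is served from the cache
```

The point of the pattern is Tuckfield’s RAM observation: once the hot working set fits in the cache tier, adding CPUs matters far less than keeping reads away from disk.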

The Semantic Web for noobs

December 27th, 2007

Manu Sporny of Digital Bazaar put together a six-minute introductory presentation about the Semantic Web.


“We’ve been heavily involved with the World Wide Web Consortium over the past several months working with the RDFa task force, chartered by the Semantic Web Deployment group. While the semantic web is many things to many people, it is quite simple at its core. Explaining that simplicity in less than 30 minutes, though, is quite a difficult task. We put together a very simple, playful, video that succinctly explains why the Semantic Web is such a big deal.” (src)

The presentation comes at the Semantic Web from the RDFa direction without mentioning other use cases. But how far can you go in six minutes starting from a full stop? I think that it’s well done and effective.

Immersive gaming and alternate reality games

December 27th, 2007

Alternate reality games, also known as immersive games, blend fantasy and reality in ways that blur the difference. We are not talking about virtual reality technology that requires its users to don special helmets or use kinematic effectors, but games that embed their narratives and interact with players using everyday aspects of the real world — Web sites, email, instant messages, phone calls, letters and billboards.

The genre has largely been used by conceptual artists, advertising agencies and marketeers. Here’s how Dave Szulborski describes it on his This is Not a Game site.

“Alternate Reality Gaming, sometimes also called Immersive Gaming, Viral Marketing, or Interactive Fiction, is a rapidly emerging genre of online gaming and is one of the first true art and entertainment forms that was developed from and exclusively for the Internet. Alternate Reality Games have been wildly successful when used for multimillion dollar marketing campaigns, such as the 2004 game I Love Bees, used by Microsoft to help launch the hugely anticipated X-Box video game Halo 2, and the game that started it all, the Beast, used to promote Steven Spielberg’s science fiction epic A.I.: Artificial Intelligence in 2001.”

Wired has an article, Secret Websites, Coded Messages: The New World of Immersive Games, that describes a viral marketing campaign to promote Nine Inch Nails. Muhammad Saleem blogs about the online viral marketing campaign used to promote the movie The Dark Knight. Finally, ReadWriteWeb has an interesting post, Alternate Reality Games: What Makes or Breaks Them?, that attempts to deconstruct ARGs.

Alternate reality gaming is definitely unusual, but it draws on many of the skills any student of gaming should be developing: the ability to construct a rich narrative, the capability to design an environment that reveals itself as players explore and gradually discover and solve underlying puzzles, and the skills to exploit the latest digital technologies.

Many of them are inherently social games as well, encouraging or even requiring groups of people to collaborate and share information to unravel the story.

Cloud computing with Hadoop

December 26th, 2007

The Web has become the repository of most of the world’s public knowledge. Almost all of it is still bound up in text, images, audio and video, which are easy for people to understand but less accessible for machines. While the computer interpretation of visual and audio information is still challenging, text is within reach. The Web’s infrastructure makes access to all this information trivial, opening up tremendous opportunities to mine text to extract information that can be republished in a more structured representation (e.g., RDF, databases) or used by machine learning systems to discover new knowledge. Current technologies for human language understanding are far from perfect, but they can harvest the low-hanging fruit and are constantly improving. All that’s needed is an Internet connection and cycles — lots of them.

The latest approach to focusing lots of computing cycles on a problem is cloud computing, inspired in part by Google’s successful architecture and MapReduce software infrastructure.
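The MapReduce idea itself is simple enough to sketch in a few lines of plain Python: a toy word count with explicit map, shuffle, and reduce phases. This is a conceptual illustration of the programming model, not Hadoop’s actual Java API.

```python
# Toy word count in the MapReduce style. In a real cluster the map and
# reduce calls run in parallel on many machines; here they run in sequence.
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the values for each key.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cat sat", "the dog sat"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

Because each map call touches only one document and each reduce call only one key’s values, the framework can scatter both phases across a cluster and only the shuffle requires moving data between nodes — which is what makes the model attractive for mining Web-scale text.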

Business Week had an article a few weeks ago, The Two Flavors of Google, that touches on some of the recent developments, including Hadoop and the IBM and Google university cloud computing program. Hadoop is a product of the Apache Lucene project that provides a Java-based software framework for distributing processing over a cluster of processors. The BW article notes

“Cutting, a 44-year-old search veteran, started developing Hadoop 18 months ago while running the nonprofit Nutch Foundation. After he later joined Yahoo, he says, the Hadoop project (named after his son’s stuffed elephant) was just “sitting in the corner.” But in short order, Yahoo saw Hadoop as a tool to enhance the operations of its own search engine and to power its own computing clouds.” (source)

and adds this significant anecdote

“In early November, for example, the tech team at The New York Times rented computing power on Amazon’s cloud and used Hadoop to convert 11 million archived articles, dating back to 1851, to digital and searchable documents. They turned around in a single day a job that otherwise would have taken months.” (source)

The NYT’s Derek Gottfrid described the process in some detail in a post on the NYT Open blog, Self-service, Prorated Super Computing Fun!.

The Hadoop Quickstart page describes how to run it on a single node, enabling any high school geek who knows Java and has a laptop to try it out before finding (or renting) time on a cluster. This is just what we need for several upcoming projects and I am looking forward to trying it out soon. One requires processing the 1M documents in the TREC-8 collection and another the 10K documents in the ACE 2008 collection.

Free draft of book on Logic for Philosophy

December 23rd, 2007

NYU Professor Ted Sider has made a draft of his new book, Logic for Philosophy, available on the Web. He describes it this way:

“This will be a textbook for a “logic literacy” course. It was designed for beginning graduate students in philosophy, but it is also suitable for advanced undergraduate courses. The goal is to introduce students to the logic they need to know in order to read contemporary philosophy journal articles. It emphasizes breadth rather than depth. For example, it discusses modal logic and counterfactuals, but does not prove the central metalogical results for predicate logic (completeness, undecidability, etc.) It will be published by Oxford University Press.”

This looks like a good resource for many AI students who need a good overview of logic and don’t want or need to delve into the proofs. Spotted on LTU.

Eniac Programmers Project documents first women computer programmers

December 23rd, 2007

The Eniac Programmers Project is working to produce the first feature-length documentary on the women who formed the team of programmers for the Eniac, considered by many to be the world’s first computer.

“During WWII, the world’s first software developers were recruited by the Army to program the quirky ENIAC, the first all-electronic programmable computer. These six pioneers invented many of the concepts of programming, and remarkably, in a high-tech version of Rosie the Riveter, they were all WOMEN.”

The effort is led by executive producer Kathy Kleiman and co-executive producer Claudia Morrell, who is the director of UMBC’s Center for Women and Information Technology. The documentary, which has the working title Invisible Computers: The Untold Story of the ENIAC Programmers, will focus on the six women who were Eniac’s first programmers in the 1940s and their later careers in computing. ABC News had a recent article on the project, First Computer Programmers Inspire Documentary.

Lies, Damn Lies, and (the statistics on) the Number of STEM grads

December 18th, 2007

I confess to being thoroughly confused. The revealed wisdom in US higher ed has been that we are simply not producing enough grads in the STEM area, and we need to do more to attract folks to sciences/engineering/IT etc. The National Academy of Sciences weighed in on this as well. We certainly keep hearing that here in our department, with exhortations to increase enrollment.

However, the Urban Institute folks (Lowell and Salzman) claim that not only is the US not lagging behind other nations in the quality of STEM education at the school level, it has in fact overproduced STEM grads (three times as many as the net growth in jobs) in the period from 1985 to 2000. So not enough or too many STEM grads — which is it?

This of course further muddies the immigration/H1B debates. The IT industry claims that there is a shortage of IT grads, and so it needs to be able to hire more from overseas. The “Immigration Restrictionists” of various flavors, and organizations like the Programmers Guild, argue that this is just part of a plan by corporations to keep wages in the IT sector depressed. Many of them have blogged about this new Urban Institute study, offering it as proof that H1B-type programs can be scrapped.

However, if the primary push behind lobbying for increased skilled immigration/H1B workers were depressing (or at least not increasing) wages, then a factor-of-three overproduction within the US should take care of this, right? In other words, all the folks in STEM fields who weren’t getting jobs in their area would sign up for short MCSE/CCNA-type courses (or AAs in IT) and then get hired. I presume Bill Gates and others don’t like foreigners so much that they would go through and pay for the H1B/green card process when they could achieve the same wage-depressing effect by hiring US citizens retrained in IT from the oversupply in the overall STEM areas. On the other hand, a recent statement by Fed chief Bernanke doing the rounds of the blogosphere suggests that a failure of STEM wages to increase would indicate that there wasn’t a shortage in the area.

Net result: I am not sure what to believe anymore. At admissions events, I dutifully present data from CRA (which in turn got it from BLS) that seems to indicate that within the wider STEM areas, IT (strictly, Mathematical and Computer Sciences) is the subfield where the total production of degrees would fall short of the projected job openings, even factoring in all the outsourcing.

Google Maps adds location Information

December 18th, 2007

I recently bought a GPS (Garmin Mobile 10) that works with my WM5 Smartphone. In the process of trying to install the Garmin Mobile XT application (which was very problematic and a huge pain, but I digress ….), I ended up uninstalling Google Maps.

When I went to download and reinstall it, though, I noticed that they have a new beta feature (My Location) that shows you where you are. It can use either a GPS or cell tower information. Basically, it sees which cell tower your phone is registered with (and what signals it is seeing from others), and uses this to estimate where you are to within 1,000 meters.
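One simple way to turn such tower observations into a position fix is a signal-strength-weighted centroid of the known tower locations. Google has not published its actual method, so the sketch below is purely hypothetical, with made-up tower coordinates and weights.

```python
# Hypothetical coarse positioning: weight each known tower location by the
# observed signal strength and take the centroid. Tower data is fictional;
# this is not Google's (unpublished) algorithm.

def estimate_location(towers):
    """towers: list of (lat, lon, signal_strength) tuples."""
    total = sum(s for _, _, s in towers)
    lat = sum(la * s for la, _, s in towers) / total
    lon = sum(lo * s for _, lo, s in towers) / total
    return lat, lon

# Three towers near Baltimore with invented signal strengths; the first is
# the tower the phone is registered with, so it gets the largest weight.
towers = [
    (39.25, -76.71, 3.0),
    (39.26, -76.70, 1.0),
    (39.24, -76.72, 1.0),
]
lat, lon = estimate_location(towers)
print(round(lat, 3), round(lon, 3))  # 39.25 -76.71
```

With only one visible tower this degenerates to “you are near that tower,” which is roughly consistent with the kilometer-scale accuracy My Location advertises.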

This is interesting, because we did it the same way back when there used to be AMPS/CDPD and Palm IIIs and Vs with cellular modems. Our project was called Agents2Go, and we published a paper about this in the MCommerce workshop of Mobicom in ’01. I remember that Muthu et al. from AT&T had a similar paper in MobiDE that year as well.

The problem at that time was that there was no publicly accessible database of all cell tower locations. Also, we heard informally from at least one telco that while doing this for research was OK, if anyone ever tried to make money from it they would want to be part of the loop. I guess Google has found a way to work with the various telcos? Or maybe in the interim cell tower ids and locations have become public knowledge?

Of course Google Maps also works with GPS, except that it refuses to work with my Garmin. I’ve tried all the tricks that a search on Google will reveal (mainly, setting the serial port used by Bluetooth to talk to the GPS), but to no avail 🙁

Google Reader clips broken — GRC_p undefined?

December 18th, 2007

We use Google Reader clips as a simple way to share links on a number of our web sites. As I browse feeds and see a post that’s relevant to one of our blogs, UMBC GAIM for example, I can tag it with for-gaim and the link will show up in a sidebar on the GAIM site.

Today I noticed that none of this is working. Checking the JavaScript console, I see that the browser is complaining that GRC_p is not defined, so it seems like an error in Google’s JavaScript. I’ve not seen anything on the web about this (yet) except for some old posts from the summer. Does anyone know what’s going on?