
IBM Research projects on social computing

March 13th, 2009, by Tim Finin, posted in Social media

Charles Cooper has an article on CNET on How IBM’s sprucing up its ‘social’ side. He attended an IBM event (“Smarter Web Open House”) in which researchers from IBM offered “a peek at a cross-section of collaborative Web technologies–mostly in early beta stages and likely to need a lot more fine-tuning in the months ahead.” He writes that

IBM is putting serious effort into finding ways to use aspects of social computing for more collaboration among enterprise users. The big idea here being to make it easier for businesses to share corporate data in more useful fashion.

(h/t ACM TechNews)

“Our perspective comes from business,” said Rod Smith, a computer scientist who is in charge of emerging Internet technologies at IBM. “There are many ecosystems inside the enterprise and we’re seeing how they want to expand those connections. So, we’re looking at how to do that.”

The article describes several interesting projects, including a Web mashup that creates a virtual medical room where physicians can review and comment on test data.

On Larrabee and how multi-core computers will change CS education

August 7th, 2008, by Anupam Joshi, posted in CS, GENERAL, High performance computing, MC2, Multicore Computation Center, Programming

My colleague Marc Olano recently blogged about the new Larrabee chip from Intel, which will be described in a SIGGRAPH paper in a session he is chairing. This chip, with multiple old Pentium-type cores running at 1GHz, seems a logical culmination of the recent multi/many-core trend. IBM’s plans with the Cell/BE, and perhaps with the newer generation Power chips, are also headed in a similar direction. Short of material scientists doing some magic with high-k dielectrics or airgaps or CNFETs or whatever, the trend is moving away from a single CPU with more transistors running ever faster and toward multicore chips that are not clocked very fast. There’s a good reason for it (heat), as anyone who has owned a high-end laptop and actually put it on their lap can testify. Further down the road, even more complex parallel architectures are proposed, with MCMs on chip connecting optically, and perhaps even memory stacked on top of the CPU layer talking optically back and forth! In other words, a few years down the road, the default box on which a system builder will write code will be something other than a single-core CPU. Bernie Meyerson from IBM discusses such issues in his talks; I can’t lay my hands on a publicly available PowerPoint, but some of the ideas are discussed in a recent interview.

Do these developments mean that we should be rethinking Programming 1 and 2, especially for CS majors? Do students now need to think about parallel or multi-threaded programming from day one? Can that be done without first doing standard imperative programming? Given the less-than-ideal state of high school CS education, is it realistic to expect that students will get Programming 1 (and maybe 2) in high school? In our department, we’re offering a class on programming the Cell/BE and a course related to GPU programming, but those are typically meant for seniors. How about courses further upstream? Should data structures and algorithms change, perhaps introducing concepts like transactional memory? Should OS change, talking much more about virtualization, and redoing virtual memory when ample NVRAM is available and accessible from a core?
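To make the “think parallel from day one” question concrete, here is a minimal sketch in Python (the language and the toy problem are my choices, purely for illustration, not from any actual curriculum): the same sum-of-squares exercise written once as the classic sequential loop and once fanned out across cores.

```python
from multiprocessing import Pool

def slow_square(n):
    """A stand-in for any CPU-bound per-item computation."""
    return n * n

def sum_squares_sequential(numbers):
    # The classic Programming 1 loop: one core, one item at a time.
    total = 0
    for n in numbers:
        total += slow_square(n)
    return total

def sum_squares_parallel(numbers, workers=4):
    # The multicore version: the same map step fanned out across
    # worker processes, followed by a sequential reduce of the results.
    with Pool(processes=workers) as pool:
        return sum(pool.map(slow_square, numbers))

if __name__ == "__main__":
    data = list(range(100_000))
    assert sum_squares_sequential(data) == sum_squares_parallel(data)
```

The pedagogical point of the pairing is that the parallel version forces students to see the computation as a map plus a reduce, rather than as a single mutable accumulator.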

Dell trying to trademark cloud computing

August 3rd, 2008, by Tim Finin, posted in cloud computing, Multicore Computation Center, Semantic Web, Social media

Cloud computing is a hot topic this year, with IBM, Microsoft, Google, Yahoo, Intel, HP and Amazon all offering, using or developing high-end computing services typically described as “cloud computing”. We’ve started using it in our lab, like many research groups, via the Hadoop software framework and Amazon’s Elastic Compute Cloud services.
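For readers unfamiliar with the model, Hadoop-style jobs often reduce to a pair of small scripts. Here is a minimal, hypothetical word-count example in the Hadoop Streaming style, where the mapper and reducer talk to the framework over stdin/stdout; it illustrates the programming model and is not code from our lab.

```python
import sys

def mapper():
    # Hadoop Streaming feeds input splits line by line on stdin;
    # we emit one tab-separated (word, 1) pair per token.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # The framework sorts mapper output by key, so all counts for
    # a given word arrive consecutively and can be summed in one pass.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    # Run as: wordcount.py map   or   wordcount.py reduce
    mapper() if sys.argv[1] == "map" else reducer()
```

The same two scripts can be tested locally with a shell pipeline (cat input | wordcount.py map | sort | wordcount.py reduce) before being handed to a cluster.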

Bill Poser notes in a post (Trademark Insanity) on Language Log that Dell has applied for a trademark on the term “cloud computing”.

It’s bad enough that we have to deal with struggles over the use of trademarks that have become generic terms, like “Xerox” and “Coke”, and trademarks that were already generic terms among specialists, such as “Windows”, but a new low in trademarking has been reached by the joint efforts of Dell and the US Patent and Trademark Office. Cyndy Aleo-Carreira reports that Dell has applied for a trademark on the term “cloud computing”. The opposition period has already passed and a notice of allowance has been issued. That means that it is very likely that the application will soon receive final approval.

It’s clear, at least to me, that ‘cloud computing’ has become a generic term in general use for “data centers and mega-scale computing environments” that make it easy to dynamically focus a large number of computers on a computing task. It would be a shame to have one company claim it as a trademark. On Wikipedia, a redirect for the Cloud Computing page was created several weeks before Dell’s USPTO application. A Google search produces many uses of cloud computing in news articles before 2007, although it’s clear that its use didn’t take off until mid-2007.

An examination of a Google Trends map shows that searches for ‘cloud computing’ (blue) began in September 2007 and have increased steadily, eclipsing searches for related terms like Hadoop, ‘map reduce’ and EC2 over the past ten months.

Here’s a document giving the current status of Dell’s trademark application (USPTO #77139082), which was submitted on March 23, 2007. According to the Wikipedia article on cloud computing, Dell

“… must file a ‘Statement of Use’ or ‘Extension Request’ within 6 months (by January 8, 2009) in order to proceed to registration, and thereafter must enforce the trademark to prevent removal for ‘non-use’. This may be used to prevent other vendors (eg Google, HP, IBM, Intel, Yahoo) from offering certain products and services relating to data centers and mega-scale computing environments under the cloud computing moniker.”

Petrini: Streaming Applications on the Cell BE Processor, 3pm 5/13 UMBC

May 5th, 2008, by Tim Finin, posted in GENERAL, High performance computing, MC2

Next Monday (3:00pm, May 13), Fabrizio Petrini will visit and give a presentation on Streaming Applications on the Cell B.E. Processor. Here’s the abstract:

“We increasingly need to process large and complex data volumes to enable near-real-time informed human decisions or automated response actions. Current limitations in I/O and processing capabilities hinder the timely acquisition, processing, and presentation of information to decision makers for rapid response. Multi-core processors, such as the Cell B.E. processor, provide an unprecedented computational capability to curb this data deluge. In this talk I will describe the challenge in designing new data streaming algorithms for multi-core processors and present some recent results obtained with the Cell B.E. processor.”
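As a concrete (and entirely hypothetical) illustration of the single-pass structure such streaming algorithms share, here is a sketch in Python of Welford-style running statistics over a data stream. An actual Cell B.E. implementation would partition this kind of work across the SPEs, but the one-pass, constant-memory shape is the same.

```python
class RunningStats:
    """One-pass (streaming) mean and variance via Welford's method.

    Each sample is seen exactly once and never stored, which is the
    defining constraint of a data streaming algorithm.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Consume a stream without ever materializing it in memory.
stats = RunningStats()
for sample in (x * 0.5 for x in range(1_000_000)):
    stats.update(sample)
print(stats.mean, stats.variance)
```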