Tuesday, September 19, 2006, 11:00am - Tuesday, September 19, 2006, 12:00pm
Supercomputing applications such as weather forecasting, hurricane modeling, and visualization are characterized by the use of extremely large datasets (terabytes of data). These applications typically operate in multiple passes over the datasets, each pass producing an intermediate result used by the next. Because of the mechanical components in conventional disks, the gap between processing speeds and disk access times has continued to widen, and disk access remains the bottleneck when working with large datasets. Specialized and expensive hardware is often required to meet the high data-rate requirements of such applications. The NASA/LambdaRAM project, a collaboration among UMBC, EVL/UIC, NASA GSFC, and Northrop Grumman, seeks to leverage high-speed optical networks (Lambdas) and the aggregated semiconductor memory of idle server blades interconnected by these networks to provide much faster access to extremely large datasets, and thereby to speed up supercomputing applications. I have been working on part of this project over the summer for NASA/GSFC. I will be talking about the challenges we have faced, our solutions, and future work.
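The core idea, striping a large dataset across the RAM of several remote nodes and fetching blocks on demand through a small local cache, can be sketched as follows. This is a toy illustration under my own assumptions; the class names, block size, and round-robin placement are hypothetical and simulate the remote nodes in-process, not the project's actual design.

```python
from collections import OrderedDict

BLOCK_SIZE = 4  # bytes per block; tiny for demonstration

class RemoteNode:
    """Stands in for a server blade holding part of the dataset in RAM."""
    def __init__(self):
        self.blocks = {}

    def put(self, block_id, data):
        self.blocks[block_id] = data

    def get(self, block_id):
        return self.blocks[block_id]

class LambdaRAMClient:
    """Reads blocks striped round-robin across nodes, with a small LRU cache."""
    def __init__(self, nodes, cache_capacity=2):
        self.nodes = nodes
        self.cache = OrderedDict()
        self.cache_capacity = cache_capacity

    def node_for(self, block_id):
        return self.nodes[block_id % len(self.nodes)]

    def store(self, block_id, data):
        self.node_for(block_id).put(block_id, data)

    def read(self, block_id):
        if block_id in self.cache:           # local hit: no network round trip
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.node_for(block_id).get(block_id)  # "remote" fetch
        self.cache[block_id] = data
        if len(self.cache) > self.cache_capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

# Stripe a small "dataset" across two nodes and read it back.
nodes = [RemoteNode(), RemoteNode()]
client = LambdaRAMClient(nodes)
dataset = b"ABCDEFGHIJKL"
for i in range(0, len(dataset), BLOCK_SIZE):
    client.store(i // BLOCK_SIZE, dataset[i:i + BLOCK_SIZE])
reassembled = b"".join(client.read(b) for b in range(3))
print(reassembled)  # b'ABCDEFGHIJKL'
```

The point of the sketch is that repeated passes over the same blocks are served from memory (local cache or remote RAM) rather than from disk, which is where the speedup for multi-pass applications would come from.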