This is the home page for our 18th major research exhibit at the IEEE/ACM Supercomputing conference. The exhibit is again under the title Aggregate.Org / University of Kentucky, the informal research consortium led by our KAOS (Compilers, Hardware Architectures, and Operating Systems) group here at the University of Kentucky's Department of Electrical and Computer Engineering. We are booth #202, to the left in the back portion of the 4th-floor exhibit hall (i.e., find HP, then walk past Argonne, UNM, and SLAC to our exhibit).
Although our exhibit this year will look a bit different (we're squeezing into 15'x20' because of the smaller exhibit hall), we'll still have that wooden maze with the four colored balls in it. Each of the colored balls has a different path to take (MIMD), yet it is perfectly feasible to get all the balls to their respective destinations by a series of tilts of the table (SIMD). Yes, you really can execute MIMD code on SIMD hardware with good efficiency... Click on the maze above for a ~50MB video showing the maze in action. Fundamentally, this is what our latest software does for GPUs (Graphics Processing Units). Specifically, it can take shared-memory MIMD code written in C and efficiently execute it on an NVIDIA CUDA GPU or an OpenCL target (including AMD GPUs).
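For the curious, here is a minimal sketch of the underlying trick, in the spirit of our SC09 paper but with entirely hypothetical opcodes and names (this is not MOG's actual code): every GPU thread keeps its own program counter into a shared program, and each step runs one big dispatch in which the SIMD hardware simply masks off the threads whose current opcode doesn't match the case being executed.

    // Hypothetical MIMD-on-SIMD interpreter; the opcode set and names
    // are invented for illustration, not taken from MOG.
    #define OP_ADD  0u
    #define OP_LOAD 1u
    #define OP_JUMP 2u
    #define OP_HALT 3u

    __global__ void mimd_interp(const unsigned *text,  // shared program
                                int *regs,             // per-thread registers
                                unsigned *data)        // shared data memory
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        int *r = regs + tid * 32;  // this thread's private register file
        unsigned pc = 0;           // each thread has its OWN program counter

        for (;;) {
            unsigned inst = text[pc];        // MIMD fetch: PCs can differ
            unsigned op = inst >> 26;        // major opcode field
            unsigned a = (inst >> 21) & 31;  // operand register numbers
            unsigned b = (inst >> 16) & 31;
            unsigned c = (inst >> 11) & 31;

            // All threads walk one switch; SIMD divergence hardware masks
            // off non-matching threads, so each distinct opcode in flight
            // costs one pass over its handler.
            switch (op) {
            case OP_ADD:  r[c] = r[a] + r[b]; pc++; break;
            case OP_LOAD: r[b] = data[r[a]];  pc++; break;
            case OP_JUMP: pc = inst & 0x03ffffffu;  break;
            case OP_HALT: return;
            default:      return;  // treat unknown opcodes as halt
            }
        }
    }

The maze is exactly this picture: one sequence of table tilts (SIMD instructions issued to everybody) still gets every ball (thread) along its own path (its own program counter).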
Why do this? GPUs thus far have not had stable, portable programming support for general-purpose use, so there is virtually no code base for supercomputing applications. Our technology allows codes written for popular cluster and SMP programming models to be used directly. New for SC10, we rewrote the system so that the output of standard MIPS compilers, especially GCC, can run on GPUs, thus supporting a wide range of fully-featured languages. Last year's alpha-test release, version 20101122, is freely available as source code at http://aggregate.org/MOG/20101122/.
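The payoff of targeting the MIPS instruction set is that the interpreter only has to decode one fixed, well-documented encoding; whatever GCC emits for a MIPS target becomes runnable. Here is a sketch of the decode step; the field layout and the ADDU/ADDIU encodings below are standard MIPS32, but the dispatch structure is again hypothetical, not MOG's actual code:

    // Standard MIPS32 instruction fields; the dispatch is only a sketch.
    __device__ void mips_step(const unsigned *text, int *r, unsigned *pc)
    {
        unsigned inst  = text[*pc / 4];      // MIPS instructions are 4 bytes
        unsigned op    = inst >> 26;         // bits 31..26: major opcode
        unsigned rs    = (inst >> 21) & 31;  // bits 25..21: source register
        unsigned rt    = (inst >> 16) & 31;  // bits 20..16
        unsigned rd    = (inst >> 11) & 31;  // bits 15..11 (R-type only)
        unsigned funct = inst & 63;          // bits 5..0   (R-type only)
        int imm = (short)(inst & 0xffff);    // sign-extended immediate

        if (op == 0 && funct == 0x21)        // ADDU rd, rs, rt
            r[rd] = r[rs] + r[rt];
        else if (op == 0x09)                 // ADDIU rt, rs, imm
            r[rt] = r[rs] + imm;
        // ... the rest of the MIPS32 opcode map dispatches similarly ...
        r[0] = 0;                            // MIPS $zero stays hardwired to 0
        *pc += 4;                            // fall-through; branches adjust pc
    }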
Our 2009 paper, MIMD Interpretation On A GPU, explains the basic concepts quite well. Come visit us at SC11 to hear what we've been doing with MOG this year and what we plan to do next....
Some related theses from our group:
Well, not so live now that the show is over.... We actually had three cameras shooting time-lapse sequences. The two using network cameras have not yet been converted into movies.
We also shot a movie from a Canon PowerShot running a CHDK motion-detection time-lapse script. It was intended to be a live feed, but the camera was unable to consistently send images via an Eye-Fi card thanks to SC's flaky wireless. Here is that movie in 1080p 24FPS format; however, it is also more flexibly available from our YouTube Channel.