This is the home page for our 17th major research exhibit at the IEEE/ACM Supercomputing conference. The exhibit is again presented under the title Aggregate.Org / University of Kentucky, the informal research consortium led by our KAOS (Compilers, Hardware Architectures, and Operating Systems) group here at the University of Kentucky's Department of Electrical and Computer Engineering. We are in booth #1543, appropriately enough located next to NVIDIA.
Much like last year, the physically big thing in our exhibit this year is a technical demonstration consisting of a large wooden maze with four balls in it. Each of the colored balls has a different path to take (MIMD), yet it is perfectly feasible to get all the balls to their respective destinations by a series of tilts of the table (SIMD). Yes, you really can execute MIMD code on SIMD hardware with good efficiency... Click on the maze above for a ~50MB video showing the maze in action. Fundamentally, this is what our latest software does for GPUs (Graphics Processing Units). Specifically, it can take shared-memory MIMD code written in C and efficiently execute it on an NVIDIA CUDA GPU.
Why do this? GPUs thus far have not had stable, portable programming support for general-purpose use, so there is virtually no code base of supercomputing applications for them. Our technology allows codes written for popular cluster and SMP target models to be used directly. New for SC10, we have rewritten the system so that the output of standard MIPS compilers, especially GCC, can run on GPUs, thus supporting a wide range of fully-featured languages.
The code is VERY ROUGH, but alpha test release version 20101122 is now freely available as source code at http://aggregate.org/MOG/20101122/. There isn't yet any decent documentation, but there is a README and there's a 2009 paper, MIMD Interpretation On A GPU, that explains the concepts quite well.
For some years now, we've been quietly working toward improving the quality and capabilities of consumer digital cameras. The more impressive description would be "computational photography" grounded in supercomputer-based analysis and models... Anyway, we'll be using one of the simplest results throughout our exhibit: anaglyph stereo image capture using a single shot with a single lens; this little trick doesn't even need a computer!
For details, see our Instructable: Use Your Camera To Capture "3D" Anaglyphs
Some related theses from our group:
The only thing set in stone is our name.