The Aggregate's SC2002 Exhibit Plan
Since 1994, The Aggregate
has had a major research exhibit at every IEEE/ACM SC conference.
Starting November 16, we will be setting up our exhibit at
SC2002 in Baltimore, MD.
This WWW page summarizes our plans for the exhibit, which
will be open to the public November 18-21, 2002.
The Booth Layout
We are 20'x20' research exhibit R1659, surrounded (as shown on
this map) by DOD, ASCI, Berkeley National Labs, and Ames
National Labs. Thanks to our long-term participation, we got
one of the first picks; we selected this spot because it is on
the major front aisle of the primary research area.
(Apparently, we were not alone in this logic; our buddies at
AMD, exhibit 1544, are about 30 feet away.)
Our booth layout is designed, with a vanishingly small budget,
to keep our research on an equal footing with the monster booths
around us. Rather than having a combination of posters, things
on tables, etc., we plan to have just two visually strong focal
points:
The central display will be a 5-sided monolith borrowing video
projectors from our rear-projection video wall. The secondary
display will be a 2'x4' rack mounting a 3x3 Athlon laptop video
wall. We cannot compete for the "biggest" cluster or video wall
at the show, but the laptop video wall still has significant
impact. Rather than static posters, there will be:
- "Movies" of high-end CFD and astronomy codes run on the aging
  but still potent KLAT2 supercomputer (actually, KLAT2
  recently went to 5X its original main memory)
- "Slide shows" of poster-like overviews of the new
  technologies we are releasing this year
- POVRAY 3D-rendered simulations of the new
  application-specific Flat Neighborhood Network technology
  applied to clusters with 1000+ nodes
- Two stations for interactive demos of the various new
  technologies, especially the Cluster Design Rules cluster
  hardware configuration tool and the new Flat Neighborhood
  Network design tools
The Technologies
Things are still subject to change, but here's the list:
- The finally-less-beta Cluster Design Rules (CDR) tool. This
  coincides with our Cluster Design Rules: Effective Cluster
  Design with Commodity Parts tutorial (M13); don't tell
  anybody, but our PDF slides for the tutorial are here.
- Fairly extensive new developments extending the scalability
  and enhancing the performance of Flat Neighborhood Networks
  (FNNs). Design improvements allow application-specific FNNs
  to provide single-switch latency and high
  (application-specific bisection) bandwidth for thousands of
  nodes... using only commodity 24-, 32-, or 48-port switches!
  (A tiny sketch of the defining FNN property appears after
  this list.) Performance improvements include a new software
  interface that, in addition to taking better advantage of
  FNNs, should essentially be a replacement for the useful,
  but problematic, Linux channel bonding mechanism.
- A new Aggregate Function API (AFAPI) release corrects a
  theoretically trivial (and practically painful ;-) scaling
  problem with the previous versions.
- The HELPME cluster node audio diagnostic message utility: a
  very small program that allows a node without video or
  operational network connections to literally call for help
  and explain what's wrong... using voice synthesis and/or
  Morse code output via the PC speaker. (Ok, it's stupid, but
  it's useful. A minimal Morse-beep sketch also follows this
  list.)
- Various new bits of science that have been developed by our
  friends with our help and/or technologies: primarily, CFD
  and astronomy codes.
- Some preliminary work on the use of aggressive compression
  technologies to improve memory system performance. The
  improvement comes primarily not from making data structures
  smaller, but from altering the addressing pattern to be
  closer to the one for which the processor was designed.
  There's a lot of new technology here, like "Compressive
  Hash Functions" for improving random data structure
  references. Basically, it isn't just about cache anymore;
  TLBs and prefetch logic are actually responsible for far
  worse performance problems.... (A small benchmark sketch of
  this effect also appears after the list.)
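The defining FNN property is simply that every pair of nodes
shares at least one switch, so every pair of nodes is a single
switch hop apart. Here is a minimal C sketch that checks that
property for a hand-built wiring table; the 4-node, 2-NIC,
3-switch table below is made up for illustration and is not
output from our design tools:

    /* Minimal sketch: verify the Flat Neighborhood Network
       property for a hand-built wiring table.  The table is a
       made-up 4-node example, not a real FNN design. */
    #include <stdio.h>

    #define NODES 4
    #define NICS  2

    /* which_switch[node][nic] = switch that NIC is wired to */
    static const int which_switch[NODES][NICS] = {
        {0, 1},  /* node 0 */
        {0, 2},  /* node 1 */
        {1, 2},  /* node 2 */
        {0, 1}   /* node 3 */
    };

    int main(void)
    {
        int i, j, a, b, ok = 1;

        /* FNN property: every pair of nodes shares a switch */
        for (i = 0; i < NODES; ++i) {
            for (j = i + 1; j < NODES; ++j) {
                int shared = 0;
                for (a = 0; a < NICS && !shared; ++a)
                    for (b = 0; b < NICS && !shared; ++b)
                        if (which_switch[i][a] ==
                            which_switch[j][b])
                            shared = 1;
                if (!shared) {
                    printf("nodes %d and %d share no switch\n",
                           i, j);
                    ok = 0;
                }
            }
        }
        printf(ok ? "flat neighborhood: OK\n" : "not an FNN\n");
        return !ok;
    }

The hard part, of course, is not checking the property but
searching the enormous space of wiring tables for one that has
it (plus the application-specific bandwidth guarantees) while
staying within commodity switch port counts; that is what the
design tools are for.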
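In the same do-it-with-almost-nothing spirit, here is a minimal
sketch of Morse code output on the PC speaker using the Linux
console KIOCSOUND ioctl. This is a toy that only knows how to
send SOS and must run as root on a console, not the actual
HELPME code:

    /* Toy sketch of HELPME-style Morse output on the PC
       speaker via the Linux console KIOCSOUND ioctl; not the
       actual HELPME code. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kd.h>

    #define TIMER_HZ 1193180      /* PC timer base frequency */
    #define TONE_HZ  750          /* beep pitch */
    #define DIT_US   100000       /* 100ms dit */

    static void beep(int fd, int units)
    {
        ioctl(fd, KIOCSOUND, TIMER_HZ / TONE_HZ); /* tone on */
        usleep(units * DIT_US);
        ioctl(fd, KIOCSOUND, 0);                  /* tone off */
        usleep(DIT_US);           /* gap between elements */
    }

    int main(void)
    {
        /* a real tool would map arbitrary text to Morse */
        static const char *sos[] = { "...", "---", "..." };
        int c, fd = open("/dev/console", O_WRONLY);

        if (fd < 0) { perror("/dev/console"); return 1; }
        for (c = 0; c < 3; ++c) {
            const char *p;
            for (p = sos[c]; *p; ++p)
                beep(fd, (*p == '.') ? 1 : 3);
            usleep(2 * DIT_US);   /* gap between letters */
        }
        close(fd);
        return 0;
    }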
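Finally, a minimal benchmark sketch of the memory-system effect
described in the compression item above (a made-up illustration
of the general principle, not our actual Compressive Hash
Function code): it performs the same number of pseudo-random
reads twice, first scattered across a large buffer and then
confined to a small slice of it. No individual datum changes
size, only the addressing pattern; the narrow run is typically
several times faster because it stops missing in the TLB and
caches:

    /* Minimal sketch: the same number of random reads, over a
       wide footprint vs. a narrow one.  Illustrates why TLB
       and cache behavior, not data size per se, dominates. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define REFS (1L << 24)       /* 16M reads per run */

    static double run(const long *buf, unsigned long span)
    {
        unsigned long long x = 12345; /* cheap LCG state */
        long i, sum = 0;
        clock_t t0 = clock();

        for (i = 0; i < REFS; ++i) {
            x = x * 6364136223846793005ULL + 1;
            sum += buf[(x >> 16) % span]; /* random read */
        }
        if (sum == 42) putchar('.');  /* keep the loop alive */
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        unsigned long big = 1UL << 25; /* 32M longs, 256MB */
        long *buf = calloc(big, sizeof *buf);

        if (!buf) { perror("calloc"); return 1; }
        printf("wide random reads:   %.2fs\n", run(buf, big));
        printf("narrow random reads: %.2fs\n",
               run(buf, 1UL << 13));
        free(buf);
        return 0;
    }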
What We Still Need...
We need a second cluster at the show to complement the
impressive-but-not-taken-seriously 1GHz Athlon laptop cluster
with its 9-panel LCD video wall. The second cluster would also
drive the rear-projection displays, but would not really treat
them as a 5-panel video wall. Possible choices for the second
cluster:
- We have been less-and-less-patiently waiting for AMD to come
  out with the ClawHammer (by any name). Our SC1999 exhibit
  held the first public demonstrations of an Athlon cluster;
  we'd really like SC2002 to be "Hammer Time." Heck, we'd be
  ecstatically happy to have five 800MHz clock-crippled
  Hammers....
- Although the KAOS lab has been saving its resources for a
  huge Hammer Time (we desperately want to build the first big
  Hammer cluster!), we have also helped a bunch of groups
  build their own Athlon MP and Athlon XP clusters. Our backup
  plan would be to bring a group of nodes from one of them....
Financially, we expect our booth hardware transportation and
setup costs to be relatively minimal (did I mention the big
display will be made out of COTS wire shelving parts?). The
primary expense is people. We expect to have at least 7 people,
2 faculty and 5 students, there to staff the exhibit.
Donations will be gratefully accepted (and perhaps prodded for).
Actually, we have some money... we just want to save as much as
possible for big Hammer Time.
Contact Info
If you have any questions or comments, contact:
Professor Hank Dietz, James F. Hardymon Chair in Networking
College of Engineering
Electrical and Computer Engineering Department
University of Kentucky
453 Anderson Hall
(Office 307 EE Annex, Lab 672 Anderson Hall)
Lexington, KY 40506-0046
Office Phone: (859) 257 4701
Lab Phone: (859) 257 9695
Fax: (859) 257 3092
Email: hankd@engr.uky.edu
Home URL: http://aggregate.org/hankd/
This page is: http://aggregate.org/SCPLAN/
The only thing set in stone is our name.