Assignment 0: Simple As 0, 1, 2, 3, ...

The goal of this first MPI project is to make sure you can use a C + MPI environment to develop and run a simple program. It's also to get you thinking about parallel vs. sequential execution... because not everything is better in parallel.

To Begin

For this course, we'll be using OpenMPI, which is freely available from http://www.open-mpi.org/. That site also hosts plenty of documentation on using it.

Installing OpenMPI is pretty easy. If you're using Linux, there's a standard package for it in nearly every distro. The download link for v1.8 has source code, RPMs, etc. There is even a pre-compiled Cygwin version. Of course, you'll also need a GCC-compatible C compiler, which is also easy to install if it isn't already there (even under Windows).
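
Once it's installed, a quick sanity check (assuming the standard OpenMPI compiler wrapper and launcher are on your PATH) is:

  mpicc --version
  mpirun --version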

The Point Of The Project

It's a trivial first project. Relax. The point is mostly that you can actually write and run a trivial MPI program, and secondarily that you learn a bit about the problems that flow from the generality of MIMD.

Functional Requirements

Your program simply needs to do the equivalent of this C code:

#include <stdio.h>
#include <stdlib.h>

int main()
{
  int i;

  /* One greeting per logical processing element (PE). */
  for (i=0; i<16; ++i) {
    printf("Hi from PE%d!\n", i);
  }
  exit(0);
}

The catch is that each of 16 logical processing elements (PEs) should print its own message. In fact, they can even use printf(). Easy, right?

Well, not quite that easy. You see, unlike running on deterministic SIMD hardware, having each of 16 MIMD PEs print at the same time means you would very likely see the output ordered differently every time you run the program. Thus, you'll need to do some communication to order the printing. PE 0 can print immediately. Once it is done printing, it should send a message to PE 1, notifying PE 1 that it can print. PE 1 prints and then sends to PE 2, which sends to PE 3, etc., sequencing all the printing by the PEs. Of course, when PE 15 is done, the entire program should terminate. Be careful not to have the program terminate as soon as the first PE has completed printing.
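
For concreteness, here is a minimal sketch of that token-passing scheme. It is one possible structure, not the required solution; it assumes the program is launched as 16 PEs (e.g., mpirun -np 16 ./a0):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  int rank, nproc, token = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);

  /* Everyone except PE 0 waits for the go-ahead from the previous PE. */
  if (rank > 0) {
    MPI_Recv(&token, 1, MPI_INT, rank-1, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
  }

  printf("Hi from PE%d!\n", rank);
  fflush(stdout);

  /* Pass the go-ahead along; the last PE has nobody left to notify. */
  if (rank < nproc-1) {
    MPI_Send(&token, 1, MPI_INT, rank+1, 0, MPI_COMM_WORLD);
  }

  /* MPI_Finalize() is collective, so no PE tears down the MPI
     runtime until all PEs have reached it. */
  MPI_Finalize();
  return 0;
}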

What You Should Find

With high probability, the above sequence of message sends will correctly order the printing. However, not necessarily. You see, printf() output can take a different path through the network from MPI messages -- even if that path differs only in the software protocol used. In other words, the actual transmission of a print message could be delayed until long after an MPI message apparently sent after it has been transmitted. How often does printing occur out of order with vs. without the MPI sends to impose sequencing?

For what it's worth, the method usually used to solve this kind of problem is to designate one PE to do all the sequential printing. You'll often hear this called a master/slave or manager/worker arrangement: one PE handles all the sequencing while the other PEs simply work on their parts.
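
As a sketch of that arrangement (again an assumption about structure, not part of this assignment's requirements), PE 0 could collect a message from every other PE in rank order and do all of the printing itself:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  int rank, nproc, who, i;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);

  if (rank == 0) {
    /* The manager prints for itself, then for each worker in turn. */
    printf("Hi from PE0!\n");
    for (i = 1; i < nproc; ++i) {
      MPI_Recv(&who, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      printf("Hi from PE%d!\n", who);
    }
  } else {
    /* Workers just report in; only the manager touches stdout. */
    MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
  }

  MPI_Finalize();
  return 0;
}

Because only one PE ever calls printf(), there is no separate output path left to race with the MPI messages.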

Due Dates, Submission Procedure, & Such

You will be submitting source code, a makefile, and a simple HTML-formatted "implementor's notes" document called a0.html, which should also summarize your findings (i.e., how often output was jumbled with vs. without using MPI messages to impose sequencing). Please arrange your makefile so that the executable file, which should be called a0, is compiled when one simply types make. Everything needed should be packed into a single tar file for submission by a command like tar -zcvf a0.tgz files -- and that tar is all you submit. Do not include executable files, or other things that can be generated by make, in the tar you submit.
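
For example, a makefile along these lines satisfies the "type make, get a0" requirement. It is only a sketch; the source file name a0.c and the use of the mpicc wrapper are assumptions, so adjust them to match your setup:

# Build a0 from a0.c using the OpenMPI compiler wrapper.
a0: a0.c
	mpicc -o a0 a0.c

# Remove generated files so they don't end up in the tar.
clean:
	rm -f a0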

For full consideration, your project should be submitted no later than February 10, 2015. After you have made an account by registering with the course server, submit your project tar file through the submission form on the course site.

EE599/699 Cluster/Multi-Core Computing