References: CPE380 Arithmetic (ALUs)

The lecture slides as a PDF provide a good overview of everything, with LOTS of Verilog code showing how everything is done in detail. I don't expect students to be able to write Verilog implementations like these, but I do expect that you can follow the logic, and the hope is that you're getting increasingly comfortable with reading Verilog. Keep in mind that copies of many of the Verilog algorithms are linked into the notes, so you can click on a link to run the code in a Verilog simulator via your WWW browser.

Other ALU References

In the textbook, depending on version, arithmetic is either Chapter 3 or, in the 2nd Edition, Chapter 4. The book gives a reasonable description of this material, especially for integer arithmetic. You should be aware of 1's complement, 2's complement, and sign + magnitude integer formats; integer addition, negation, and subtraction algorithms and hardware (ripple, lookahead, and carry-select carry processing); and the integer multiply and divide discussed in class and the text. You should also have a basic understanding of floating point, although the slides have far more detail than the textbook.
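To make the format differences concrete, here is a minimal Python sketch (mine, not the book's hardware) decoding one 4-bit pattern under each of the three signed formats:

```python
# A minimal sketch decoding the same 4-bit pattern under the three
# signed-integer formats named above.

def sign_magnitude(bits, width=4):
    """Top bit is the sign; the remaining bits are the magnitude."""
    sign = (bits >> (width - 1)) & 1
    magnitude = bits & ((1 << (width - 1)) - 1)
    return -magnitude if sign else magnitude

def ones_complement(bits, width=4):
    """Negative values are the bitwise complement of their magnitude."""
    if (bits >> (width - 1)) & 1:
        return -((~bits) & ((1 << width) - 1))
    return bits

def twos_complement(bits, width=4):
    """The sign bit carries weight -2^(width-1)."""
    if (bits >> (width - 1)) & 1:
        return bits - (1 << width)
    return bits

# The pattern 1011 means three different things:
print(sign_magnitude(0b1011))   # -3
print(ones_complement(0b1011))  # -4
print(twos_complement(0b1011))  # -5

# 2's complement negation is "complement, then add 1" (mod 2^width):
neg5 = ((~5) + 1) & 0b1111
print(twos_complement(neg5))    # -5
```

Note how 2's complement is the only one of the three with a single zero, which is a big part of why adder hardware for it is so simple.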

In discussing integers, we also briefly discussed BCD (Binary Coded Decimal) and Gray codes (efficient conversion between 2's complement and Gray codes is given here). We also mentioned saturation arithmetic (as opposed to the modular integer arithmetic most commonly used); although we only touched on it, saturation matters when operating on "natural" data types like pixel values and audio samples: e.g., adding two bright pixels shouldn't result in a darker pixel. However, the primary addition to the book material on integer arithmetic was our discussion of speculative addition. Our in-class discussion was actually a slightly simplified version of what Intel used. Here is a little article discussing what (they thought) Intel implemented in the Pentium 4, based on this article. An Intel patent related to this is Carry-skip adder having merged carry-skip cells with sum cells, but it isn't exactly the mechanism used either.
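The Gray-code conversion is essentially the classic XOR-shift trick, and saturating addition is just a clamp before the wrap; here is a small Python sketch of both (mine, and possibly formulated differently than the linked page):

```python
# Gray-code conversions via the standard XOR-shift trick, plus a
# saturating 8-bit add (clamp instead of modular wraparound).

def binary_to_gray(b):
    """Adjacent integers differ in exactly one bit of their Gray codes."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Undo the XOR cascade by folding shifted copies back in."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def sat_add_u8(a, b):
    """Saturating unsigned 8-bit add: stick at 255 instead of wrapping."""
    return min(a + b, 255)

print(binary_to_gray(4), binary_to_gray(5))  # 6 7 -- differ in one bit
print(gray_to_binary(6))                     # 4
print(sat_add_u8(200, 100))                  # 255, not (300 & 255) == 44
```

The saturating result is exactly the "two bright pixels shouldn't make a darker pixel" behavior: modular arithmetic would wrap 300 around to 44.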

Of course, THE standard for floating point is IEEE 754 and, like most IEEE standards, the standard itself isn't free to obtain. The good news is that, using UK EZProxy, you can get the latest, 83-page, IEEE Std 754-2019 for free from this IEEE Xplore site. It's quite detailed, and it doesn't always explain why it does things as it does them, but overall it's actually a pretty accessible document.
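To see the standard's binary32 layout concretely, here is a small Python sketch (mine, not from the standard's text) that pulls apart the three fields of a single-precision value:

```python
import struct

# IEEE 754 binary32 layout: 1 sign bit, 8 biased-exponent bits (bias 127),
# and 23 fraction bits (with an implicit leading 1 for normal numbers).

def decode_binary32(x):
    bits, = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    biased_exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return sign, biased_exponent, fraction

print(decode_binary32(1.0))    # (0, 127, 0): +1.0 * 2^(127-127)
print(decode_binary32(-2.5))   # (1, 128, 2097152): -1.25 * 2^(128-127)
```

The fraction 2097152 is 0.25 × 2^23, i.e., the stored bits encode the 1.25 significand with its leading 1 left implicit.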

It is also worth noting that, as we've mentioned many times in the lectures, floating point arithmetic differs from real number arithmetic in many important ways, which the book doesn't really cover very well. I don't expect you to understand the subtle nuances of floating point, but you should be aware that it is a very strange beast. Toward that goal, you might browse through What Every Computer Scientist Should Know About Floating-Point Arithmetic, which details many of the stranger quirks of floating point arithmetic. Don't read this reference too carefully; it is overkill for the purposes of this course. A lighter, but similar, document is The Perils of Floating Point, which is quite readable despite talking about Fortran. Most fundamentally, you should be aware of the accuracy loss issues involving addition and subtraction and the IEEE base 2 floating point format.
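A couple of those quirks are easy to demonstrate for yourself; this little snippet (Python floats are IEEE 754 doubles) shows the addition/subtraction accuracy loss mentioned above:

```python
# Two standard floating-point surprises, runnable anywhere IEEE 754
# binary64 ("double") arithmetic is used.

# Absorption: adding a small value to a huge one can lose it entirely,
# so floating-point addition is not associative.
big, small = 1.0e16, 1.0
print((big + small) - big)   # 0.0 -- small was absorbed
print(small + (big - big))   # 1.0

# Representation error: 0.1 and 0.2 are not exact in base 2.
print(0.1 + 0.2 == 0.3)      # False
print(0.1 + 0.2)             # 0.30000000000000004
```

Real-number arithmetic would give the same answer for both groupings; floating point does not, which is exactly why compilers can't freely reorder floating-point sums.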

Just as a cute little reminder of how log-based arithmetic works, here is the PDF for making paper slide rules. These are relevant to conventional floating point and unums, which both use exponent fields, and even more directly to LNS (log number systems). My overview of LNS, originally written for EE480, is here if you'd like to know more about them, but it's not required for CPE380. Unums and Posits were proposed by John Gustafson, and really haven't been commercially used yet; a good paper overviewing them is Beating Floating Point at its Own Game: Posit Arithmetic, and there's also the bfp - Beyond Floating Point library implementing Posit arithmetic in C++ code. Again, I don't expect CPE380 students to really know anything about unums and Posits other than the fact that alternatives to floating point are getting increasing consideration as people realize just how differently floating point behaves from conventional mathematical arithmetic.
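The slide-rule connection can be shown in a few lines; this toy Python sketch (mine, not how LNS hardware is actually built) stores logs so that multiplication becomes addition:

```python
import math

# The slide-rule principle in code: store log2(x) instead of x, and
# multiply/divide become add/subtract of the stored logs. (A real LNS
# also needs table-based tricks to do addition in the log domain.)

def lns_encode(x):
    return math.log2(x)      # store the log, not the value

def lns_decode(lx):
    return 2.0 ** lx

def lns_mul(lx, ly):
    return lx + ly           # multiply = add the logs

product = lns_decode(lns_mul(lns_encode(6.0), lns_encode(7.0)))
print(product)               # ~42.0, up to rounding
```

That add-the-logs step is literally what sliding one slide-rule scale against another does mechanically.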

About transcendental functions: CORDIC (COordinate Rotation DIgital Computer) algorithms were popular in calculators for evaluating transcendental functions because, compared to truncated Taylor series, they don't require lots of hardware. They fell out of favor as fast floating-point hardware became more common. The catch is that lots of FPGAs can't easily provide lots of fast floating-point hardware, so CORDIC algorithms have become popular for use in FPGAs. Here are a couple of easy-to-understand explanations of CORDIC: CORDIC for Dummies and Implementing Cordic Algorithms.
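To show the shift-and-add flavor of the algorithm, here is a floating-point Python sketch of rotation-mode CORDIC (mine; real hardware uses fixed-point, where each multiply by 2^-i is just a right shift):

```python
import math

# Rotation-mode CORDIC: rotate (1, 0) toward the target angle using only
# a table of arctan(2^-i) values and shift-and-add style updates.
# Converges for |theta| <= ~1.743 rad (the sum of the table angles).

def cordic_sin_cos(theta, iterations=32):
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Each micro-rotation grows the vector by a fixed, precomputable
    # gain, so it can be undone with a single final scaling.
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y / gain, x / gain           # (sin(theta), cos(theta))

s, c = cordic_sin_cos(math.pi / 6)
print(round(s, 6), round(c, 6))         # 0.5 0.866025
```

Notice that the loop body has no multiplies by data-dependent values, only additions, subtractions, and scaling by powers of two, which is exactly why it maps so cheaply onto calculator and FPGA hardware.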


CPE380 Computer Organization and Design.