This CGI simply allows you to convert between 16-bit floats and their integer representations. In CPE380, we have adopted a mutant float format that is essentially just the top 16 bits of what the IEEE 754 standard calls binary32. Nothing wrong with that; for example, ATI GPUs originally dropped the bottom 8 bits of binary32 because that simplified computations while still allowing 32-bit floats to copy in/out with the bottom 8 bits simply set to 0. The same trick works for us with the bottom 16 bits. For that matter, shortly after we adopted this format in EE480, Google adopted it as bfloat16, the 16-bit "brain" float, because it has sufficient accuracy and range for most neural network training. The main advantage of this format is that it is easily converted to/from the standard 32-bit float format, yet it still yields single-cycle implementations of the basic arithmetic operations.
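Because this format is literally the top half of binary32, conversion in either direction is just a 16-bit shift of the bit pattern. Here is a minimal C sketch of the two conversions; the function names and the use of memcpy for type punning are illustrative choices, not taken from the actual CGI source:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* 32-bit float -> 16-bit float: keep only the top 16 bits of the
   IEEE 754 binary32 encoding (truncation; no rounding). */
static uint16_t f32_to_f16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits)); /* type-pun without aliasing issues */
    return (uint16_t)(bits >> 16);
}

/* 16-bit float -> 32-bit float: the bottom 16 bits are simply 0. */
static float f16_to_f32(uint16_t h)
{
    uint32_t bits = ((uint32_t)h) << 16;
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}

int main(void)
{
    float x = 3.14159f;
    uint16_t h = f32_to_f16(x);
    printf("%g -> 0x%04x -> %g\n", x, h, f16_to_f32(h));
    return 0;
}
```

Note that f32_to_f16 above simply truncates, matching the "drop the bottom bits" trick described earlier; a rounding step could be added before the shift, but plain truncation keeps the hardware trivial.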
The C program that generated this page was written by Hank Dietz using the CGIC library to implement the CGI interface.