|Time & Place:||TuTh 9:30-10:45, 267 FPAT|
|Instructor:||Professor Hank Dietz, http://aggregate.org/hankd/, research at http://aggregate.org/|
|Audience:||Undergrad & Grad students, especially Computer Engineering/CS/EE majors|
|ABET Summary:||Preliminary ABET-style Course Summary|
|Syllabus:||Preliminary Syllabus for this Semester|
|Projects:||Register with the server|
|Assignment 0: Until The End Of Time, due September 27|
|Assignment 1: As Fast As Fast Can Be, due October 20 (there was a problem with the server; extended until 11:59PM Oct. 20)
This is essentially a project-intensive embedded computing course about digital cameras.
Over less than two decades, electronic cameras have evolved from dumb, low-quality analog devices into powerful digital computing systems capable of delivering images whose quality challenges what was previously obtainable only by careful use of medium- or large-format film. In fact, a typical modern camera contains one or more 32-bit cores running relatively powerful operating systems, such as Linux. Digital cameras are now capable of much more than simply capturing high-quality images. The term "computational photography" has become associated with some of these new abilities, especially those related to image processing. However, treating cameras as computing systems isn't just about what happens after an image is captured.
This course will begin with an introduction to the basic principles of photography and the operation of digital cameras. No photographic experience or expertise is required as a prerequisite, although many aspects of the discussion will be motivated by explaining photographic techniques. About one third of the course will cover the basic mechanisms and processes associated with digital cameras.
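One of the most basic such mechanisms is demosaicing: interpolating full RGB values from the sensor's color filter array, which samples only one color per photosite. As a rough illustration of the idea (the function names and the uniform 3x3 averaging are our simplifications, not material from the course), here is a bilinear demosaic sketch in C for the common RGGB Bayer pattern:

```c
#include <stdio.h>

/* Which channel (0=R, 1=G, 2=B) an RGGB Bayer pattern samples at (y,x). */
static int bayer_channel(int y, int x) {
    if ((y & 1) == 0) return ((x & 1) == 0) ? 0 : 1;   /* even rows: R G R G */
    return ((x & 1) == 0) ? 1 : 2;                     /* odd rows:  G B G B */
}

/* Bilinear demosaic: each output channel at each pixel is the average of
 * whatever samples of that channel fall inside the 3x3 neighborhood.
 * mosaic: h*w raw samples in [0,1]; out: h*w*3 interleaved RGB. */
void demosaic_bilinear(const double *mosaic, int h, int w, double *out) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double acc[3] = {0, 0, 0};
            int cnt[3] = {0, 0, 0};
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
                    int c = bayer_channel(yy, xx);
                    acc[c] += mosaic[yy * w + xx];
                    cnt[c]++;
                }
            }
            for (int c = 0; c < 3; c++)
                out[(y * w + x) * 3 + c] = cnt[c] ? acc[c] / cnt[c] : 0.0;
        }
    }
}
```

Averaging over a uniform 3x3 window happens to reproduce classic bilinear interpolation for every site in an RGGB mosaic; real cameras use considerably more sophisticated, edge-aware interpolators to avoid color fringing.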
The second third of the course will focus on control of image capture. Various techniques and mechanisms will be discussed, including in-camera environments such as Android and CHDK (the Canon Hack Development Kit, which allows users to run arbitrary C code inside PowerShot cameras). Tethered control also will be discussed in detail. Each student will implement at least one program controlling camera capture and will test it using actual camera hardware.
The last third of the course will center on novel types of digital manipulations of captured image data. Emphasis will be placed on techniques that are intimately tied to specific aspects of the capture process -- this is not a course in classical image processing and is intended to have minimal overlap with the Matlab-heavy Computational Photography course that was offered by the UK CS department (Fall 2011 CS585).
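A simple example of processing that is intimately tied to capture is merging an exposure bracket: several frames of a static scene shot at different shutter speeds, combined into one estimate of scene radiance. The sketch below (a simplified illustration under stated assumptions, not an algorithm from the course) assumes a linear sensor, so each recorded pixel is clip(radiance * time, 0, 1), and each frame votes for radiance = pixel / time, weighted by a hat function that trusts mid-tones and ignores clipped values:

```c
#include <math.h>

/* Merge bracketed exposures of a static scene into a radiance estimate.
 * frames: nframes pointers to npixels linear samples in [0,1];
 * times:  the exposure time of each frame;
 * radiance: npixels outputs. Assumes pixel = clip(radiance*time, 0, 1). */
void merge_bracket(const double **frames, const double *times,
                   int nframes, int npixels, double *radiance) {
    for (int i = 0; i < npixels; i++) {
        double acc = 0.0, wsum = 0.0;
        for (int f = 0; f < nframes; f++) {
            double v = frames[f][i];
            /* Hat weight: 1 at mid-gray, 0 at the clip points 0 and 1. */
            double w = 1.0 - fabs(2.0 * v - 1.0);
            acc += w * v / times[f];
            wsum += w;
        }
        radiance[i] = (wsum > 1e-9) ? acc / wsum : 0.0;
    }
}
```

Because a pixel that saturates in a long exposure gets zero weight, its radiance is recovered entirely from shorter exposures where it is not clipped; this is why the technique only makes sense when the processing knows the capture parameters.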
A wide variety of programming environments are used in digital cameras and in subsequent post-capture processing. Students are assumed to have sufficient background to be comfortable writing modest C/C++ programs in a minimal Linux development environment. Any other environments required by the course will be covered in the course materials and lectures. Digital cameras will be provided for the projects.
The two course numbers are meant to distinguish between undergrad and graduate students taking the course. The primary difference is that graduate students will be required to implement a somewhat larger, more open-ended, project which they also will present to the class. All students will learn how digital cameras work both as photographic devices and as computers, and will gain some experience using them as computing devices to obtain higher quality images and to perform a range of tasks conventional cameras cannot.
About the instructor: Professor Dietz is known primarily for his work in supercomputing, but has a long history in photography. He was photo editor for his high school newspaper and yearbook, and as an undergrad for Columbia University's Broadway Magazine. He has had news photos published in the New York Times and shot a full-page color ad for Hammacher-Schlemmer which appeared in the Saturday Evening Post. Photography faded into a hobby as he became known for his computer engineering research... until 1996, when he built a 30MP video wall for a cluster supercomputer. Since then, he has been doing all sorts of work treating cameras as computing systems, trying to improve image quality and give cameras new abilities. See the Digital Imaging Technology page at Aggregate.Org for an overview of his work in this area, which ranges from anaglyph and other forms of "3D" capture to development of a new type of photographic sensor.