homemade 3d scanner - the software
 
(images, from top to bottom: base image, current image, difference image, enhanced difference image)
the software was my foremost concern since i had never worked with any cameras before.
at first i tried to use the quickcam api from logitech. unfortunately, it does not seem to work under windows xp.
so i looked for an alternative, which i thought i had found in the windows image acquisition (wia) api. it seemed up to the task, but i found no way to pick single images out of a stream when needed.
that is when i decided to take a look at the new directx 9 sdk, in particular its directshow component. it turned out to be flexible, fast and not too difficult to code with. it is perfect for my purpose since i do not have to go into too much detail while still being able to control the camera sufficiently.

my program works as follows:
a filter graph is set up using directshow. images are then taken from the stream with the sample grabber filter. i will not go into the details of how to do that here since they are explained elsewhere more concisely than i ever could.
one way to find the laser line would be to search the image for a red line. i found this much too tedious for my simple scanner. my method of choice is a difference image: the color values of a base image without the laser line are subtracted from the current image with the laser line. the resulting image shows only the difference, which is mainly the laser line.
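the subtraction step can be sketched as follows. my program does this in c++ on directshow frames; this is just a little python illustration of the idea, with the image represented as rows of values from a single color channel (names and layout are my assumptions here, not the real code):

```python
# sketch of the difference-image method: subtract the base image (laser off)
# from the current image (laser on), clamping negative values to zero so
# only the added laser light remains.

def difference_image(base, current):
    """per-pixel difference of two grayscale images given as lists of rows."""
    return [[max(c - b, 0) for b, c in zip(brow, crow)]
            for brow, crow in zip(base, current)]

# tiny example: the laser brightens one column
base    = [[10, 10, 10],
           [10, 10, 10]]
current = [[10, 90, 10],
           [10, 85, 12]]
diff = difference_image(base, current)
# the laser column stands out strongly while sensor noise (the lone 12) stays small
```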
since there is always some noise in the pictures, thresholding has to be done to eliminate it. before that i smooth the difference image, since this reduces the noise prior to thresholding. once this is done the program simply searches for the maximum color value on every horizontal image line. this is a very simple procedure that can lose information on intricate parts, but it has worked fine for the objects i have scanned so far. one reason i do it this way is that it spares me a complicated meshing procedure for the point cloud.
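the smoothing, thresholding and per-row maximum search could look roughly like this. kernel size and threshold value are arbitrary assumptions for the sketch; the real values depend on the camera and lighting:

```python
# sketch: smooth each row, then for every horizontal line take the column of
# the brightest pixel, rejecting rows where nothing rises above the noise.

def smooth_row(row):
    """3-tap box filter along a row (zero-padded) to damp single-pixel noise."""
    padded = [0] + row + [0]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) // 3
            for i in range(len(row))]

def laser_column_per_row(diff, threshold=20):
    """return, for each horizontal line of the difference image, the column of
    the maximum value, or None if the row stays below the threshold."""
    cols = []
    for row in diff:
        srow = smooth_row(row)
        peak = max(srow)
        cols.append(srow.index(peak) if peak >= threshold else None)
    return cols
```

one point in space per image row is exactly the simplification mentioned above: intricate parts where the laser hits a row more than once lose detail, but the result is an ordered grid of points that is trivial to mesh later.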
after taking a picture the stepper motor rotates the laser by a defined angle. then the next image is taken and so on.
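the overall scan loop is then simple. `grab_frame` and `rotate_laser` below are placeholders for the sample grabber and the stepper-motor control, not the real api:

```python
# sketch of the scan loop: grab a frame, record it with the current laser
# angle, rotate the laser by one step, repeat.

def scan(grab_frame, rotate_laser, steps, step_angle):
    """run one full scan and return a list of (angle, frame) pairs.

    grab_frame() returns the current camera image; rotate_laser(a) turns
    the laser by a degrees. both are supplied by the caller.
    """
    frames = []
    angle = 0.0
    for _ in range(steps):
        frames.append((angle, grab_frame()))
        rotate_laser(step_angle)
        angle += step_angle
    return frames
```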
after all points have been scanned the mesh is generated. it is based on the assumption that the point found at the intersection of a horizontal image line with the laser line is nearest to the point found on the same horizontal line after the laser has been rotated by one step.
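because of that assumption the scanned points form a regular grid (one profile per laser step, one point per image row), which can be triangulated directly. a sketch, where `profiles[i][y]` is the 3d point for laser step i and image row y (None where no laser line was found on that row):

```python
# sketch of the meshing step: join each point to the point on the same image
# row of the next laser position, splitting each grid quad into two triangles.

def mesh_from_profiles(profiles):
    """connect consecutive profiles row by row into a triangle list.

    each quad (i, y), (i, y+1), (i+1, y), (i+1, y+1) becomes two triangles;
    quads with a missing corner are skipped.
    """
    triangles = []
    for i in range(len(profiles) - 1):
        a, b = profiles[i], profiles[i + 1]
        for y in range(len(a) - 1):
            corners = (a[y], a[y + 1], b[y], b[y + 1])
            if any(p is None for p in corners):
                continue
            p00, p01, p10, p11 = corners
            triangles.append((p00, p01, p10))
            triangles.append((p01, p11, p10))
    return triangles
```

this is why the per-row maximum search pays off: an unordered point cloud would need a far more complicated surface-reconstruction step.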

on the left the different stages of image processing are shown. the first image is the base image with the laser turned off. the second image was taken with the laser turned on. the third image is the difference of the blue values of the first two. the last is the enhanced image that is finally used to triangulate the points in space. each point or pixel represents a direction vector that is used for the triangulation as described on the working principle page.
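how a pixel becomes a direction vector can be sketched with a simple pinhole camera model. the focal length and image size below are made-up parameters, and the actual triangulation against the laser plane is described on the working principle page, not here:

```python
# sketch: direction from the camera center through a pixel, assuming a
# pinhole model with the optical axis through the image center.
import math

def pixel_to_direction(px, py, width, height, focal_px):
    """unit vector from the camera through pixel (px, py);
    x points right, y down, z forward, focal length in pixels."""
    x = px - width / 2.0
    y = py - height / 2.0
    z = float(focal_px)
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

intersecting this ray with the known plane of the laser sheet then yields the 3d point for that pixel.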