This document presents a new iterative method for view- and illumination-invariant image matching. The method alternately estimates the relative view and illumination between two images, transforms one image to match the other's viewpoint, and normalizes their illumination so the pair can be matched accurately. Concretely, features are extracted to estimate the transformation matrix between the images; the illumination relationship is then estimated and one image's histogram is transformed to match the other's. The method outperforms traditional approaches and remains stable under large changes in view and illumination, though it can fail when the initial view and illumination estimates are inaccurate. It also provides a new way to evaluate traditional feature detectors by the ranges of viewing angle and illumination change over which they remain valid.
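The illumination-normalization step described above (transforming one image's histogram to match the other's) can be sketched with standard histogram matching via CDF mapping. This is a minimal NumPy illustration of the general technique, not the paper's exact estimator; the function name and the grayscale-image assumption are ours.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so its histogram matches reference's.

    Illustrative sketch of histogram-based illumination normalization
    for grayscale images; the paper's actual estimator may differ.
    """
    # Unique intensities, their positions, and their counts.
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts).astype(np.float64) / source.size
    ref_cdf = np.cumsum(ref_counts).astype(np.float64) / reference.size

    # For each source intensity, find the reference intensity whose
    # cumulative probability is closest (linear interpolation).
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)
```

After this normalization the two images occupy the same intensity range, so a subsequent similarity measure is far less sensitive to the illumination difference between them.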