Vision-based information systems play a key role in the automatic recognition and prediction of unusual situations and events across a wide range of application fields, including video surveillance. Such systems may combine hundreds of cameras in order to cover large areas. Automatically calibrating the position and focus of these cameras is decisive for the functionality of such systems and for keeping their setup simple. Because cameras are usually installed to match the local topography, their viewing angles may or may not overlap. This project deals with the concept of self-calibrating cameras, searching for a novel optimization approach to camera calibration and object tracking.