This paper presents an efficient algorithm for extracting moving objects from video by leveraging edge, motion, and saliency information. The methodology consists of four stages: frame generation, pre-processing, foreground generation, and cue integration via a Conditional Random Field (CRF) to improve segmentation accuracy. The proposed approach requires neither prior knowledge nor user interaction, making object detection in dynamic environments more efficient.
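To make the cue-fusion idea concrete, the sketch below shows one plausible way to combine per-pixel edge, motion, and saliency maps into a foreground probability and then enforce CRF-style spatial smoothness with a Potts pairwise term. This is not the authors' implementation: the cue weights, the synchronous ICM-style inference, and the helper names `fuse_cues` and `segment_icm` are illustrative assumptions.

```python
# Illustrative sketch (assumed details, not the paper's method): fuse normalized
# edge/motion/saliency cues into unary terms and smooth the binary labeling
# with a Potts-style pairwise cost, minimized by synchronous ICM-style updates.
import numpy as np

def fuse_cues(edge, motion, saliency, weights=(0.2, 0.5, 0.3)):
    """Combine normalized cue maps (values in [0, 1]) into a per-pixel
    foreground probability. The weights are arbitrary illustrative choices."""
    prob = weights[0] * edge + weights[1] * motion + weights[2] * saliency
    return np.clip(prob, 1e-6, 1 - 1e-6)

def segment_icm(prob, smoothness=1.0, iters=5):
    """Binary labeling under a simple CRF energy:
    unary = -log p(label), pairwise = `smoothness` per disagreeing 4-neighbor.
    Uses synchronous ICM-style updates; borders are padded with background."""
    labels = (prob > 0.5).astype(np.uint8)                 # init from unaries
    unary = np.stack([-np.log(1.0 - prob), -np.log(prob)], axis=-1)  # [bg, fg]
    for _ in range(iters):
        padded = np.pad(labels, 1)
        # Number of 4-connected neighbors currently labeled foreground.
        fg_nbrs = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:]).astype(np.float64)
        cost_bg = unary[..., 0] + smoothness * fg_nbrs
        cost_fg = unary[..., 1] + smoothness * (4.0 - fg_nbrs)
        labels = (cost_fg < cost_bg).astype(np.uint8)
    return labels

if __name__ == "__main__":
    # Random cue maps stand in for real edge/motion/saliency responses.
    rng = np.random.default_rng(0)
    h, w = 120, 160
    edge, motion, sal = (rng.random((h, w)) for _ in range(3))
    mask = segment_icm(fuse_cues(edge, motion, sal))
    print("foreground pixels:", int(mask.sum()))
```

In practice, published CRF-based segmentation methods typically use stronger inference (e.g., graph cuts or mean-field) and contrast-sensitive pairwise terms; the fixed Potts penalty and ICM-style updates here are only meant to show where the unary cue fusion and the pairwise smoothness enter the energy.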