This document summarizes a real-time automatic speaker segmentation system. An audio analysis front-end extracts features from short audio frames and classifies each frame as voiced, unvoiced, or silence. Speaker models are incrementally updated with a quasi-GMM approach using only the voiced frames. Speaker change points are detected by applying the Bayesian Information Criterion (BIC) to the speaker models and are then validated to reduce false alarms. The system was evaluated on broadcast news and telephone conversation datasets and achieved real-time operation with tuned parameters.
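As a rough illustration of the BIC-based change test mentioned above, the sketch below compares modeling two adjacent feature segments with one full-covariance Gaussian versus one Gaussian per segment; a positive ΔBIC suggests a speaker change at the boundary. This is a simplified stand-in, not the paper's method: it uses single Gaussians rather than the incrementally updated quasi-GMMs, and the function name `delta_bic` and the penalty weight `lam` are assumptions for illustration.

```python
import numpy as np

def delta_bic(x, y, lam=1.0):
    """ΔBIC between one Gaussian over both segments (no change) and a
    separate Gaussian per segment (change). x, y: (n_frames, n_dims)
    feature matrices. Positive values favor the change hypothesis.
    lam is the usual BIC penalty weight (a tuning parameter)."""
    z = np.vstack([x, y])
    n1, n2, n = len(x), len(y), len(z)
    d = z.shape[1]

    def logdet_cov(a):
        # log-determinant of the sample covariance via slogdet
        # (numerically safer than log(det(...)))
        _, logdet = np.linalg.slogdet(np.cov(a, rowvar=False))
        return logdet

    # Model-complexity penalty: d mean parameters plus d(d+1)/2
    # covariance parameters, scaled by log of the sample size.
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return (0.5 * n * logdet_cov(z)
            - 0.5 * n1 * logdet_cov(x)
            - 0.5 * n2 * logdet_cov(y)
            - lam * penalty)
```

In a real-time setting this test would be evaluated at candidate boundaries as frames arrive, with the validation stage re-checking positive detections on more data to suppress false alarms.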