Classification assigns objects to classes based on their attributes. Naive Bayes picks the class with the highest probability given an instance's attribute values, while k-nearest neighbor (k-NN) classifies an unknown instance by the most common class among the k training instances whose attributes are closest to it. Normalization rescales attributes to a common range so that attributes measured in larger units do not dominate the distance calculation used to select neighbors. The two algorithms also differ in when they generalize: naive Bayes is an eager learner, building a model from the training data in advance, whereas k-NN is a lazy learner, deferring all generalization until a new instance must be classified.
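As a concrete illustration of the eager approach, here is a minimal naive Bayes sketch in Python for categorical attributes. The toy weather data and the function names are hypothetical, and Laplace smoothing is an added assumption so that an attribute value unseen for some class does not zero out that class entirely.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Eager learning: generalize the training data into a model up front,
    here just class counts, per-class attribute-value counts, and the
    distinct values seen for each attribute."""
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)   # (label, attr index) -> value counts
    attr_values = defaultdict(set)        # attr index -> distinct values seen
    for row, label in zip(rows, labels):
        for i, v in enumerate(row):
            value_counts[(label, i)][v] += 1
            attr_values[i].add(v)
    return class_counts, value_counts, attr_values

def nb_classify(model, row):
    """Return the class maximizing P(class) * prod_i P(value_i | class),
    computed in log space, with Laplace smoothing for unseen values."""
    class_counts, value_counts, attr_values = model
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, n in class_counts.items():
        score = math.log(n / total)
        for i, v in enumerate(row):
            score += math.log(
                (value_counts[(label, i)][v] + 1) / (n + len(attr_values[i]))
            )
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical toy data: weather attributes -> play decision.
rows = [["sunny", "hot"], ["sunny", "mild"], ["rainy", "mild"], ["rainy", "cool"]]
labels = ["no", "no", "yes", "yes"]
model = train_naive_bayes(rows, labels)
print(nb_classify(model, ["rainy", "mild"]))  # -> "yes"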
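By contrast, a lazy k-NN sketch: min-max normalization rescales every attribute to [0, 1] before distances are computed, so an attribute with a large numeric range (weight in kg) cannot drown out one with a small range. No model is built in advance; all the work happens at classification time. The height/weight data and names below are hypothetical.

```python
import math
from collections import Counter

def min_max_normalize(dataset):
    """Rescale each attribute to [0, 1] so attributes measured on larger
    numeric ranges do not dominate the distance calculation."""
    mins = [min(row[i] for row in dataset) for i in range(len(dataset[0]))]
    maxs = [max(row[i] for row in dataset) for i in range(len(dataset[0]))]
    scaled = [
        [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, lo, hi in zip(row, mins, maxs)]
        for row in dataset
    ]
    return scaled, mins, maxs

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train_points, train_labels, query, k=3):
    """Lazy learning: generalization happens only now. Return the most
    common class among the k training instances nearest to the query."""
    order = sorted(range(len(train_points)),
                   key=lambda i: euclidean(train_points[i], query))
    nearest = [train_labels[i] for i in order[:k]]
    return Counter(nearest).most_common(1)[0][0]

# Hypothetical toy data: [height_cm, weight_kg] with class labels.
points = [[180, 80], [175, 77], [160, 55], [158, 52], [170, 70]]
labels = ["adult", "adult", "teen", "teen", "adult"]

scaled, mins, maxs = min_max_normalize(points)
query = [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, lo, hi in zip([165, 60], mins, maxs)]
print(knn_classify(scaled, labels, query, k=3))  # -> "teen"
```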