[[["Leicht verständlich","easyToUnderstand","thumb-up"],["Mein Problem wurde gelöst","solvedMyProblem","thumb-up"],["Sonstiges","otherUp","thumb-up"]],[["Benötigte Informationen nicht gefunden","missingTheInformationINeed","thumb-down"],["Zu umständlich/zu viele Schritte","tooComplicatedTooManySteps","thumb-down"],["Nicht mehr aktuell","outOfDate","thumb-down"],["Problem mit der Übersetzung","translationIssue","thumb-down"],["Problem mit Beispielen/Code","samplesCodeIssue","thumb-down"],["Sonstiges","otherDown","thumb-down"]],["Zuletzt aktualisiert: 2025-02-25 (UTC)."],[[["\u003cp\u003eThis webpage presents a series of multiple-choice exercises focused on evaluating your understanding of decision tree training concepts.\u003c/p\u003e\n"],["\u003cp\u003eThe exercises cover topics such as the impact of feature manipulation on decision tree structure, the effects of altering threshold selection strategies, and the implications of multiple local maxima in information gain curves.\u003c/p\u003e\n"],["\u003cp\u003eOne question requires calculating information gain using entropy and provided data, demonstrating the practical application of decision tree principles.\u003c/p\u003e\n"]]],[],null,["\u003cbr /\u003e\n\nThis page challenges you to answer a series of multiple choice exercises\nabout the material discussed in the \"Training Decision Trees\" unit.\n\nQuestion 1 \nWhat are the effects of replacing the numerical features with their negative values (for example, changing the value +8 to -8) with the exact numerical splitter? \nThe same conditions will be learned; only the positive/negative children will be switched. \nFantastic. \nDifferent conditions will be learned, but the overall structure of the decision tree will remain the same. \nIf the features change, then the conditions will change. \nThe structure of the decision tree will be completely different. \nThe structure of the decision tree will actually be pretty much the same. The conditions will change, though.\n\nQuestion 2 \nWhat two answers best describe the effect of testing only half (randomly selected) of the candidate threshold values in X? \nThe information gain would be higher or equal. \nThe information gain would be lower or equal. \nWell done. \nThe final decision tree would have worse testing accuracy. \nThe final decision tree would have no better training accuracy. \nWell done.\n\nQuestion 3 \nWhat would happen if the \"information gain\" versus \"threshold\" curve had multiple local maxima? \nIt is impossible to have multiple local maxima. \nMultiple local maxima are possible. \nThe algorithm would select the local maxima with the smallest threshold value. \nThe algorithm would select the global maximum. \nWell done.\n\nQuestion 4\n\nCompute the information gain of the following split:\n\n| Node | # of positive examples | # of negative examples |\n|--------------|------------------------|------------------------|\n| parent node | 10 | 6 |\n| first child | 8 | 2 |\n| second child | 2 | 4 |\n\nClick the icon to see the answer. \n\n```scdoc\n# Positive label distribution\np_parent = 10 / (10+6) # = 0.625\np_child_1 = 8 / (8+2) # = 0.8\np_child_2 = 2 / (2+4) # = 0.3333333\n\n# Entropy\nh_parent = -p_parent * log(p_parent) - (1-p_parent) * log(1-p_parent) # = 0.6615632\nh_child_1 = ... # = 0.5004024\nh_child_2 = ... # = 0.6365142\n\n# Ratio of example in the child 1\ns = (8+2)/(10+6)\nf_final = s * h_child_1 + (1-s) * h_child_2 # = 0.5514443\n\ninformation_gain = h_parent - f_final # = 0.1101189\n```\n\n*** ** * ** ***"]]