Microsoft Decision Trees Algorithm
Overview
- Decision Trees Algorithm
- DMX Queries
- Data Mining using Decision Trees
- Model Content for a Decision Trees Model
- Decision Tree Parameters
- Decision Tree Stored Procedures
Decision Trees Algorithm

The Microsoft Decision Trees algorithm is a classification and regression algorithm provided by Microsoft SQL Server Analysis Services for predictive modeling of both discrete and continuous attributes.

For discrete attributes, the algorithm makes predictions based on the relationships between input columns in a dataset. It uses the values, known as states, of those columns to predict the states of a column that you designate as predictable. For example, in a scenario to predict which customers are likely to purchase a motorbike, if nine out of ten younger customers buy a motorbike but only two out of ten older customers do so, the algorithm infers that age is a good predictor of the purchase.
For continuous attributes, the algorithm uses linear regression to determine where the decision tree splits. If more than one column is set to predictable, or if the input data contains a nested table that is set to predictable, the algorithm builds a separate decision tree for each predictable column.
DMX Queries

Let's see how to use DMX queries by creating a simple tree model based on the School Plans data set. The SchoolPlans table contains data about 500,000 high school students, including ParentSupport, ParentIncome, Sex, IQ, and whether or not the student plans to attend school. Using the Decision Trees algorithm, you can create a mining model that predicts the SchoolPlans attribute from the other four attributes.
DMX Queries (Classification)

Model creation:

CREATE MINING STRUCTURE SchoolPlans
(
    ID LONG KEY,
    Sex TEXT DISCRETE,
    ParentIncome LONG CONTINUOUS,
    IQ LONG CONTINUOUS,
    ParentSupport TEXT DISCRETE,
    SchoolPlans TEXT DISCRETE
)
WITH HOLDOUT (10 PERCENT)

ALTER MINING STRUCTURE SchoolPlans
ADD MINING MODEL SchoolPlan
(
    ID,
    Sex,
    ParentIncome,
    IQ,
    ParentSupport,
    SchoolPlans PREDICT
)
USING Microsoft_Decision_Trees
DMX Queries (Classification)

Training the SchoolPlan model:

INSERT INTO SchoolPlans
    (ID, Sex, IQ, ParentSupport, ParentIncome, SchoolPlans)
OPENQUERY(SchoolPlans,
    'SELECT ID, Sex, IQ, ParentSupport, ParentIncome, SchoolPlans
     FROM SchoolPlans')
DMX Queries (Classification)

Predicting SchoolPlans for a new student. This query returns ID, SchoolPlans, and Probability:

SELECT t.ID, SchoolPlans.SchoolPlans,
       PredictProbability(SchoolPlans) AS [Probability]
FROM SchoolPlans
PREDICTION JOIN
    OPENQUERY(SchoolPlans,
        'SELECT ID, Sex, IQ, ParentSupport, ParentIncome
         FROM NewStudents') AS t
ON  SchoolPlans.ParentIncome  = t.ParentIncome AND
    SchoolPlans.IQ            = t.IQ AND
    SchoolPlans.Sex           = t.Sex AND
    SchoolPlans.ParentSupport = t.ParentSupport
DMX Queries (Classification)

This query returns the histogram of the SchoolPlans predictions in the form of a nested table:

SELECT t.ID,
       PredictHistogram(SchoolPlans) AS [SchoolPlans]
FROM SchoolPlans
PREDICTION JOIN
    OPENQUERY(SchoolPlans,
        'SELECT ID, Sex, IQ, ParentSupport, ParentIncome
         FROM NewStudents') AS t
ON  SchoolPlans.ParentIncome  = t.ParentIncome AND
    SchoolPlans.IQ            = t.IQ AND
    SchoolPlans.Sex           = t.Sex AND
    SchoolPlans.ParentSupport = t.ParentSupport
DMX Queries (Regression)

Regression means predicting continuous variables using linear regression formulas based on regressors that you specify. The following statements create and train a regression model that predicts ParentIncome using IQ, Sex, ParentSupport, and SchoolPlans, with IQ used as a regressor:

ALTER MINING STRUCTURE SchoolPlans
ADD MINING MODEL ParentIncome
(
    ID,
    Sex,
    ParentIncome PREDICT,
    IQ REGRESSOR,
    ParentSupport,
    SchoolPlans
)
USING Microsoft_Decision_Trees

INSERT INTO ParentIncome
DMX Queries (Regression)

Continuous prediction: this query uses the decision tree to predict ParentIncome for new students, along with the estimated standard deviation of each prediction:

SELECT t.ID, ParentIncome.ParentIncome,
       PredictStdev(ParentIncome) AS Deviation
FROM ParentIncome
PREDICTION JOIN
    OPENQUERY(SchoolPlans,
        'SELECT ID, Sex, IQ, ParentSupport, SchoolPlans
         FROM NewStudents') AS t
ON  ParentIncome.SchoolPlans   = t.SchoolPlans AND
    ParentIncome.IQ            = t.IQ AND
    ParentIncome.Sex           = t.Sex AND
    ParentIncome.ParentSupport = t.ParentSupport
DMX Queries (Association)

An example of an associative trees model built on a Dances data set. Each Show is treated as an attribute with two binary states: existing or missing.

CREATE MINING MODEL DanceAssociation
(
    ID LONG KEY,
    Gender TEXT DISCRETE,
    MaritalStatus TEXT DISCRETE,
    Shows TABLE PREDICT
    (
        Show TEXT KEY
    )
)
USING Microsoft_Decision_Trees
DMX Queries (Association)

Training the associative trees model. Because the model contains a nested table, the training statement involves the SHAPE statement:

INSERT INTO DanceAssociation
    (ID, Gender, MaritalStatus, Shows (SKIP, Show))
SHAPE
{
    OPENQUERY (DanceSurvey,
        'SELECT ID, Gender, [Marital Status]
         FROM Customers ORDER BY ID')
}
APPEND
(
    { OPENQUERY (DanceSurvey,
        'SELECT ID, Show
         FROM Shows ORDER BY ID') }
    RELATE ID TO ID
) AS Shows
DMX Queries (Association)

Suppose there is a married male customer who likes the Michael Jackson show. This query returns the five other shows this customer is most likely to find appealing:

SELECT t.ID,
       Predict(DanceAssociation.Shows, 5, $AdjustedProbability)
       AS Recommendation
FROM DanceAssociation
NATURAL PREDICTION JOIN
(
    SELECT '101' AS ID, 'Male' AS Gender,
           'Married' AS MaritalStatus,
        (SELECT 'Michael Jackson' AS Show) AS Shows
) AS t
Data Mining using Decision Trees

The most common data mining task for a decision tree is classification: determining whether or not a set of data belongs to a specific type, or class. The principal idea of a decision tree is to split the data recursively into subsets; the process of evaluating all inputs is then repeated on each subset. When this recursive process is completed, a decision tree has been formed.
Data Mining using Decision Trees

Decision trees offer several advantages over other data mining algorithms. Trees are quick to build and easy to interpret: each node in the tree is clearly labeled in terms of the input attributes, and each path from the root to a leaf forms a rule about the target variable. In the SchoolPlans model, for instance, a path might read as the rule "if ParentSupport is high and IQ is above some split value, then the student plans to attend school." Prediction based on decision trees is also efficient.
Model Content for a Decision Trees Model

The top level is the model node. The children of the model node are its tree root nodes; if the model contains a single tree, there is only one node at the second level. The nodes at the remaining levels are either intermediate nodes or leaf nodes of the tree. The probabilities of each predictable attribute state are stored in the distribution rowsets.
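You can inspect this layout directly with a DMX content query. A minimal sketch, assuming the SchoolPlan model created earlier:

-- Each row is one node of the content tree; PARENT_UNIQUE_NAME links a
-- node back to its parent, so the model node sits at the top and the
-- tree root, intermediate, and leaf nodes hang beneath it.
SELECT NODE_UNIQUE_NAME, PARENT_UNIQUE_NAME,
       NODE_TYPE, NODE_CAPTION, NODE_SUPPORT
FROM SchoolPlan.CONTENT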
Interpreting the Mining Model Content

A decision trees model has a single parent node that represents the model and its metadata, underneath which are independent trees representing the predictable attributes you select. For example, if you set up your decision tree model to predict whether customers will purchase something, and provide inputs for gender and income, the model creates a single tree for the purchasing attribute, with many branches that divide on conditions related to gender and income. If you then add a separate predictable attribute for participation in a customer rewards program, the algorithm creates two separate trees under the parent node: one containing the analysis for purchasing, and another containing the analysis for the customer rewards program.
Decision Tree Parameters

Tree growth, tree shape, and the input/output attribute settings are controlled using these parameters. You can fine-tune your model's accuracy by adjusting them.
Decision Tree Parameters

COMPLEXITY_PENALTY is a floating-point number in the range [0,1] that controls how much penalty the algorithm applies to complex trees. When the value is set close to 0, there is a low penalty on tree growth and you may see large trees; when it is set close to 1, growth is penalized heavily and the resulting trees are relatively small. The default depends on the number of input attributes: 0.5 for fewer than 10 attributes, 0.9 for between 10 and 100, and 0.99 for more than 100.
Decision Tree Parameters

MINIMUM_SUPPORT specifies the minimum size of each node in a tree. For example, if this value is set to 25, any split that would produce a child node containing fewer than 25 cases is not accepted. The default value is 10.

SCORE_METHOD specifies the method for determining a split score during tree growth. The three possible values are:
- SCORE_METHOD = 1: use an entropy score for tree growth.
- SCORE_METHOD = 2: use the Bayesian with K2 Prior method, which adds a constant for each state of the predictable attribute in a tree node, regardless of the node's level in the tree.
- SCORE_METHOD = 3: use the Bayesian Dirichlet Equivalent with Uniform Prior (BDEU) method.
Decision Tree Parameters

SPLIT_METHOD specifies the tree shape (binary or bushy):
- SPLIT_METHOD = 1 means the tree is split only in a binary way.
- SPLIT_METHOD = 2 indicates that the tree should always split completely on each attribute.
- SPLIT_METHOD = 3, the default, lets the decision tree automatically choose the better of the previous two methods.

MAXIMUM_INPUT_ATTRIBUTES is a threshold parameter for feature selection. When the number of input attributes is greater than this value, feature selection is invoked implicitly to select the most significant input attributes. A parameter-setting example follows below.
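Algorithm parameters are supplied in parentheses after the algorithm name in the USING clause. A minimal sketch, assuming the SchoolPlans structure from earlier; the model name SchoolPlanTuned and the specific parameter values are illustrative, not prescriptive:

ALTER MINING STRUCTURE SchoolPlans
ADD MINING MODEL SchoolPlanTuned
(
    ID,
    Sex,
    ParentIncome,
    IQ,
    ParentSupport,
    SchoolPlans PREDICT
)
USING Microsoft_Decision_Trees
    (COMPLEXITY_PENALTY = 0.8,  -- penalize tree growth more heavily
     MINIMUM_SUPPORT = 25,      -- reject splits yielding nodes under 25 cases
     SCORE_METHOD = 3,          -- BDEU split scoring
     SPLIT_METHOD = 1)          -- force binary splits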
Decision Tree Parameters

MAXIMUM_OUTPUT_ATTRIBUTES is another threshold parameter for feature selection. When the number of predictable attributes is greater than this value, feature selection is invoked implicitly to select the most significant attributes.

FORCE_REGRESSOR allows you to override the regressor selection logic of the decision tree algorithm and always use a specified regressor in the regression equations of regression trees. This parameter is typically used in price elasticity models. For example, suppose you have a model that predicts Sales using Price and other attributes; if you specify FORCE_REGRESSOR = Price, you get regression formulas using Price and other significant attributes for each node of the tree.
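A hedged sketch of the price elasticity example; the SalesData structure and all column names here are hypothetical, and the exact quoting of the regressor column may vary by SSAS version:

ALTER MINING STRUCTURE SalesData
ADD MINING MODEL SalesElasticity
(
    ID,
    Price REGRESSOR,   -- flagged as a candidate regressor
    Quantity,
    Sales PREDICT
)
-- Force the algorithm to keep Price in every node's regression formula.
USING Microsoft_Decision_Trees (FORCE_REGRESSOR = Price)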
Decision Tree Stored Procedures

The set of system stored procedures used by the Decision Tree viewer is:

CALL System.GetTreeScores('MovieAssociation')
CALL System.DTGetNodes('MovieAssociation')
CALL System.DTGetNodeGraph('MovieAssociation', 60)
CALL System.DTAddNodes('MovieAssociation', '36;34',
    '99;282;20;261;26;201;33;269;30;187')
Decision Tree Stored Procedures

GetTreeScores is the procedure that the Decision Tree viewer uses to populate the drop-down tree selector. It takes the name of a decision tree model as a parameter and returns a table containing a row for every tree in the model, with the following three columns:
- ATTRIBUTE_NAME is the name of the tree.
- NODE_UNIQUE_NAME is the content node representing the root of the tree.
- MSOLAP_NODE_SCORE is a number representing the amount of information (number of nodes) in the tree.
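For example, run against the SchoolPlan classification model built earlier (a sketch; the values returned depend on the trained model):

CALL System.GetTreeScores('SchoolPlan')
-- One row per tree in the model:
--   ATTRIBUTE_NAME     name of the tree, e.g. 'SchoolPlans'
--   NODE_UNIQUE_NAME   content node for that tree's root
--   MSOLAP_NODE_SCORE  information content (number of nodes)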
Decision Tree Stored Procedures

DTGetNodes is used by the decision tree Dependency Network viewer when you click the Add Nodes button. It returns a row for every potential node in the dependency network, with the following two columns:
- NODE_UNIQUE_NAME1 is an identifier that is unique within the dependency network.
- NODE_CAPTION is the name of the node.

Decision Tree Stored Procedures

The DTGetNodeGraph procedure returns four columns. When a row has NODE_TYPE = 1, it contains a description of a node, and the remaining three columns have the following interpretation:
- NODE_UNIQUE_NAME1 contains a unique identifier for the node.
- NODE_UNIQUE_NAME2 contains the node caption.

When a row has NODE_TYPE = 2, it represents a directed edge in the graph, and the remaining columns have these interpretations:
- NODE_UNIQUE_NAME1 contains the node name of the starting point of the edge.
- NODE_UNIQUE_NAME2 contains the node name of the ending point of the edge.
- MSOLAP_NODE_SCORE contains the relative weight of the edge.
Decision Tree Stored Procedures

DTAddNodes allows you to add new nodes to an existing graph. It takes a model name, a semicolon-separated list of the IDs of nodes you want to add to the graph, and a semicolon-separated list of the IDs of nodes already in the graph. The procedure returns a table similar to the NODE_TYPE = 2 section of DTGetNodeGraph, but without the NODE_TYPE column. The rows in the result set contain all the edges between the added nodes, plus all the edges between the added nodes and the nodes specified as already in the graph.
Summary
- Decision Trees Algorithm Overview
- DMX Queries
- Data Mining using Decision Trees
- Interpreting the Model Content for a Decision Trees Model
- Decision Tree Parameters
- Decision Tree Stored Procedures
Visit more self-help tutorials

Pick a tutorial of your choice and browse through it at your own pace. The tutorials section is free and self-guiding and does not involve any additional support. Visit us at www.dataminingtools.net