IBM SPSS Decision Trees

This module features highly visual classification and decision trees. These trees enable you to present categorical results in an intuitive manner, so you can more clearly explain categorical analysis to non-technical audiences.

IBM SPSS Decision Trees enables you to explore results and visually determine how your model flows. This helps you find specific subgroups and relationships that you might not uncover using more traditional statistics. Use IBM SPSS Decision Trees when you need to identify groups and subgroups.

Operating systems supported: Windows, Mac, Linux

The IBM SPSS Decision Trees procedure creates a tree-based classification model. Decision Trees can be used as predictive models to predict the values of a dependent (target) variable based on values of independent (predictor) variables. This approach is often used as an alternative to methods such as Logistic Regression.
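As a rough sketch of the idea (not SPSS code), a tree-based classifier repeatedly splits the data on predictor values so that each resulting group is as pure as possible in the target variable. The minimal pure-Python example below finds the single best binary split of one numeric predictor by minimizing Gini impurity, the criterion used by CART-style trees; the data and variable names are hypothetical.

```python
# Minimal sketch (not SPSS code): a one-split "decision stump" that
# predicts a categorical target from a numeric predictor by choosing
# the threshold that minimizes Gini impurity (the CART-style criterion).
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Return (threshold, weighted impurity) of the best binary split."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical data: predictor = age, target = "buys" yes/no.
ages = [22, 25, 31, 38, 44, 52]
buys = ["no", "no", "no", "yes", "yes", "yes"]
threshold, impurity = best_split(ages, buys)
print(threshold, impurity)  # the split "age <= 31" separates the classes
```

A full tree applies this search recursively to each resulting subgroup, over all predictors, until a stopping rule is met.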

Because the Decision Trees module is frequently used to categorise cases correctly into a target group, it can be applied in segmentation and profiling applications where analysts wish to describe, for example, which customers are more likely to be dissatisfied than others. It can also be used to describe cluster membership, with the target field being the cluster variable produced by an SPSS cluster analysis.

Decision Trees offers four tree-growing algorithms:

  • CHAID (Chi-squared Automatic Interaction Detection)
  • Exhaustive CHAID
  • CRT (Classification and Regression Trees)
  • QUEST (Quick, Unbiased, Efficient Statistical Tree)
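To illustrate the statistic behind CHAID's split selection, the hedged sketch below computes a Pearson chi-square test of independence between a candidate predictor category and the target, using pure Python and hypothetical counts. Real CHAID also merges similar categories and applies multiplicity adjustments to the p-values, both of which are omitted here.

```python
# Sketch of CHAID's splitting criterion: the Pearson chi-square
# statistic for a contingency table of predictor category vs. target
# class. Counts are hypothetical; category merging and p-value
# adjustment (part of real CHAID) are omitted.
def chi_square(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = predictor categories, cols = target classes.
table = [[30, 10],   # e.g. segment A: 30 satisfied, 10 dissatisfied
         [15, 25]]   # e.g. segment B: 15 satisfied, 25 dissatisfied
print(round(chi_square(table), 2))
```

A larger statistic indicates a stronger association between the candidate split and the target, so CHAID prefers the predictor whose split yields the most significant chi-square result.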