What Is a Decision Tree?

A decision tree is a supervised machine learning algorithm that recursively splits data into branches based on feature values to predict an outcome.

Written by Anthony Corbo
Updated by Brennan Whitfield | Jun 03, 2025
Summary: A decision tree is a supervised machine learning algorithm that breaks down data into variables to guide decision-making. It's used in business to map product plans, choose suppliers, reduce churn and more, though it can become complex with many branches and variables.

A decision tree is a supervised machine learning algorithm used for classification or regression tasks. It combines multiple data points and weighs degrees of uncertainty to determine the best approach to complex decision-making. Decision trees work by recursively splitting data into subsets based on feature values, forming a tree-like structure where each leaf node represents a prediction.
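The recursive splitting described above can be sketched with a short example. This is a minimal illustration, assuming scikit-learn is available and using its built-in iris dataset; the hyperparameters shown are illustrative, not prescriptive.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Each internal node tests one feature value; each leaf node
# holds a class prediction for the samples that reach it.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"test accuracy: {tree.score(X_test, y_test):.2f}")
print(f"depth: {tree.get_depth()}, leaves: {tree.get_n_leaves()}")
```

Fitting the model learns the tree structure from the data; predictions then follow a root-to-leaf path of feature tests.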

This process allows companies to create product roadmaps, choose between suppliers, reduce churn, determine areas to cut costs and more.

 

Decision Tree Classification Clearly Explained! | Video: Normalized Nerd

What Are Decision Trees Used For?

Decision trees are typically used to support informed decision-making. They break a problem down into a sequence of feature-based tests so that each path through the tree leads to a single, defensible decision or prediction.

Decision Tree Components

  • Root Node: Represents the initial feature tested.
  • Internal Nodes: Represent feature-based tests that split the data.
  • Branches: Indicate outcomes of feature-based tests.
  • Leaf Nodes (Terminal Nodes): Hold the final decision or prediction.
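The four components above can be seen in a tiny hand-built tree. This is a hypothetical toy example (the feature names, thresholds and labels are illustrative, not from the article), showing how a sample travels from the root node through branches to a leaf:

```python
# Toy tree: each dict is a node; leaves carry a "predict" value.
tree = {                                   # root node: first feature tested
    "feature": "income", "threshold": 50_000,
    "left": {"predict": "deny"},           # leaf node (terminal)
    "right": {                             # internal node: another test
        "feature": "debt", "threshold": 10_000,
        "left": {"predict": "approve"},    # leaves hold the final decision
        "right": {"predict": "deny"},
    },
}

def classify(node, sample):
    """Follow branches (test outcomes) from the root down to a leaf."""
    while "predict" not in node:
        branch = "left" if sample[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["predict"]

print(classify(tree, {"income": 60_000, "debt": 5_000}))  # → approve
```

Each `while` iteration is one feature-based test; the loop ends when a terminal node is reached.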

To be effective, a decision tree must lay out every relevant possibility in a clear, structured manner. At the same time, it should surface multiple candidate paths so that data scientists and stakeholders can compare options, make collaborative decisions and optimize business growth.


 

Decision Trees vs. Random Forest: What’s the Difference?

A decision tree weighs multiple variables to determine potential outcomes and ultimately arrive at a single, best decision. Random forest algorithms go a step further: rather than relying on a single tree's decision, they combine many.

Unlike a single decision tree, a random forest trains multiple trees on different data and feature subsets and aggregates their outputs. This ensemble method reduces overfitting and improves prediction accuracy, but it sacrifices interpretability. By combining multiple models, random forests generate more robust and generalizable predictions than individual trees.
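The contrast can be sketched side by side. This is a minimal comparison, assuming scikit-learn and using a synthetic dataset; the sizes and seeds are arbitrary choices for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single tree trained on all the data.
single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# 100 trees, each trained on a bootstrap sample with random feature
# subsets at each split; predictions are aggregated by majority vote.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print(f"single tree accuracy:   {single.score(X_te, y_te):.2f}")
print(f"random forest accuracy: {forest.score(X_te, y_te):.2f}")
```

On most datasets the ensemble's averaged vote generalizes better than any one tree, at the cost of being much harder to inspect.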

When to Use Decision Tree Over Random Forest

A random forest is the better choice when many pieces of data from a complex data set must be analyzed to generate a final output: we effectively sacrifice easy interpretability in exchange for aggregating virtually limitless inputs into the most recurring, reliable output. A decision tree is best for simpler data sets, where its easier interpretability and simpler model training are advantages.
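The interpretability advantage of a single tree is concrete: its learned rules can be printed as plain if/else splits. A brief sketch, assuming scikit-learn and its iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    data.data, data.target
)

# export_text renders the fitted tree as human-readable decision rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A random forest of 100 such trees offers no comparably compact summary, which is the interpretability trade-off described above.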

 

Disadvantages of Decision Trees

The main disadvantages of decision trees lie in their tendency to become overly complex when trying to maximize information gain at each split, leading to overfitting and poor generalization.

Decision trees are used to derive logical solutions to complex problems, but they are ineffective unless they account for every possible outcome of a decision. As a result, they tend to accumulate many branches and variables, and this excessive branching can produce an unwieldy model that is hard to interpret, creating more confusion than clarity when making decisions.
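The usual remedy for this excessive branching is to constrain tree growth. A minimal sketch, assuming scikit-learn and a synthetic dataset; `max_depth` and `min_samples_leaf` are two of the common pruning-style controls:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# An unconstrained tree keeps splitting until every leaf is pure.
full = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)

# Limiting depth and requiring a minimum leaf size curbs the branching.
pruned = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10,
                                random_state=1).fit(X_tr, y_tr)

print(f"full tree leaves:   {full.get_n_leaves()}")
print(f"pruned tree leaves: {pruned.get_n_leaves()}")
```

The constrained tree is far smaller and easier to read, and its leaves are backed by more samples, which typically improves generalization.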

Decision trees may also run into trouble with qualitative variables, those that fit into categories rather than taking numerical values. Trees can handle categorical data, but variables with high cardinality (many unique categories) may cause excessive branching and model complexity.
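One common way to feed categorical variables to a tree is one-hot encoding, which also makes the cardinality problem visible: every unique category becomes its own column. A brief sketch, assuming pandas, with a hypothetical churn table:

```python
import pandas as pd

# Illustrative data: "city" is a categorical (qualitative) feature.
df = pd.DataFrame({"city": ["NYC", "LA", "NYC", "Chicago"],
                   "churned": [1, 0, 1, 0]})

# One indicator column is created per unique category, so a
# high-cardinality column explodes into many features.
encoded = pd.get_dummies(df, columns=["city"])
print(list(encoded.columns))
```

With three cities this yields three indicator columns; a column with thousands of categories would yield thousands, inflating the feature space the tree must split on.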

Frequently Asked Questions

What is a decision tree?

A decision tree is a supervised machine learning algorithm used to make informed decisions by breaking down complex problems into a series of variables and potential outcomes. Decision trees are applied in areas like product planning, supplier selection, churn reduction and cost optimization.

What are the main components of a decision tree?

The main components of a decision tree include:

  • Root Node: Represents the initial feature tested.
  • Internal Nodes: Represent feature-based tests that split the data.
  • Branches: Indicate outcomes of feature-based tests.
  • Leaf Nodes (Terminal Nodes): Hold the final decision or prediction.

What is the difference between a decision tree and a random forest?

A decision tree produces a single outcome based on the data, while a random forest aggregates the outputs of multiple decision trees to make a majority-based prediction, often improving accuracy at the expense of interpretability.
