Decision Trees

Predicting student performance from five features: study hours, lecture attendance, lab participation, assignments completed, and sleep.

Student Dataset

Our dataset contains 30 students with 5 features measuring their academic engagement. The goal is to predict whether a student will have Good or Poor performance.

Each row records: Hours Studied, Lectures Attended (%), Labs Attended, Assignments Done (%), Sleep (hrs/night), and the Performance label.
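A few rows of the dataset can be sketched in plain Python. The values below are illustrative stand-ins, not the actual 30-student data:

```python
# Hypothetical rows mirroring the dataset's columns; values are illustrative,
# not the real 30-student data.
students = [
    # (hours_studied, lectures_pct, labs_attended, assignments_pct, sleep_hrs, performance)
    (12, 85, 9, 90, 7, "Good"),
    (4, 40, 3, 50, 6, "Poor"),
    (9, 70, 7, 80, 8, "Good"),
    (3, 55, 2, 45, 5, "Poor"),
]

FEATURE_NAMES = ["Hours Studied", "Lectures (%)", "Labs Attended",
                 "Assignments (%)", "Sleep (hrs)"]
```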

Explore the Data

Choose any two features to visualize how they relate to student performance. Notice how different feature combinations reveal different patterns.


Building the Decision Tree

Watch how the algorithm splits the data step by step. Each split is chosen to maximize information gain — creating the purest possible groups.

(Two linked views update as the tree grows: the feature space, Hours vs Assignments, and the tree structure.)

How it works: The algorithm evaluates every possible split on every feature and picks the one with the highest information gain.
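That exhaustive search can be sketched in plain Python: entropy measures a group's impurity, and information gain is the entropy reduction a candidate split achieves. This is a simplified sketch of the idea, not the page's actual implementation:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels (0 = pure, 1 = 50/50 for two classes)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature_idx, threshold):
    """Entropy reduction from splitting on rows[feature_idx] <= threshold."""
    left = [l for r, l in zip(rows, labels) if r[feature_idx] <= threshold]
    right = [l for r, l in zip(rows, labels) if r[feature_idx] > threshold]
    if not left or not right:  # degenerate split: no information gained
        return 0.0
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

def best_split(rows, labels):
    """Try every feature and every observed value as a threshold; keep the best."""
    best = (0.0, None, None)  # (gain, feature_idx, threshold)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            gain = information_gain(rows, labels, f, t)
            if gain > best[0]:
                best = (gain, f, t)
    return best
```

Growing the tree is then just recursion: apply `best_split`, partition the rows, and repeat on each side until a group is pure or a stopping rule fires.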

Feature Importance

Not all features are equally useful for prediction. Feature importance shows how much each feature contributes to the decision tree's splits.

Make a Prediction

Enter a student's academic profile to see how the decision tree classifies them.

Example profile:
Hours Studied/Week: 10
Lectures Attended (%): 60
Labs Attended: 5
Assignments Done (%): 70
Sleep Hours/Night: 7

Predicted Performance: Good
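Classifying a student is just walking the tree from the root, answering one question per node. The hand-written stand-in below uses invented thresholds, not the ones learned from the actual dataset, but it shows how the decision path doubles as an explanation:

```python
def predict(hours, lectures, labs, assignments, sleep):
    """Walk a toy tree top-down; thresholds are illustrative, not learned.

    Returns the predicted label and the decision path that produced it.
    """
    if hours <= 6:
        if assignments <= 60:
            return "Poor", ["hours <= 6", "assignments <= 60"]
        return "Good", ["hours <= 6", "assignments > 60"]
    if assignments <= 55:
        return "Poor", ["hours > 6", "assignments <= 55"]
    return "Good", ["hours > 6", "assignments > 55"]

# The example profile from above.
label, path = predict(hours=10, lectures=60, labs=5, assignments=70, sleep=7)
```

The returned `path` is the tree's built-in explanation: exactly which questions were asked and how they were answered.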

Key Takeaways

Interpretable

Decision trees are easy to understand and explain. You can trace exactly why a prediction was made.

Feature Selection

The algorithm automatically identifies which features are most important for prediction.

Overfitting Risk

Trees that grow too deep memorize noise in the training data. Guard against this with pruning, maximum-depth limits, or ensemble methods such as Random Forests.
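With scikit-learn (assumed available), those guards are constructor parameters. The data below is a small synthetic stand-in for the student features, not the real dataset:

```python
# Sketch of overfitting controls in scikit-learn; X, y are synthetic stand-ins.
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X = [[12, 85, 9, 90, 7], [4, 40, 3, 50, 6], [9, 70, 7, 80, 8],
     [3, 55, 2, 45, 5], [11, 90, 8, 85, 7], [5, 45, 4, 55, 6]]
y = ["Good", "Poor", "Good", "Poor", "Good", "Poor"]

# max_depth caps how many questions the tree may ask before predicting;
# ccp_alpha prunes branches whose complexity outweighs their gain.
tree = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=0).fit(X, y)

# An ensemble of shallow trees averages away individual trees' noise.
forest = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0).fit(X, y)
```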