At HelioCampus, we go to great lengths to ensure that our predictive models are as transparent and explainable as possible. Check out this webinar replay to learn how we train multiple models using different input fields, algorithms, and settings so that the predictions we produce make sense and are useful for answering the question at hand.
We’ve recently implemented a new set of tools that allow us to more easily and effectively compare model performance and assess which input features have the most influence on each student’s score. Used in the aggregate, these tools can also help identify areas of concern that may be addressed at the policy level.
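To give a concrete sense of what comparing candidate models and inspecting per-student feature influence can look like, here is a minimal sketch in Python using scikit-learn. The feature names, models, and synthetic data are illustrative placeholders, not our production tooling; the per-student influence shown is the simple coefficient-times-value breakdown available for a linear model.

```python
# Minimal sketch: compare two candidate models on a shared metric, then
# inspect which features drive one student's score. Feature names, models,
# and data are illustrative placeholders, not HelioCampus's actual tools.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = ["gpa", "credits_attempted", "lms_logins", "aid_gap"]  # hypothetical
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

# 1. Compare candidate models on the same cross-validated metric (AUC here).
candidates = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")

# 2. For an interpretable model, a student's feature influence can be read
#    directly as coefficient * standardized feature value (signed log-odds
#    contribution), sorted by magnitude.
pipe = candidates["logistic"].fit(X, y)
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]
student = scaler.transform(X[[0]])[0]       # one student's standardized features
contributions = clf.coef_[0] * student      # signed influence on the prediction
for fname, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{fname:>18}: {c:+.3f}")
```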
What's Included in This Webinar
- The tools our Data Science team uses for model evaluation and explainability
- How we use them internally to check our model results
- What we provide to our clients to help increase understanding and adoption of our predictions