Reinforcement Learning from Human Feedback (RLHF) ensures that your staff make the final determination on how to handle abnormal provider behavior. When there is a discrepancy between the system's recommendation and the action your team takes, that discrepancy is captured as training data and used to retrain the algorithm.
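The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration (all names, thresholds, and fields here are assumptions, not the actual system): the model recommends an action, a human reviewer makes the final call, and only disagreements are queued as corrections for the next retraining run.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Collects reviewer corrections to feed the next retraining run."""
    corrections: list = field(default_factory=list)

    def record(self, case_id, features, recommended, reviewed):
        # Only discrepancies between the model and the reviewer
        # become new training signal; agreements are not re-queued.
        if recommended != reviewed:
            self.corrections.append(
                {"case_id": case_id, "features": features, "label": reviewed}
            )

def recommend_action(features):
    # Stand-in for the real model: flag providers whose
    # (hypothetical) anomaly score exceeds a threshold.
    return "flag" if features["anomaly_score"] > 0.8 else "allow"

store = FeedbackStore()
case = {"anomaly_score": 0.85, "claims_per_day": 40}

model_says = recommend_action(case)   # model recommends "flag"
reviewer_says = "allow"               # staff make the final determination

store.record("case-001", case, model_says, reviewer_says)
print(len(store.corrections))         # one correction queued for retraining
```

In a production pipeline the queued corrections would be merged into the labeled dataset before the next model update; here they simply accumulate in memory to show where the human override enters the loop.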