Evaluation of ML systems by closing the feedback loop
In this tutorial, we will practice selected techniques for evaluating machine learning systems, and then monitoring them in production. It is one part of a 3-part series:
- Offline evaluation of ML systems
- Online evaluation of ML systems
- Evaluation of ML systems by closing the feedback loop (this part!)
In this particular section, we will practice evaluation in the online testing stage, when the system is serving real users, by "closing the loop" between production use of the service and continuous evaluation, monitoring, and re-training.
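To make the idea of "closing the loop" concrete before diving in, here is a minimal Python sketch under assumed names: `FeedbackLoop`, `serve`, `feedback`, and `retrain` are illustrative inventions, not the tutorial's actual code. The pattern is the one described above: log each production prediction, match user feedback to it when it arrives, track a rolling accuracy, and trigger re-training when quality degrades.

```python
from collections import deque

class FeedbackLoop:
    """Illustrative sketch: log predictions, collect feedback, re-train on drift."""

    def __init__(self, model, window=100, threshold=0.85):
        self.model = model                  # any object with a .predict(x) method
        self.window = deque(maxlen=window)  # rolling record of labeled feedback
        self.threshold = threshold          # re-train below this rolling accuracy
        self.log = []                       # production log of (request_id, input, prediction)

    def serve(self, request_id, x):
        """Serve a prediction and log it so later feedback can be matched to it."""
        pred = self.model.predict(x)
        self.log.append((request_id, x, pred))
        return pred

    def feedback(self, request_id, true_label):
        """Close the loop: attach a user-provided label to a logged prediction."""
        for rid, x, pred in self.log:
            if rid == request_id:
                self.window.append((x, pred == true_label, true_label))
                break
        # Only act once the window is full, to avoid noisy early triggers.
        if len(self.window) == self.window.maxlen and self.rolling_accuracy() < self.threshold:
            self.retrain()

    def rolling_accuracy(self):
        """Accuracy over the most recent labeled feedback."""
        if not self.window:
            return 1.0
        return sum(correct for _, correct, _ in self.window) / len(self.window)

    def retrain(self):
        """Placeholder: re-train the model on newly labeled production data."""
        labeled = [(x, label) for x, _, label in self.window]
        print(f"Re-training on {len(labeled)} labeled feedback examples...")
        # With a real model: self.model.fit(*zip(*labeled))

if __name__ == "__main__":
    class ThresholdModel:               # trivial stand-in model for the sketch
        def predict(self, x):
            return int(x > 0.5)

    loop = FeedbackLoop(ThresholdModel(), window=5, threshold=0.8)
    loop.serve("req-1", 0.9)
    loop.feedback("req-1", 1)           # user confirms the prediction was correct
    print("rolling accuracy:", loop.rolling_accuracy())
```

In the tutorial, each of these pieces (serving, feedback collection, monitoring, re-training) is handled by real infrastructure rather than a single class, but the control flow is the same.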
Follow along at Evaluation of ML systems by closing the feedback loop.
This tutorial uses: one m1.medium VM at KVM@TACC, and one floating IP.
This material is based upon work supported by the National Science Foundation under Grant No. 2230079.
Launching this artifact will open it within Chameleon’s shared Jupyter experiment environment, which is accessible to all Chameleon users with an active allocation.
Download archive
Download an archive containing the files of this artifact.
Download with git
Clone the git repository for this artifact, and check out the commit for this version:
git clone https://github.com/teaching-on-testbeds/eval-loop-chi
cd eval-loop-chi  # enter the cloned directory
git checkout cd5f13948f001621c9bd1082b148285f5bbbdefe
Submit feedback through GitHub issues