This document describes a machine learning approach for automatically tracking fiducial markers in videos of feeding rodents, with the goal of analyzing feeding behavior and dynamics more efficiently. An object detector locates the head and markers in each frame; Kalman filters and tracking algorithms then associate markers across frames; corresponding tracks from the two camera views are matched; and finally the 2D marker positions are converted to 3D coordinates. Results show the approach works well, though handling of off-screen and occluded markers remains a weak point. Next steps include tuning the models and exploring alternative tracking and assignment methods.
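As a rough illustration of the frame-to-frame association step, the sketch below pairs a constant-velocity Kalman filter per marker with Hungarian assignment (via `scipy.optimize.linear_sum_assignment`) between predicted track positions and new detections. The `Track` and `associate` names, the noise settings, and the distance gate are illustrative assumptions, not details taken from the pipeline itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


class Track:
    """Constant-velocity Kalman filter over a single 2D marker position."""

    def __init__(self, xy, dt=1.0):
        # State: [x, y, vx, vy]; initialize at the detection with zero velocity.
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.array([[1, 0, dt, 0],              # constant-velocity transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],               # we observe x, y only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                      # process noise (assumed)
        self.R = np.eye(2) * 1.0                       # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


def associate(tracks, detections, max_dist=25.0):
    """Match predicted track positions to detections by minimum total distance."""
    preds = np.array([t.predict() for t in tracks])
    cost = np.linalg.norm(preds[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    for r, c in matches:
        tracks[r].update(detections[c])
    return matches
```

In this sketch, detections that fail the distance gate are simply left unmatched; handling of off-screen or occluded markers (e.g., coasting a track on predictions for a few frames before dropping it) is one of the improvements the document identifies.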