We are excited to host the 2nd edition of the Gaze Meets ML workshop in conjunction with NeurIPS 2023 in December 2023. The workshop will take place in person in New Orleans. We look forward to seeing you there!

This year we again have a great lineup of speakers. We would like to thank our sponsors for their support. If you are interested in sponsoring, please find more information here.

For questions and further information, please reach out to gaze.neurips@gmail.com.

Follow us on Twitter: @Gaze_Meets_ML


Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal underlying human attentional patterns in real-life workflows, and it has therefore long been explored as a signal for directly measuring human cognition in various domains. Physiological data (including, but not limited to, eye gaze) offer new perception capabilities that could be used in several ML domains, e.g., egocentric perception, embodied AI, and NLP. They can help infer human perception, intentions, beliefs, goals, and other cognitive properties that are much needed for human-AI interaction and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scanpath prediction, with a twofold advantage: from the neuroscientific perspective, to better understand biological mechanisms; from the AI perspective, to equip agents with the ability to mimic or predict human behavior and to improve interpretability and interaction.

With the emergence of immersive technologies, there is now more than ever a need for experts of various backgrounds (e.g., the machine learning, vision, and neuroscience communities) to share expertise and contribute to a deeper understanding of the intricacies of cost-efficient human supervision signals (e.g., eye gaze) and their utilization toward bridging human cognition and AI in machine learning research and development. The goal of this workshop is to bring together an active research community to collectively drive progress in defining and addressing core problems in gaze-assisted machine learning.

For the previous edition of the workshop, visit Gaze Meets ML 2022.

Tentative Program

  • Morning session (225 mins):
    • Opening remarks (15 mins)
    • Keynote (45 mins)
    • Invited talks (3 × 30 mins = 90 mins)
    • Break (15 mins)
    • Paper presentations (60 mins)
  • Lunch (60 mins):
    • 60 mins, with a walk-around poster session
  • Afternoon session (205 mins):
    • Invited talk (1 × 30 mins = 30 mins)
    • Coffee break (15 mins)
    • Breakout session (90 mins)
    • Paper presentations (60 mins)
    • Closing remarks (10 mins)

All times are in Central Time.

Call for Papers

We welcome submissions that examine aspects of eye gaze in relation to cognitive science, psychophysiology, and computer science, or that propose methods for integrating eye gaze into machine learning. We also welcome applications from radiology, AR/VR, autonomous driving, and other domains that introduce methods and models utilizing eye-gaze technology.

Topics of interest include but are not limited to the following:

  • Understanding the neuroscience of eye-gaze and perception
  • State of the art in incorporating machine learning and eye-tracking
  • Annotation and ML supervision with eye-gaze
  • Attention mechanisms and their correlation with eye-gaze
  • Methods for gaze estimation and prediction using machine learning
  • Unsupervised ML using eye gaze information for feature importance/selection
  • Understanding human intention and goal inference
  • Using saccadic vision for ML applications
  • Use of gaze for human-AI interaction and agent coordination in multi-agent environments
  • Eye gaze for AI applications, e.g., NLP, Computer Vision, RL, Explainable AI, Embodied AI, Trustworthy AI
  • Ethics of Eye Gaze in AI
  • Gaze applications in cognitive psychology, radiology, neuroscience, AR/VR, autonomous cars, privacy, etc.

Important Dates

    Submission due: 27th September 2023
    Reviewing starts: 30th September 2023
    Reviewing ends: 16th October 2023
    Notification of acceptance: 27th October 2023
    SlidesLive presentation pre-recording upload for NeurIPS (hard deadline): 10th November 2023
    Camera ready paper: 17th November 2023
    Workshop Date: 16th December 2023


The workshop will feature two tracks for submission: a full, archival proceedings track with accepted papers published in the Proceedings of Machine Learning Research (PMLR), and a non-archival, extended abstract track. Submissions to either track will undergo the same double-blind peer review. Full proceedings papers can be up to 15 pages and extended abstracts up to 8 pages (both excluding references and appendices). Authors of accepted extended abstracts (non-archival submissions) retain full copyright of their work, and acceptance of such a submission to Gaze Meets ML does not preclude publication of the same material in another archival venue (e.g., a journal or conference).


For a list of commonly asked questions, please see the FAQs.


Organizers

  • Amarachi Mbakwe, MS (Virginia Tech)
  • Joy Tzung-yu Wu, MD, MPH (Stanford, IBM Research)
  • Dario Zanca, Ph.D. (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • Elizabeth A. Krupinski, PhD, FSPIE, FSIIM, FATA, FAIMBE (Emory University)
  • Satyananda Kashyap, Ph.D. (IBM Research)
  • Alexandros Karargyris, Ph.D. (MLCommons)

Program Committee

  • Anna Lisa Gentile (IBM Research)
  • Daniel Gruhl (IBM Research)
  • Ehsan Degan (IBM Research)
  • G Anthony Reina (Resilience)
  • Henning Müller (HES-SO Valais)
  • Hongzhi Wang (IBM Research)
  • Ken C. L. Wong (IBM Research)
  • Maria Xenochristou (Stanford University)
  • Mansi Sharma (Intel)
  • Mehdi Moradi (Google)
  • Niharika Shimona D'Souza (IBM Research)
  • Peter Mattson (Google)
  • Prashant Shah (Intel)
  • Sameer Antani (NIH)
  • Sivarama Krishnan Rajaraman (NIH)
  • Spyridon Bakas (University of Pennsylvania)
  • Szilard Vajda (Central Washington University)
  • Wolfgang Mehringer (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • Leo Schwinn (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • Thomas Altstidl (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • Kai Kohlhoff (Google Research)
  • Matteo Tiezzi (University of Siena)
  • Sema Candemir (Eskişehir Technical University)
  • Zhiyun Xue (NIH)
  • Cihan Topal (Istanbul Teknik Üniversitesi)
  • Aakash Bansal (Notre Dame)
  • Ricardo Bigolin Lanfredi (NIH)
  • Florian Strohm (University of Stuttgart)
  • Efe Bozkir (University of Tübingen)
  • Kamran Binaee (Magic Leap)
  • Junwen Wu (Intel)
  • Nishant Rai (Stanford University)
  • Brendan David-John (Virginia Tech)

Endorsements & Acknowledgements

We are a MICCAI-endorsed event: