We are excited to host the 2nd edition of the Gaze Meets ML workshop in conjunction with NeurIPS 2023 this December. The workshop will take place in person in New Orleans. We look forward to seeing you there!

This year we again have a great lineup of speakers. We would like to thank our sponsors for their support. If you are interested in sponsoring, please find more information here.

For questions and further information, please reach out to gaze.neurips@gmail.com.

Follow us on Twitter: @Gaze_Meets_ML

About

Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal underlying human attentional patterns in real-life workflows, and it has long been explored as a signal for directly measuring human-related cognition in various domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities that could be used in several ML domains, e.g., egocentric perception, embodied AI, and NLP. They can help infer human perception, intentions, beliefs, goals, and other cognitive properties that are much needed for human-AI interaction and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scanpath prediction, with twofold advantages: from the neuroscientific perspective, a better understanding of biological mechanisms; from the AI perspective, agents equipped to mimic or predict human behavior, with improved interpretability and interaction.

With the emergence of immersive technologies, there is a need, now more than ever, for experts from various backgrounds (e.g., the machine learning, vision, and neuroscience communities) to share expertise and contribute to a deeper understanding of the intricacies of cost-efficient human supervision signals (e.g., eye gaze) and their utilization in bridging human cognition and AI in machine learning research and development. The goal of this workshop is to bring together an active research community to collectively drive progress in defining and addressing core problems in gaze-assisted machine learning.

For the previous edition of the workshop, visit Gaze Meets ML 2022.

Workshop Schedule

Webpage: https://neurips.cc/virtual/2023/workshop/66537

All times are in Central Time

Sat 7:30 a.m. - 8:15 a.m. Meet and Greet and Getting Started (Break)

Sat 8:15 a.m. - 8:30 a.m.
Opening Remarks (15 mins) Organizers

Sat 8:30 a.m. - 9:15 a.m.
Invited Talk (title TBD) Bertram E. Shi

Sat 9:15 a.m. - 10:00 a.m.
Invited Talk: Accelerating human attention research via smartphones Vidhya Navalpakkam

Sat 10:00 a.m. - 10:30 a.m.
Coffee Break and Poster Walk-Around

Sat 10:30 a.m. - 10:45 a.m.
Interaction-aware Dynamic 3D Gaze Estimation in Videos (Spotlight) Chenyi Kuang · Jeffrey O Kephart · Qiang Ji

Sat 10:45 a.m. - 11:00 a.m.
SuperVision: Self-Supervised Super-Resolution for Appearance-Based Gaze Estimation (Spotlight) Galen O'Shea · Majid Komeili

Sat 11:00 a.m. - 11:15 a.m.
EG-SIF: Improving Appearance Based Gaze Estimation using Self Improving Features (Spotlight) Vasudev Singh · Chaitanya Langde · Sourav Lakhotia · Vignesh Kannan · Shuaib Ahmed

Sat 11:15 a.m. - 11:30 a.m.
Planning by Active Sensing (Spotlight) Kaushik Lakshminarasimhan · Seren Zhu · Dora Angelaki

Sat 11:30 a.m. - 11:45 a.m.
Crafting Good Views of Medical Images for Contrastive Learning via Expert-level Visual Attention (Spotlight) Sheng Wang · Zihao Zhao · Lichi Zhang · Dinggang Shen · Qian Wang

Sat 11:45 a.m. - 1:30 p.m.
Lunch and Poster Walk-Around

Sat 1:30 p.m. - 2:15 p.m.
Invited Talk: Gazing into the crystal ball: Predicting future gaze events in virtual reality Tim Rolff

Sat 2:15 p.m. - 2:30 p.m.
Memory-Based Sequential Attention (Spotlight) Jason Stock · Charles Anderson

Sat 2:30 p.m. - 2:45 p.m.
An Attention-based Predictive Agent for Handwritten Numeral/Alphabet Recognition via Generation (Spotlight) Bonny Banerjee · Murchana Baruah

Sat 2:45 p.m. - 4:30 p.m.
Breakout Session (discussion in small onsite groups on preselected themes, plus coffee break)

Sat 4:30 p.m. - 4:45 p.m.
Wrap Up: Sponsors' Talk and Award Ceremony

Sat 4:45 p.m. - 5:00 p.m.
Wrap Up: Closing Remarks

Call for Papers

We welcome submissions that examine eye gaze from the perspectives of cognitive science, psychophysiology, or computer science, or that propose methods for integrating eye gaze into machine learning. We also welcome applications from radiology, AR/VR, autonomous driving, and other domains that introduce methods and models utilizing eye-gaze technology.

Topics of interest include but are not limited to the following:

  • Understanding the neuroscience of eye-gaze and perception
  • State of the art in incorporating machine learning and eye-tracking
  • Annotation and ML supervision with eye-gaze
  • Attention mechanisms and their correlation with eye-gaze
  • Methods for gaze estimation and prediction using machine learning
  • Unsupervised ML using eye gaze information for feature importance/selection
  • Understanding human intention and goal inference
  • Using saccadic vision for ML applications
  • Use of gaze for human-AI interaction and agent coordination in multi-agent environments
  • Eye gaze used for AI, e.g., NLP, Computer Vision, RL, Explainable AI, Embodied AI, Trustworthy AI
  • Ethics of Eye Gaze in AI
  • Gaze applications in cognitive psychology, radiology, neuroscience, AR/VR, autonomous cars, privacy, etc.

Important Dates

    Submission due: 8th October 2023 (extended from 27th September)
    Reviewing starts: 9th October 2023
    Reviewing ends: 23rd October 2023
    Notification of acceptance: 27th October 2023
    SlidesLive presentation pre-recording upload for NeurIPS (hard deadline): 10th November 2023
    Camera-ready paper: 20th November 2023 (extended from 17th November)
    Workshop date: 16th December 2023

Submissions

The workshop features two submission tracks: a full, archival proceedings track, with accepted papers published in the Proceedings of Machine Learning Research (PMLR), and a non-archival extended abstract track. Submissions to either track undergo the same double-blind peer review. Full proceedings papers can be up to 15 pages and extended abstracts up to 8 pages (both excluding references and appendices). Authors of accepted extended abstracts (non-archival submissions) retain full copyright of their work, and acceptance of such a submission to Gaze Meets ML does not preclude publication of the same material in another archival venue (e.g., a journal or conference).

FAQs

For a list of commonly asked questions, please see the FAQs.

Speakers

Tim Rolff, University of Hamburg
Vidhya Navalpakkam, Google Research
Bertram E. Shi, Ph.D., HKUST

Organizers

Amarachi Mbakwe, MS, Virginia Tech
Joy Tzung-yu Wu, MD, MPH, Stanford / IBM Research
Dario Zanca, Ph.D., Friedrich-Alexander-Universität Erlangen-Nürnberg
Elizabeth A. Krupinski, PhD, FSPIE, FSIIM, FATA, FAIMBE, Emory University
Satyananda Kashyap, Ph.D., IBM Research
Alexandros Karargyris, Ph.D., MLCommons

Program Committee

  • Aakash Bansal (Notre Dame)
  • Akshita Singh (Dana-Farber Cancer Institute)
  • Benedikt W. Hosp (University of Tübingen)
  • Brendan David-John (Virginia Tech)
  • Cihan Topal (Istanbul Teknik Üniversitesi)
  • Christian Chukwudi Mathew (Virginia Tech)
  • Daniel Gruhl (IBM Research)
  • Efe Bozkir (University of Tübingen)
  • Ehsan Degan (IBM Research)
  • Florian Strohm (University of Stuttgart)
  • G Anthony Reina (Resilience)
  • Henning Müller (HES-SO Valais)
  • Jason Li (MIT)
  • Julia Alekseenko (IHU Strasbourg)
  • Junwen Wu (Intel)
  • Kamran Binaee (Magic Leap)
  • Ken C. L. Wong (IBM Research)
  • Leo Schwinn (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • Mansi Sharma (Intel)
  • Maria Xenochristou (Stanford University)
  • Matteo Tiezzi (University of Siena)
  • Mehdi Moradi (Google)
  • Neerav Karani (MIT CSAIL)
  • Niharika Shimona D'Souza (IBM Research)
  • Nishant Rai (Stanford University)
  • Prashant Shah (Intel)
  • Ricardo Bigolin Lanfredi (NIH)
  • Sameer Antani (NIH)
  • Sema Candemir (Eskişehir Technical University)
  • Sivarama Krishnan Rajaraman (NIH)
  • Sokratis Makrogiannis (Delaware State University)
  • Spyridon Bakas (University of Pennsylvania)
  • Swati Jindal (University of California Santa Cruz)
  • Szilard Vajda (Central Washington University)
  • Thomas Altstidl (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • Toheeb Balogun (University of California San Diego)
  • Zhiyun Xue (NIH)

Endorsements & Acknowledgements

We are a MICCAI-endorsed event.