TY - JOUR
T1 - Exploring the Benefits and Applications of Video-Span Selection and Search for Real-Time Support in Sign Language Video Comprehension among ASL Learners
AU - Hassan, Saad
AU - de Lacerda Pataca, Caluã
AU - Al Amin, Akhter
AU - Nourian, Laleh
AU - Navarro, Diego
AU - Lee, Sooyeon
AU - Gordon, Alexis
AU - Watkins, Matthew
AU - Tigwell, Garreth W.
AU - Huenerfauth, Matt
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/9
Y1 - 2024/9
N2 - People learning American Sign Language (ASL) and practicing their comprehension skills often encounter complex ASL videos that may contain unfamiliar signs. Existing dictionary tools require users to isolate a single unknown sign before initiating a search, either by selecting linguistic properties or by performing the sign in front of a webcam. This process poses several challenges: extracting and reproducing unfamiliar signs is difficult, the video-watching experience is disrupted, and learners must rely on external dictionaries. We explore a technology that allows users to select and view dictionary results for one or more unfamiliar signs while watching a video. We interviewed 14 ASL learners to understand the challenges they face in comprehending ASL videos, their strategies for dealing with unfamiliar vocabulary, and their expectations for an in situ dictionary system. We then conducted an in-depth analysis with eight learners to examine their interactions with a Wizard-of-Oz prototype during a video comprehension task. Finally, we conducted a comparative study with six additional ASL learners to evaluate the speed, accuracy, and workload benefits of a dictionary-search feature embedded within a video player. Our tool outperformed a baseline, an existing online dictionary, across all three metrics. The integration of a search tool and span selection offered advantages for video comprehension. Our findings have implications for designers, computer vision researchers, and sign language educators.
UR - https://www.scopus.com/pages/publications/105024210065
U2 - 10.1145/3690647
DO - 10.1145/3690647
M3 - Article
AN - SCOPUS:105024210065
SN - 1936-7228
VL - 17
JO - ACM Transactions on Accessible Computing
JF - ACM Transactions on Accessible Computing
IS - 3
M1 - 14
ER -