Wearable inertial measurement units (IMUs) are being used to quantify gait characteristics associated with increased fall risk, but a current limitation is the lack of contextual information to clarify IMU data. Wearable video-based cameras would provide a more comprehensive understanding of an individual's habitual fall risk by adding context to clarify abnormal IMU data. However, suggesting the use of wearable cameras to capture real-world video remains something of a taboo, with clinical and patient apprehension arising from ethical and privacy concerns. Accordingly, this perspective proposes that the routine use of wearable cameras could be realised within digital medicine through AI-based deep learning computer vision models that obfuscate (i.e., blur) sensitive information while preserving the helpful contextual information necessary for a comprehensive patient assessment. Specifically, no person sees the raw video data in the first instance; rather, AI interprets the raw video first, blurring sensitive objects to uphold privacy. That may be achieved more routinely than one imagines, as contemporary resources already exist when a multidisciplinary approach to digital medicine is leveraged. Here, to showcase the potential, an exemplar model built from off-the-shelf methods is suggested to detect and blur sensitive objects (e.g., people) with an accuracy of 88%. The benefit of the proposed approach is a more comprehensive understanding of an individual's free-living fall risk (from free-living IMU-based gait) without compromising privacy. More generally, the video and AI approach could be used beyond fall risk to better inform the habitual experiences and challenges of a range of clinical cohorts. Medicine is becoming more receptive to wearables as a helpful toolbox, and camera-based devices should be plausible instruments within it.
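The detect-then-obfuscate pipeline described above can be sketched in a few lines. This is a minimal illustration, not the exemplar model itself: `detect_people` is a hypothetical stub standing in for an off-the-shelf deep learning detector, pixelation stands in for Gaussian blurring, and all names are assumptions for illustration. The key property shown is that raw frames are only ever consumed by the AI step, so a human reviewer sees frames with sensitive regions already destroyed while the surrounding scene context survives.

```python
import numpy as np

def detect_people(frame):
    """Hypothetical stub for an off-the-shelf person detector.

    In a real pipeline this would run a pre-trained deep learning model
    and return (x, y, w, h) bounding boxes; here a fixed box is returned
    purely for illustration.
    """
    return [(10, 10, 16, 24)]

def obfuscate(frame, boxes, k=8):
    """Pixelate each bounding box in a H x W x 3 frame.

    Each k x k block inside a box collapses to a single value, making
    identities unrecoverable while pixels outside the boxes (the scene
    context) are left untouched.
    """
    out = frame.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        small = roi[::k, ::k]  # downsample the region
        big = np.repeat(np.repeat(small, k, axis=0), k, axis=1)
        out[y:y + h, x:x + w] = big[:roi.shape[0], :roi.shape[1]]
    return out

def privacy_pipeline(frame):
    """AI interprets the raw frame first; only the obfuscated frame is returned."""
    return obfuscate(frame, detect_people(frame))
```

In a deployed system the stub would be replaced by a trained detector, and frames would be obfuscated on-device before any human review, so the raw video never needs to be inspected directly.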