AI has not yet repl...
Bees, those fascinating little creatures, may well revolutionize our understanding of artificial intelligence (AI) and robotics. Researchers have shed light on how bees use their flight movements to improve visual learning and recognition. What seemed like a simple wingbeat could hold the keys to
Google has just launched Doppl, an innovative application that lets users try on clothes virtually using artificial intelligence. The technology promises an immersive, personalized experience for users looking to refresh their wardrobe online.
I'm excited to share our #ML work, where we learn ultrasound video classifiers from a very limited amount of training data (<15 positive videos). We use insights from doctors and a representation learning approach called sparse coding to achieve high performance on two lung datasets. Our system delivers results in <4 sec on an iPad and <8 sec on an iPhone, and provides outputs to support human interpretation.
This work will appear at the upcoming #IAAI / #AAAI conference: https://arxiv.org/abs/2212.03282
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that aid medical professionals in diagnosing whether a patient has a pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty of acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results on two lung ultrasound datasets and demonstrate that our model achieves performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
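To illustrate the general idea of classifying in sparse-code space under a small-data regime, here is a minimal, self-contained sketch. It is not the authors' 3D sparse coding pipeline: the dictionary is random rather than learned, the "video features" are synthetic toy vectors, the sparse codes are computed with a simple ISTA loop, and the classifier is a hypothetical nearest-centroid rule over the codes. The 15-positive / 32-negative split mirrors the dataset sizes reported in the abstract; everything else (dimensions, `lam`, the `predict` helper) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_sparse_code(X, D, lam=0.1, n_iter=100):
    """Sparse-code columns of X against dictionary D via ISTA:
    min_z 0.5*||x - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)                # gradient of the quadratic term
        Z = Z - grad / L                        # gradient step
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft threshold
    return Z

# Toy stand-in for extracted video features: 64-dim vectors, two classes.
d, n_atoms = 64, 32
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms

def sample_class(mean_shift, n):
    return rng.normal(loc=mean_shift, scale=1.0, size=(d, n))

# Small-data regime from the abstract: 15 positive, 32 negative examples.
X_pos, X_neg = sample_class(1.0, 15), sample_class(-1.0, 32)
Z_pos, Z_neg = ista_sparse_code(X_pos, D), ista_sparse_code(X_neg, D)

# Hypothetical nearest-centroid classifier in sparse-code space.
c_pos, c_neg = Z_pos.mean(axis=1), Z_neg.mean(axis=1)

def predict(x):
    z = ista_sparse_code(x[:, None], D)[:, 0]
    near_pos = np.linalg.norm(z - c_pos) < np.linalg.norm(z - c_neg)
    return "pneumothorax" if near_pos else "normal"
```

Classifying in code space rather than pixel space is one way a restricted hypothesis space can keep a classifier learnable from only a few dozen examples, which is the design pressure the abstract describes.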