Backticks in Julia to denote columns containing spaces?
☑️ Backticks in Julia
(Coming to Tidier.jl as soon as I write some docs to show how this works)
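For anyone curious, here's roughly what this looks like. A minimal sketch: the data frame and column names are made up, and the exact Tidier.jl macro behavior may differ until the docs land.

```julia
using Tidier, DataFrames

# A data frame whose column names contain spaces
df = DataFrame("Patient ID" => 1:3, "Systolic BP" => [118, 142, 127])

# Backticks let you refer to those columns directly inside Tidier macros,
# instead of falling back on var"Systolic BP" string syntax
result = @chain df begin
    @filter(`Systolic BP` >= 130)
    @select(`Patient ID`, `Systolic BP`)
end
```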

| CarDS Lab @ Yale | https://www.cards-lab.org |
| Lab Mastodon | https://med-mastodon.com/@cards_lab |
There are horror stories on here about automatic rejection of grants because they had some URL
What to do w URLs in letters - say, from a collaborator who has http://something.yale.edu on their letterhead?
Is that ok? @[email protected] @[email protected] @[email protected]
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-20-174.html
Amazing technology on the horizon - Wi-Fi signals envelop us & can be used to determine human pose w #deeplearning
Could help in preventing/monitoring falls in older adults & those with needs.
Also better privacy over direct video recording
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
Happening now:
Dr. @[email protected] presenting at @[email protected] grand rounds: “Leveraging Digital Data and Digital Strategies for #Hypertension Control and Cardiovascular #Prevention”
Highlighting work on improving equity in CVD Prevention
What a scary sight to see a player collapse on live TV from what seemed like a cardiac arrest.
Prayers for Damar Hamlin - if it was Commotio Cordis - a fatal arrhythmia triggered by chest trauma at a specific point in the cardiac cycle - hopefully timely CPR will improve the outcome
"OneFormer: One Transformer to Rule Universal Image Segmentation. (arXiv:2211.06220v2 [cs.CV] UPDATED)" — A universal image segmentation framework that unifies segmentation with a multi-task train-once design.
Paper: http://arxiv.org/abs/2211.06220
Code: https://github.com/SHI-Labs/OneFormer
#AI #CV #NewPaper #DeepLearning #MachineLearning
<<Find this useful? Please boost so that others can benefit too 🙂>>
Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible. To support further research, we open-source our code and models at https://github.com/SHI-Labs/OneFormer
Oh, wow
https://journa.host/@atrupar/109564424699543808
If your interface lets you look at a local timeline on another server — Toot! does — @journa.host appears to be chock full of real journalists and can give you all the fresh cut news fix you want
Hello everyone! I am now on the @journa server, so hopefully this becomes my Mast forever home. If you want to get up to speed about why I'm finally getting active here, here's my piece from earlier this week on my Twitter suspension and the fallout from it. https://aaronrupar.substack.com/p/aaron-rupar-twitter-suspension-elon-musk
Wishing everyone a Happy New Year!🎊
May 2023 bring you happiness, success, and just the right amount of chaos to keep things interesting 😃