#Transposons #Retrotransposons #Ty1Copia #VLP #CryoET #Drosophila
https://www.cell.com/cell/fulltext/S0092-8674(25)00159-X
@VE3RWJ We could build an amazing mesh network in there somewhere. That's a lot of contiguous unlicensed bandwidth to have tossed at us. There are countless ways this could be utilized. I'm excited to see how it plays out.
Region #LWL #LudwigslustParchim:
So next Friday it's onto the moped after all, hoping not to hit a deer, wild boar, or wolf.
In service for passenger transport at #VLP: expensive electric four-seaters with a star on the front, instead of low-emission diesels with nine or more seats.
I'd bet the climate balance comes out better.
Meta partners with Lufthansa to offer VR/MR experiences in-flight; Google teases unannounced AR glasses — Weekly XR News Recap
https://www.moguravr.com/vr-ar-mr-weekly-2024-05-03/
#moguravr #業界動向 #Google #Meta #Meta_Quest_3 #Project_Starline #VLP #バーチャル_ラーニング_プラットフォーム #ルフトハンザドイツ航空 #レノボ_ジャパン #大日本印刷 #大日本印刷株式会社 #週間振り返りXRニュース
DNP (Dai Nippon Printing) and Lenovo Japan expand their education-focused metaverse; free trials for educational institutions also planned
https://www.moguravr.com/dnp-and-lenovo-japan-virtual-learning-platform-metaverse/
#moguravr #活用事例 #VLP #バーチャル_ラーニング_プラットフォーム #レノボ_ジャパン #大日本印刷
Went back to BLIP (https://arxiv.org/abs/2201.12086) last night. On my first skim I had zeroed in on the caption-bootstrapping part of the paper, but the "Multimodal mixture of Encoder-Decoder" architecture is pretty cool.
It uses a structured architecture with multiple encoders/decoders in which some components leverage others, e.g. the contrastive similarity scores are reused to mine hard negatives for the image-text matching loss (see the sketch after the abstract below).
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at https://github.com/salesforce/BLIP.
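The hard-negative-mining trick is easy to sketch. Below is a minimal, illustrative version assuming PyTorch and a batch of L2-normalized image/text embeddings; the function and variable names are mine, not BLIP's actual code, but the idea (sample ITM negatives in proportion to contrastive similarity) follows the paper:

```python
import torch
import torch.nn.functional as F

def mine_hard_negatives(image_feats, text_feats, temperature=0.07):
    """Reuse image-text contrastive similarities to pick hard negatives
    for the image-text matching (ITM) head.

    image_feats, text_feats: L2-normalized [batch, dim] embeddings,
    where row i of each tensor is a matching image-text pair.
    Returns, per image, the index of a similar *non-matching* text
    (and vice versa) to use as an ITM negative.
    """
    # Cosine similarity between every image and every text in the batch.
    sim = image_feats @ text_feats.t() / temperature  # [batch, batch]

    # Mask out the diagonal (the true positive pairs).
    batch = sim.size(0)
    mask = torch.eye(batch, dtype=torch.bool, device=sim.device)
    sim_i2t = sim.masked_fill(mask, float('-inf'))
    sim_t2i = sim.t().masked_fill(mask, float('-inf'))

    # Sample negatives in proportion to similarity: texts that look most
    # like an image, but aren't its caption, are the hardest negatives.
    weights_i2t = F.softmax(sim_i2t, dim=1)
    weights_t2i = F.softmax(sim_t2i, dim=1)
    hard_text_idx = torch.multinomial(weights_i2t, 1).squeeze(1)   # per image
    hard_image_idx = torch.multinomial(weights_t2i, 1).squeeze(1)  # per text
    return hard_text_idx, hard_image_idx
```

Sampling proportionally to similarity, rather than always taking the argmax, keeps some diversity in the negatives while still biasing toward the most confusable pairs in the batch.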
As a (re-) #introduction (#hashtags): I am a senior ML researcher at Microsoft Research in Cambridge (UK, aka "Original Cambridge") and part of Health Futures.
I work in ML for #health, in particular on #EHR data (especially #ICU time series), and these days am thinking about #biomedical vision-language processing (#vlp) in #radiology.