CVE Alert: CVE-2025-15109 - jackq - XCMS - https://www.redpacketsecurity.com/cve-alert-cve-2025-15109-jackq-xcms/
#OSINT #ThreatIntel #CyberSecurity #cve-2025-15109 #jackq #xcms
Since the 1970s, data processing units and computers have been sold alongside mass spectrometers. Over the past two decades, the concept of computational metabolomics has gained traction in the scientific literature, with software capabilities evolving dramatically. These advancements have not only accelerated existing tasks but have also enabled researchers to address entirely new challenges.

In the early days, manual inspection of chromatograms was standard practice. A major turning point for metabolite profiling came with xcms, one of the first open-source tools to implement a complete workflow from feature detection to univariate statistics. Over time, additional algorithms for feature detection, grouping, ion annotation, and more were developed. Research groups from all over the world have since contributed to the ever-expanding metaRbolomics ecosystem, with new tools and extensions available via CRAN and Bioconductor.

With robust metabolite profiling workflows in place, the next challenge was metabolite identification and annotation. What once required days or weeks to identify a single metabolite can now be accelerated through computational approaches, enabling the annotation of (almost) all MS/MS spectra faster than the measurements themselves. So, what's left to be done? An increasing challenge was, and still is, interpreting the data in a biological and biomedical context. While the workflow steps above have been successfully automated, this final step still relies heavily on human expertise, intuition, and domain knowledge. Let's see what comes next.

Social media: Don't panic! is also the recommendation when it comes to computational metabolomics, which not only gets things done faster but also opens new avenues to process and interpret metabolomics data.
We just added a #reproducible example for large-scale #metabolomics data preprocessing using the @bioconductor #xcms package to @phili's #Metabonaut tutorials resource!
https://rformassspectrometry.github.io/Metabonaut/articles/large-scale-analysis.html
Runs even on a standard laptop, thanks to the new memory-saving processing in #xcms, using the largest (n = 4000) data set from MetaboLights.
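The memory-saving preprocessing advertised above builds on keeping the raw spectra on disk rather than in memory. A minimal sketch of such a setup, assuming the standard MsExperiment/xcms import route (file names and peak-picking parameters below are placeholders, not from the post):

```r
## Sketch only: file names and centWave settings are illustrative assumptions.
library(MsExperiment)
library(xcms)

## Import mzML files into an MsExperiment; spectra stay on disk, which keeps
## memory use low even for data sets with thousands of files.
mse <- readMsExperiment(spectraFiles = c("sample_01.mzML", "sample_02.mzML"))

## Chromatographic peak detection with centWave; tune peakwidth and ppm to
## the chromatographic and instrumental setup.
cwp <- CentWaveParam(peakwidth = c(5, 30), ppm = 10)
mse <- findChromPeaks(mse, param = cwp)
```

The on-disk backend is the design choice that makes the n = 4000 MetaboLights example feasible on a standard laptop: only the spectra currently being processed are loaded.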
Updates from our continuous development of the #xcms #rstats
#metabolomics package:
- retention time alignment against an external data set
- chromatographic peak quality metrics
- performance improvements
All available in the current version in @bioconductor release 3.21.
Up next: memory-saving analysis of very large data sets!
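The alignment-against-an-external-data-set feature listed above can be sketched as follows. This is a hedged illustration: `LamaParama` and the shape of the `lamas` landmark table are assumptions about the new interface, and the m/z and retention time values are invented; consult the xcms documentation for the exact signature.

```r
## Sketch only: landmark values and parameter names are illustrative.
library(xcms)

## Landmarks ("lamas"): m/z / retention time pairs of features from a
## previously processed external data set to align the new data against.
lamas <- cbind(mz = c(105.0546, 147.0764, 203.0532),
               rt = c(112.3, 245.8, 301.4))

## Align retention times of the current experiment to the external landmarks.
prm <- LamaParama(lamas = lamas)
mse <- adjustRtime(mse, param = prm)
```

Aligning against a fixed external reference, rather than within a batch, is what allows new measurements to be mapped onto an existing, already-processed cohort.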
Sharing our Metabonaut resource:
A collection of comprehensive tutorials for LC-MS/MS #metabolomics data analysis in R,
by @phili et al.
Learn raw data processing, annotation & stats with #xcms, #RforMassSpectrometry & @bioconductor - all reproducible & community-driven! #rstats
And @phili with her poster on a complete end-to-end workflow for untargeted #metabolomics data analysis in #rstats with @bioconductor and #RforMassSpectrometry #xcms etc.
#MetSoc2024 poster #1008
https://doi.org/10.5281/zenodo.11370612
Although untargeted LC-MS/MS is a powerful approach for large-scale metabolomics analysis, a significant challenge in the field lies in the reproducible and efficient analysis of such data. The power of R-based analysis workflows lies in their high customizability and adaptability to specific instrumental and experimental setups, but while various specialized packages exist for individual analysis steps, their seamless integration and application to large cohort datasets remains elusive. Addressing this gap, we present a comprehensible end-to-end R workflow that leverages xcms and packages of the RforMassSpectrometry environment to encompass all aspects of pre-processing and downstream analyses for LC-MS/MS datasets in a reproducible manner.

This poster delineates a step-by-step analysis of an example untargeted metabolomics dataset, tailored to quantify the small polar metabolome in human plasma samples and aimed at identifying differences between individuals suffering from cardiovascular disease and healthy controls. The objective of the workflow is to meticulously detail each step, from the preprocessing of raw mzML files to the annotation of differentially abundant ions between the two groups. Our workflow seamlessly integrates Bioconductor packages, offering adaptability to diverse study designs and analysis requirements. It facilitates preprocessing, feature detection, alignment, normalization, statistical analysis, and annotation within a unified framework, thereby enhancing the efficiency of metabolomic investigations. We also discuss alternative approaches to accommodate various datasets and goals, while emphasizing proper quality management for LC-MS data analysis.
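The preprocessing steps named in the abstract map onto the standard xcms verbs. A condensed sketch, assuming an imported `MsExperiment` object `mse` with a `group` column in its sample data (all parameter values are placeholders to be tuned per study):

```r
## Sketch of the core preprocessing chain; parameters are illustrative only.
library(xcms)

mse <- findChromPeaks(mse, param = CentWaveParam())      # peak detection
mse <- adjustRtime(mse, param = ObiwarpParam())          # retention time alignment
mse <- groupChromPeaks(mse, param = PeakDensityParam(
    sampleGroups = sampleData(mse)$group))               # correspondence analysis
mse <- fillChromPeaks(mse, param = ChromPeakAreaParam()) # gap filling
fmat <- featureValues(mse, value = "into")               # feature x sample matrix
```

The resulting feature matrix `fmat` is the input for the normalization, statistical analysis, and annotation steps that the abstract describes downstream.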
On my way to Vienna for the MassSpec Forum 2024 https://bit.ly/3T3LFEm !
Will have a workshop on how to handle and preprocess LC-MS data with our @bioconductor
packages #Spectra and #xcms!
Yes, we're still working on improving the #xcms @bioconductor
package for LC-MS #metabolomics data preprocessing!
This time: filter features based on quality criteria from @davidbroadhurst et al. (https://doi.org/10.1007/s11306-018-1367-3).
Great contribution from @phili!
More info: https://bit.ly/3UqmfSx
Next on the list: mzTab-M export.
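The quality-criteria filtering announced above can be sketched as follows. This is a hedged illustration: the `filterFeatures` call with `RsdFilter`/`DratioFilter`, the index arguments, and the thresholds are assumptions modeled on the Broadhurst et al. criteria; check the xcms documentation for the exact interface.

```r
## Sketch only: filter classes, argument names and thresholds are assumptions.
library(xcms)

## Identify pooled QC and study samples from the sample annotation.
qc_idx    <- which(sampleData(mse)$type == "QC")
study_idx <- which(sampleData(mse)$type == "study")

## Remove features with high relative standard deviation across QC injections.
mse <- filterFeatures(mse, filter = RsdFilter(threshold = 0.3,
                                              qcIndex = qc_idx))

## Remove features whose technical (QC) variance dominates the biological
## (study sample) variance, i.e. a high D-ratio.
mse <- filterFeatures(mse, filter = DratioFilter(threshold = 0.4,
                                                 qcIndex = qc_idx,
                                                 studyIndex = study_idx))
```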
Background: Quality assurance (QA) and quality control (QC) are two quality management processes that are integral to the success of metabolomics, including their application for the acquisition of high-quality data in any high-throughput analytical chemistry laboratory. QA defines all the planned and systematic activities implemented before samples are collected, to provide confidence that a subsequent analytical process will fulfil predetermined requirements for quality. QC can be defined as the operational techniques and activities used to measure and report these quality requirements after data acquisition.

Aim of review: This tutorial review will guide the reader through the use of system suitability and QC samples, why these samples should be applied and how the quality of data can be reported.

Key scientific concepts of review: System suitability samples are applied to assess the operation and lack of contamination of the analytical platform prior to sample analysis. Isotopically-labelled internal standards are applied to assess system stability for each sample analysed. Pooled QC samples are applied to condition the analytical platform, perform intra-study reproducibility measurements (QC) and to correct mathematically for systematic errors. Standard reference materials and long-term reference QC samples are applied for inter-study and inter-laboratory assessment of data.
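The intra-study reproducibility measurement from pooled QC samples mentioned above is typically the per-feature relative standard deviation (RSD = sd / mean) across repeated QC injections. A small worked example in base R with invented toy intensities, using a common 30 % cut-off:

```r
## Toy data: two features measured in three pooled QC injections.
qc_int <- rbind(F1 = c(1000, 1050, 980),  # stable feature
                F2 = c(500, 900, 300))    # unstable feature

## Per-feature relative standard deviation across the QC injections.
rsd <- apply(qc_int, 1, function(x) sd(x) / mean(x))

## Keep only features reproducibly measured in the QCs (RSD <= 30 %).
keep <- rsd <= 0.3
```

Here F1 passes the cut-off while F2, with intensities varying severalfold between injections, would be flagged as unreliable.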
#xcms version 4 is here!
Now we use the full power of the #Bioconductor #Spectra
package and the #RforMassSpectrometry package ecosystem!
#MassSpectrometry #Metabolomics
Available through @bioconductor release 3.18
A small tutorial showcasing this update: https://bit.ly/3QxVxFf
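With xcms 4 delegating raw data handling to the Spectra package, spectra can be inspected directly through that interface. A minimal sketch, assuming an mzML file on disk (the file name is a placeholder):

```r
## Sketch only: "sample.mzML" is a placeholder file name.
library(Spectra)

## Create a Spectra object backed by the mzML file on disk (peaks are read
## on demand rather than loaded into memory).
sps <- Spectra("sample.mzML", source = MsBackendMzR())

msLevel(sps)  # MS level of each spectrum
rtime(sps)    # retention time of each spectrum
```

Building on Spectra is what connects xcms to the rest of the RforMassSpectrometry ecosystem, since all packages share the same spectra representation.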
Tutorials and workshops describing LC-MS(/MS) data pre-processing and analysis using the xcms Bioconductor package (GitHub: jorainer/xcmsTutorials).
My view during my workshop on #LC-MS #Metabolomics data analysis with the @bioconductor #xcms
package during the Munich Metabolomics Meeting.
Reason: the HDMI connection was only available next to the audio system at the back of the lecture hall.