Apart from the #WassersteinDistance (#EMD), other #metrics also play an important role in #MachineLearning tasks such as #clustering, #classification, and #InformationRetrieval. In this tutorial, you can find a discussion of five commonly used metrics: EMD, #KullbackLeiblerDivergence (KL Divergence), #JensenShannonDivergence (JS Divergence), #TotalVariationDistance (TV Distance), and #BhattacharyyaDistance.
🌎 https://www.fabriziomusacchio.com/blog/2023-07-28-probability_density_metrics/
Probability distance metrics in machine learning
Probability distance metrics play a crucial role in a broad range of machine learning tasks, including clustering, classification, and information retrieval. The choice of metric is often determined by the specific requirements of the task at hand, with each metric having unique strengths and characteristics. In this post, we discuss five commonly used metrics: the Wasserstein Distance, the Kullback-Leibler Divergence (KL Divergence), the Jensen-Shannon Divergence (JS Divergence), the Total Variation Distance (TV Distance), and the Bhattacharyya Distance.
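As a quick illustration of the five metrics, here is a minimal sketch computing each of them for two small, made-up discrete distributions. The example values `p` and `q` are assumptions for demonstration only; the Wasserstein, KL, and JS computations use SciPy, while the Total Variation and Bhattacharyya distances are short enough to write directly from their definitions.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from scipy.spatial.distance import jensenshannon

# Two illustrative discrete distributions over the same support (made-up values)
p = np.array([0.1, 0.4, 0.5])
q = np.array([0.3, 0.4, 0.3])
support = np.arange(len(p))

# Wasserstein Distance (EMD): minimal "work" to transport mass from p to q
emd = wasserstein_distance(support, support, u_weights=p, v_weights=q)

# KL Divergence: asymmetric; requires q > 0 wherever p > 0
kl = entropy(p, q)

# JS Divergence: symmetrized, smoothed KL; scipy returns its square root,
# so we square the result to get the divergence itself
js = jensenshannon(p, q) ** 2

# Total Variation Distance: half the L1 distance between the two pmfs
tv = 0.5 * np.abs(p - q).sum()

# Bhattacharyya Distance: negative log of the Bhattacharyya coefficient
bc = np.sum(np.sqrt(p * q))
bhatt = -np.log(bc)

print(f"EMD={emd:.4f}, KL={kl:.4f}, JS={js:.4f}, TV={tv:.4f}, Bhatt={bhatt:.4f}")
```

Note that the KL divergence is not symmetric (`entropy(p, q) != entropy(q, p)` in general), which is one motivation for the symmetric JS divergence discussed in the post.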