Dual Representation Learning for Out-of-distribution Detection

Zhilin Zhao, Longbing Cao

Action editor: Matthew Blaschko.

https://openreview.net/forum?id=PHAr3q49h6

#discriminative #deep #classify

To classify in-distribution samples, deep neural networks explore strongly label-related information and discard weakly label-related information according to the information bottleneck....
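As a hedged illustration of the out-of-distribution detection task the abstract above describes (this is the standard maximum-softmax-probability baseline, not the paper's dual-representation method; the function names and threshold are hypothetical):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # maximum softmax probability: higher means more confidently in-distribution
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    # flag samples whose top-class confidence falls below a chosen threshold
    return msp_score(logits) < threshold

# peaked logits vs. nearly flat logits
logits = np.array([[5.0, 0.1, 0.1],    # confident -> in-distribution
                   [0.4, 0.5, 0.45]])  # flat -> likely out-of-distribution
print(is_ood(logits))  # [False  True]
```

The threshold would in practice be calibrated on held-out in-distribution data; methods like the paper's improve on this baseline by shaping the representation itself.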

New #SurveyCertification:

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

Utku Ozbulak, Hyun Jung Lee, Beril Boga et al.

https://openreview.net/forum?id=Ma25S4ludQ

#supervised #discriminative #generative

Although supervised learning has been highly successful in improving the state-of-the-art in the domain of image-based computer vision in the past, the margin of improvement has diminished...

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

Utku Ozbulak, Hyun Jung Lee, Beril Boga et al.

Action editor: Neil Houlsby.

https://openreview.net/forum?id=Ma25S4ludQ

#supervised #discriminative #generative


'ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction', by Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma.

http://jmlr.org/papers/v23/21-0631.html

#convolutions #convolutional #discriminative


Has anyone worked on methods for supporting #auditors or governance representatives / #impactassessments in determining whether the output of an #algorithmicsystems is "acceptable"? Say an #automatedhiringsystem has proven to be #discriminative (i.e., discriminatory). Developers can adjust this, but only to a degree of 90% "fairness" without losing system "efficiency". Who is to determine whether that is socially, ethically, and morally "acceptable"? And how? @digitalisationofwork
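One concrete way a "degree of 90% fairness" like the figure in the question above might be quantified is the demographic parity ratio (related to the four-fifths rule in US hiring guidance). A minimal sketch with hypothetical hiring outcomes; the function names and data are assumptions, not any auditor's actual metric:

```python
def selection_rate(decisions):
    # fraction of applicants in a group who received a positive decision
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(group_a, group_b):
    # ratio of the lower selection rate to the higher one; 1.0 means parity
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# hypothetical outcomes: 1 = hired, 0 = rejected
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # selection rate 0.8
group_b = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.7

print(round(demographic_parity_ratio(group_a, group_b), 3))  # 0.875
```

The metric only makes the trade-off measurable; whether a ratio of 0.875 (or 0.9) is socially and morally "acceptable" is exactly the normative question the post raises, which no metric can answer by itself.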