#Article: Deep Learning by John D. Kelleher shared as 'good reading' by #IvankaTrump https://www.instagram.com/ivankatrump
#Article: Deep Learning by John D. Kelleher | #Goodreads https://www.goodreads.com/book/show/44512612-deep-learning
#ArticleSummary: "John D. Kelleher is a Professor of Computer Science and the Academic Leader of the Information, Communication, and Entertainment Research Institute at the Dublin Institute of Technology. Deep learning is an artificial intelligence technology that enables computer vision, speech recognition in mobile phones, machine translation, AI games, driverless cars, and other applications. When we use consumer products from Google, Microsoft, Facebook, Apple, or Baidu, we are often interacting with a deep learning system. Kelleher explains that deep learning enables data-driven decisions by identifying and extracting patterns from large datasets; its ability to learn from complex data makes deep learning ideally suited to take advantage of the rapid growth in big data and computational power. Kelleher also explains some of the basic concepts in deep learning, presents a history of advances in the field, and discusses the current state of the art. He describes the most important deep learning architectures, including autoencoders, recurrent neural networks, and long short-term memory networks, as well as such recent developments as Generative Adversarial Networks and capsule networks. He also provides a comprehensive introduction to the two fundamental algorithms in deep learning: gradient descent and backpropagation. Finally, Kelleher considers the future of deep learning: major trends, possible developments, and significant challenges."
By #www.smukher2.com #www.smukher2.eu #www.smukher2.co.uk #www.smukher2.org #www.smukher2.net #smukher2 to #Everyone:
Deep learning is a subset of machine learning, which is a type of artificial intelligence (AI) that involves training neural networks to recognize patterns in data. Deep learning algorithms are designed to mimic the way the human brain processes and learns from information. These algorithms can be used in various applications such as image and speech recognition, natural language processing, and autonomous vehicles. After briefly browsing through this great book recommended and kindly shared by #Ivanka, I decided to add it to the #fairwissenschaft Steam Books list. I borrowed this and other books by John D. Kelleher from digital libraries such as the Internet Archive, and this review is based on my initial speed read. The author has two books published by MIT Press and another listed below. Together, these books will be valuable in today's data-driven world of science, art, education, and research.
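Gradient descent, which the book summary above names as one of deep learning's two fundamental algorithms, can be sketched in a few lines of plain Python. This is an illustrative toy (a one-weight linear model on made-up data), not code from the book:

```python
# Minimal gradient descent: fit y = w * x to toy data by
# repeatedly stepping w against the gradient of the squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2 * x

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate (step size)

for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill along the gradient

print(round(w, 3))  # prints 2.0
```

Backpropagation is the same idea applied layer by layer through a deep network: the gradient of the loss is propagated backward via the chain rule so each weight can take this kind of downhill step.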
I appreciate that John Kelleher is fair in the everyday sense, i.e., he gives citations and acknowledgements, so he is not a plagiarist. But he is not FAIR (Findable, Accessible, Interoperable, and Reusable): unfortunately, this book does not provide code, nor do the author's recent published papers where he is a major author (first and/or corresponding author). Let me explain:
1) In the 2019 #Frontiers #FrontiersInNeuroscience paper titled "A U-Net deep learning framework for high performance vessel segmentation in patients with cerebrovascular disease", co-authored by John Kelleher, the authors state, "Researchers interested in the code and/or model can contact the authors and the data will be made available (either through direct communication or through reference to a public repository)." This is not a valid excuse, as data can be anonymized or pseudo data can be provided. In any case, it has been five years, and they have still not "made the data available" as stated in the paper.
2) Additionally, coding platforms are easily accessible thanks to Google Colab, Amazon AWS, and Anaconda, so excuses such as 'coding is very complicated' are not valid. Nothing prevents sharing code and data (anonymized or pseudo data), so they should be published upfront; otherwise, such educational and research work has no lasting value.
3) The scientific review process continues after publication: other scientists can reuse the work, with proper citation and acknowledgment, and thereby verify its validity. It is not unheard of for a published paper to be exposed as fake because the scientists cherry-picked data or results, or worse, manipulated them. For example, an fMRI paper published in #Nature #SpringerNature recently turned out to be fake when other scientists tried to verify it using the described method. The practice of not sharing data, code, or methods, be it dry lab or wet lab, is not in line with the FAIR principles that are essential for transparent and collaborative research.
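The 'pseudo data' option raised in point 1 is cheap to act on. As a purely hypothetical sketch (plain Python, not the paper's actual data or code), a synthetic image-and-mask pair standing in for unshareable patient scans can be generated and published alongside the code:

```python
import random

def make_pseudo_sample(size=8, seed=0):
    """Generate a synthetic 'image' and a matching binary 'mask',
    standing in for patient data that cannot be shared directly."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    image = [[rng.random() for _ in range(size)] for _ in range(size)]
    # mask labels pixels above a threshold, mimicking a vessel annotation
    mask = [[1 if px > 0.5 else 0 for px in row] for row in image]
    return image, mask

image, mask = make_pseudo_sample()
print(len(image), len(mask))  # prints 8 8
```

A reviewer or reader can run a shared model on such pseudo data end to end, which is exactly the kind of reusability FAIR asks for, even when the real clinical data must stay private.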
(post continued in comments)
