Pablo Rivas

65 Followers
71 Following
13 Posts
Assistant professor of computer science at Marist University. 🇲🇽 🇺🇸 I do NLP and CV research and care about developing safe and robust AI. I direct the Center for Responsible AI and Governance.
Website: https://rivas.ai/
LinkedIn: https://www.linkedin.com/in/docrivas/
Google Scholar: http://scholar.google.com/citations?user=kYfaCFMAAAAJ
CSEAI: http://cseai.center/
Ever wondered how generalization bounds in quantum machine learning compare to classical #ML theory? In classical ML, tools like Hoeffding's inequality provide theoretical guarantees on out-of-sample risk based on the amount of data and the model's complexity. But in the era of noisy quantum devices, do these bounds still hold, or does noise redefine them? Dive into our latest survey exploring the state of generalization in #QML and the challenges noise introduces.
Read more:
Blog - https://buff.ly/3ZfrVQ0
Paper - https://buff.ly/3B62JUm
Navigating the Noisy Quantum Landscape: A Look at Generalization in Quantum Machine Learning

Quantum Machine Learning (QML) holds the promise of revolutionizing how we process and learn from data, leveraging the unique properties of quantum systems like superposition and entanglement. Yet,…
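
For context, the classical guarantee the post alludes to is the standard Hoeffding-style generalization bound (this statement is a textbook form, not taken from the survey itself): for a finite hypothesis class H and n i.i.d. samples, with probability at least 1 - δ, every h in H satisfies

```latex
R(h) \;\le\; \hat{R}(h) \;+\; \sqrt{\frac{\ln |\mathcal{H}| + \ln(2/\delta)}{2n}}
```

where R(h) is the out-of-sample (true) risk and R̂(h) the empirical risk. The quantum question the survey takes up is whether hardware noise changes the effective complexity term or the rate in n.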

BAYLOR AI
This Thanksgiving, we're honoring the ML breakthroughs shaping the field—Transformers, BERT, GANs, and more. From NLP to generative AI, these innovations are worth celebrating and using responsibly. Read more: https://buff.ly/3OtI9QH #AI #MachineLearning #Thanksgiving
Giving Thanks for the Pioneering Advances in Machine Learning

This Thanksgiving, as we reflect on what we’re thankful for, let’s celebrate the groundbreaking advances in machine learning that have shaped the field and impacted our lives. From the …

BAYLOR AI
D-ReLU: A breakthrough in robust AI, designed to defend against adversarial attacks while maintaining efficiency and scalability. This research, led by Korn Sooksatra (now at Meta), has implications for high-stakes AI applications. Blog: https://buff.ly/4fC9GeP Full paper: https://buff.ly/3UXzNVi #ResponsibleAI #SafeAI #AdversarialML
Resilient AI: Advancing Robustness Against Adversarial Threats with D-ReLU

This article explores D-ReLU, an advanced modification of the ReLU activation function, designed to improve the robustness of AI models against adversarial attacks. By incorporating adaptive, learn…

BAYLOR AI
Optimizing Fairness and Robustness in Machine Learning Models. Keynote at #NeurIPS 2022 - #LXAI workshop. #TrustworthyAI #RobustAI #EthicalAI #SafeAI #ResponsibleAI
https://baylor.ai/?tag=ai-orthopraxy
AI Orthopraxy – BAYLOR AI

BAYLOR AI
🚗🔧 Our undergraduate research team at Baylor University is leveraging Vision Transformers (ViT) to detect patterns in online car part sales that could indicate illicit activity. Learn how we're making a difference in the fight against organized crime.
👉 Read more: https://buff.ly/3U5s6vz
#BaylorUniversity #AIResearch #MachineLearning #Cybersecurity #CarParts #Innovation
Uncovering Patterns in Car Parts – A Step Towards Combating a Cybercrime

The black market for stolen car parts is a significant problem, exacerbated by the rise of online marketplaces like Craigslist or OfferUp, where stolen goods are often sold under the radar. In resp…

BAYLOR AI

Dataset documentation fans, please check out "Data Statements: From Technical Concept to Community Practice" (McMillan-Major, Bender & Friedman 2023) -- reporting on how we took data statements v1 to v2 through learning with and from practitioners.

https://dl.acm.org/doi/10.1145/3594737

#nlp #DataDocumentation #ethnlp

Data Statements: From Technical Concept to Community Practice | ACM Journal on Responsible Computing

Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine ...

ACM Journal on Responsible Computing
Tomorrow, Monday, I'll give a keynote on Trustworthy AI from a fairness and adversarial robustness point of view at the #LXAI Workshop of the @NeuripsConf.
How does robustness relate to fairness?
How do you quantify robustness?
See you in New Orleans to chat!
#ResponsibleAI
Kitty Rivas is keeping us company this #caturday morning as we ponder whether we should write #iclr2023 rebuttals or call it quits.

Testing posting a #toot with an attached image.

It seems simple enough.

If you'll be at #NeurIPS2022, don't be a stranger 🙈 and come visit. Also come to this workshop to see, engage with, and support #LatinX machine learning research. 🤓🐻
https://www.latinxinai.org/neurips-2022