Google is investing $1B to power the future of AI education across U.S. colleges. With hands-on tools, cloud resources, and Gemini-powered learning, students and educators gain real-world AI skills aligned with today’s workforce.

https://graycyan.ai/google-ai-training/

#AITraining #GoogleForEducation #GeminiAI #HigherEd #AISkills #FutureOfWork #WorkforceDevelopment #AIInnovation #HonestAI

Got introspective while mowing the lawn today: machines can be great, but the #ai boom is so skewed. As a demo, I still love this 2016 paper by Ribeiro, Singh, and Guestrin.
https://arxiv.org/abs/1602.04938

If that doesn't work, I just get a decision tree out and have a large group time themselves in a robust session of 20 questions... it's the counting tasks we should be handing off to the machines, not the rest of it. #artificialintelligence #machinelearning #computing #academichonesty #honestai

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
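The core idea in the abstract (fit an interpretable model locally around one prediction) is simple enough to sketch. Below is a minimal, hedged illustration of the LIME-style recipe, not the authors' implementation: the black-box model, perturbation scale, and proximity kernel are all toy assumptions chosen for the demo. Perturb the instance, query the black box, weight samples by closeness, and fit a weighted linear surrogate whose coefficients serve as the local explanation.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Toy nonlinear "classifier" probability, standing in for any model.
    # It depends mostly on feature 0 (an assumption for illustration).
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def lime_style_explain(x, n_samples=500, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x.

    Returns per-feature weights approximating black_box near x.
    """
    d = x.shape[0]
    # 1. Perturb the instance of interest.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    # 2. Query the black box on the perturbed samples.
    y = black_box(Z)
    # 3. Weight samples by proximity to x (Gaussian kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Weighted least squares: solve for a local linear model.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
    return coef[:-1]  # drop intercept; keep per-feature weights

x0 = np.array([0.2, 0.1])
weights = lime_style_explain(x0)
print(weights)  # feature 0 should dominate the local explanation
```

The real paper adds interpretable representations (e.g. super-pixels for images, word presence for text) and a complexity penalty on the surrogate; this sketch keeps only the perturb-weight-fit loop that makes the method model-agnostic.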

#Bard says #Google is a monopoly and the government should break it up 😂
#honestAI https://futurism.com/the-byte/googles-new-ai-google-monopoly-government
Google's New AI Says Google Is a Monopoly and the Government Should Break It Up

Apparently, in the Justice Department's legal battles against Google over monopoly concerns, Google's AI-powered Bard chatbot is siding with the government.
