(1/4) I've been thinking about the learning opportunities presented by ChatGPT. In addition to how the technology raises questions about how to teach and assess writing, I think ChatGPT offers educators and students a chance to develop their technoskepticism by analyzing the sociotechnical systems and ideologies that support ChatGPT.

(2/4) The #CivicsOfTechnology project has a set of questions on its curriculum page for cultivating technoskepticism. The questions are: 

1. What does society give up for the benefits of the technology?
2. Who is harmed and who benefits from the technology?
3. What does the technology need?
4. What are the unintended or unexpected changes caused by the technology?
5. Why is it difficult to imagine the world without the technology?

Visit the entire curriculum collection: https://civicsoftechnology.squarespace.com/curriculum


(3/4) I think educators can then pair the questions with a closer look at the harmful ideologies underpinning OpenAI: effective altruism and longtermism. Timnit Gebru's recent article in Wired could be coupled with one or two episodes of the podcast Tech Won't Save Us.

From Timnit Gebru: https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/

From Tech Won't Save Us: https://techwontsave.us/episode/138_dont_fall_for_the_longtermism_sales_pitch_w_emile_torres

(4/4) Thank you for attending my public thinking-it-through thread, a process that's especially helpful since yours truly is still brainstorming topics for a course on school, technology, and power. To that end, how else might y'all engage youth (say, ages 15-22) on issues related to EA, longtermism, and OpenAI, especially insofar as they intersect with young people's experiences at school?