I am assuming they did not have money issues. But caring for a 95-year-old, three dogs, and a large house is a full-time job. Women, of course, have been known to take on much heavier burdens in the name of love for a man. But still? Betsy, why didn't you have some help? You would still be alive today. You could have lived decades more.
And yet, Ms. Arakawa was doing it all on her own. The cooking, the medication schedule, the caring, the laundry, the cleaning, the bills, the three massive dogs. Why? Did she prefer to do it all alone? Or did her husband rely on her because he didn't want to have strangers coming to the house?

This probably means they employed no regular housekeeper, cook, butler, maid, or other aide within the house.

This means one thing: the day-to-day care of a frail 95-year-old husband, three massive dogs, and a 10,000 sqft home was entrusted to Ms. Arakawa alone. They could easily have chosen to have at least part-time regular help.

Gene Hackman and Betsy Arakawa were found dead in their Santa Fe home many days after their last contact with anyone. One of their dogs also died. Some have criticized Hackman's children for not checking in on their father: but adult children have their own lives. No, what bugs me about this is something else.

Betsy Arakawa and Gene Hackman were not found earlier because they lived in isolation. Maintenance workers had last had contact with Arakawa two weeks earlier.

***Betsy Arakawa and Gene Hackman: A Feminist Perspective***

If your heart aches for the 95-year-old man with Alzheimer's who was well-cared for throughout his life and had to live out his last week in confusion and solitude, why does your heart not break for the woman who took care of him for years, maybe decades, alone - and could have made it out alive but didn't?

Dear Time magazine, here's what I prepared for you.

@mishari "We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT"

https://arxiv.org/abs/2311.17035

Scalable Extraction of Training Data from (Production) Language Models

This paper studies extractable memorization: training data that an adversary can efficiently extract by querying a machine learning model without prior knowledge of the training dataset. We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT. Existing techniques from the literature suffice to attack unaligned models; in order to attack the aligned ChatGPT, we develop a new divergence attack that causes the model to diverge from its chatbot-style generations and emit training data at a rate 150x higher than when behaving properly. Our methods show practical attacks can recover far more data than previously thought, and reveal that current alignment techniques do not eliminate memorization.
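At its core, the paper's notion of extractable memorization comes down to verbatim overlap: a generation counts as extracted training data if it reproduces a long enough contiguous span from the training corpus. A minimal toy sketch of that check (whitespace tokens and a tiny k for illustration; the actual evaluation uses the model's tokenizer and ~50-token spans):

```python
# Toy sketch of the verbatim-overlap check for extractable memorization.
# A generation is flagged as "extracted" if any k-token window in it
# appears verbatim in the training corpus. Real evaluations use model
# tokenizers and k on the order of 50 tokens; this uses whitespace tokens.

def kgrams(tokens, k):
    """All contiguous k-token windows of a token list, as a set."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def is_extracted(generation, corpus, k=5):
    """True if the generation shares a verbatim k-gram with any corpus doc."""
    corpus_grams = set()
    for doc in corpus:
        corpus_grams |= kgrams(doc.split(), k)
    return any(g in corpus_grams for g in kgrams(generation.split(), k))

corpus = ["the quick brown fox jumps over the lazy dog"]
print(is_extracted("he wrote the quick brown fox jumps again", corpus))   # True
print(is_extracted("an entirely novel sentence with no overlap", corpus))  # False
```

The set-based lookup makes the check linear in the total number of tokens, which is what makes scanning gigabytes of generations against a large corpus feasible in practice.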


If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle

2/

Pluralistic: Humans are not perfectly vigilant (01 Apr 2024) – Pluralistic: Daily links from Cory Doctorow

It’s Women’s History Month, always a great opportunity to remind people that “when you’re accustomed to privilege, equality feels like oppression.”

#inspiringfifty2024italy is here!

Nominate now 👉 https://www.eventsforce.net/informatech/frontend/reg/thome.csp?pageID=19499&eventID=44 to celebrate the top 50 Italian women in tech and innovation.
Nomination Deadline: May 15th.

Our exceptional jury will recognize the top 50 on September 7th in Milan.
More info: 👉 tmt.knect365.com/eql-her/italy/

InspiringFifty