OCO 2 and 3 🛰 at risk. And with it, also our food security and more...
"[...] the thing that gets us to the thing"
"After water, sand is the second most used material in the world. Each year, approximately 40-50 billion metric tons of sand are consumed worldwide. This accounts for 79% of all aggregates extracted and traded, making sand the literal foundation for global human infrastructure. Sand plays a vital role in the production of glass, steel, and concrete.[...]"
https://2025.trienaldelisboa.com/en/essays/granular-power-the-gritty-politics-of-sand
"[...] introduced three envisioned use cases for a space datacenter using GEO satellites along with two key technologies that enable these use cases and contribute to reducing communication volume with the ground and improving the real-time performance of data analysis: event-driven AI inference for SAR data and lightweight change detection technology."
#NTT #space #AI
https://www.ntt-review.jp/archive/ntttechnical.php?contents=ntr202506fa2_s.html
《It will also cover “realistic, digitally generated imitations” of an artist’s performance without consent. Violation of the proposed rules could result in compensation for those affected.
The government said the new rules would not affect parodies and satire, which would still be permitted.》
"[...] our initial evaluations of NHC’s observed hurricane data, on test years 2023 and 2024, in the North Atlantic and East Pacific basins, showed that our model’s 5-day cyclone track prediction is, on average, 140 km closer to the true cyclone location than ENS — the leading global physics-based ensemble model from ECMWF. This is comparable to the accuracy of ENS’s 3.5-day predictions — a 1.5-day improvement that has typically taken over a decade to achieve."
https://deepmind.google/discover/blog/weather-lab-cyclone-predictions-with-ai/
"[...] it’s actually cheaper to build new renewables capacity than to keep existing coal plants running in the US, according to a 2023 report from Energy Innovation,[...]"
https://www.technologyreview.com/2025/06/19/1119027/us-coal-power-struggle/
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
“Signal’s position on this is very clear — we will not walk back, adulterate, or otherwise perturb the robust privacy and security guarantees that people depend on,” Whittaker said. “Whether that perturbation or backdoor is called client-side scanning, or the stripping of the encryption protections from one or another feature, similar to what Apple was pushed into doing in the U.K.”
《The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI content from the BBC website then asked them questions about the news.
It said the resulting answers contained "significant inaccuracies" and distortions.》