“The takeaway from Iran's tactics is that adversaries are likely to combine precision #weapons with #cheap, mass-produced #drones to overwhelm air defense systems so that the #precision weapons can get through.

Managing this threat means developing #LowCost defensive weapons, produced and used at #scale, to complement the interceptor missiles costing millions that are built to target aircraft and ballistic missiles.”

#Conflict / #economics <https://www.theregister.com/2026/03/23/nato_air_defenses/>

The drone swarm is coming, and NATO air defenses are too expensive to cope

Ukraine's battlefield lessons show quantity and affordability now trump exquisite hardware

The Register
NumKong: 2'000 Mixed Precision Kernels For All

Over 2'000 SIMD kernels for mixed-precision BLAS-like numerics across 7 languages, from Float6 to Float118, on RISC-V, Intel AMX, and Apple SME, in 5 MB.

Ash's Blog
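The core trick behind mixed-precision kernels like these can be sketched in plain NumPy: keep the inputs in a narrow type (here float16) but run the reduction in a wider accumulator (float32) so rounding error does not grow with vector length. This is a generic illustration, not NumKong's actual API.

```python
import numpy as np

def mixed_precision_dot(a_f16: np.ndarray, b_f16: np.ndarray) -> float:
    # Upcast once, then let the dot product accumulate in float32.
    return float(np.dot(a_f16.astype(np.float32), b_f16.astype(np.float32)))

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000).astype(np.float16)
b = rng.standard_normal(10_000).astype(np.float16)

result = mixed_precision_dot(a, b)
reference = float(np.dot(a.astype(np.float64), b.astype(np.float64)))
# result tracks the float64 reference closely despite float16 storage
```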

@epistatacadam @ChrisMayLA6
On the first clause, yes (but it measures what it measures).
On the second, it is a basic principle of #measurement that #accuracy and #precision should each be known and reckoned with, but that large departures from perfection in either or both do not prevent useful measurements being made.

(When the fiction is reduced, the instrument needs recalibrating.)
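The accuracy/precision point is easy to see with toy numbers (purely illustrative, not from the thread): one instrument is precise but inaccurate (tiny spread, constant bias), the other accurate but imprecise (no bias, large spread). Both remain usable once bias and spread are known; recalibration is just subtracting the measured bias.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 10.0

# Precise but inaccurate: small noise around a constant offset.
precise_biased = true_value + 0.5 + rng.normal(0.0, 0.01, size=1000)
# Accurate but imprecise: no offset, large noise.
accurate_noisy = true_value + rng.normal(0.0, 0.5, size=1000)

bias = precise_biased.mean() - true_value   # accuracy error (~0.5)
spread = precise_biased.std()               # precision (~0.01)

# Knowing the bias, the biased instrument is recalibrated by subtraction.
recalibrated = precise_biased - bias
```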

True 4-Bit Quantized Convolutional Neural Network Training on CPU: Achieving Full-Precision Parity

#Precision #CNN #Package

https://hgpu.org/?p=30680

Low-precision neural network training has emerged as a promising direction for reducing computational costs and democratizing access to deep learning research. However, existing 4-bit quantization …

hgpu.org
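For context, a minimal sketch of symmetric 4-bit integer quantization, the basic building block that 4-bit training schemes like the one above refine (the paper's actual method is not reproduced here): map weights to 16 levels in [-8, 7] with one per-tensor scale, and dequantize by multiplying the scale back.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    # Map the largest magnitude to level 7; round the rest to nearest.
    scale = float(np.abs(w).max()) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = (0.1 * rng.standard_normal(256)).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
# round-to-nearest keeps each element's error within half a step (s / 2)
```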
Iranian citizens are now Yelp reviewers for air strikes, apparently giving five-star ratings to the IDF's #precision ✈️🤯. Who needs spy satellites when Tehran residents are dialing in coordinates like a pizza order? 🤷‍♂️📞
https://www.iranintl.com/en/202603179685 #IranianCitizens #YelpReviews #IDF #AirStrikes #Warfare #Humor #HackerNews #ngated
Israeli official says tip from Tehran residents helped enable Larijani strike

Iran International
🌟 Introducing [Google Research's] Groundsource - An open source dataset of historic flood events from news articles.
--
https://doi.org/10.31223/X5RR2K / https://eartharxiv.org/repository/view/12083/ <-- shared paper
--
https://zenodo.org/records/18647054 <-- shared link to associated dataset
--
https://sites.research.google/gr/floodforecasting/ <-- shared link to Google Research flood forecasting effort entry page
--
#GoogleResearch #Google #Gemini #AI #ClimateTech #MachineLearning #DataScience #FloodForecasting #Sustainability #TechForGood #aggregation #curated #newsarticles #news #media #article #harvesting #reports #reporting #global #world #historic #naturalhazards #naturaldisaster #floods #flooding #flashflood #water #hydrology #extremeweather #climatechange #GDACS #Groundsource #GIS #spatial #mapping #spatialanalysis #spatiotemporal #geographic #openaccess #openscience #opendata #floodevents #LLM #gemini #largelanguagemodel #deeplearning #AI #precision #metrics #historicresource #model #modeling #forecasting #opensource

A quotation from Hyman Rickover

Nature is not as forgiving as Christ.

Hyman Rickover (1900-1986) Polish-American naval engineer, admiral [b. Chaim Gdala Rykower]
(Attributed)

More about this quote: wist.info/rickover-hyman/6585/

#quote #quotes #quotation #qotd #hymanrickover #engineering #forgiveness #JesusChrist #nature #precision #tolerance #marginoferror

Diagnosing FP4 inference: a layer-wise and block-wise sensitivity analysis of NVFP4 and MXFP4

#LLM #FP4 #NVFP4 #MXFP4 #Precision #AMD #NVIDIA

https://hgpu.org/?p=30661

Quantization addresses the high resource demand for large language models (LLMs) by alleviating memory pressure and bandwidth congestion and providing significantly scaled compute power with a tole…

hgpu.org
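A hedged sketch of the block-wise FP4 idea the analysis above probes: in the MXFP4 layout each block of 32 values shares one power-of-two scale, and every value snaps to the nearest representable E2M1 magnitude {0, 0.5, 1, 1.5, 2, 3, 4, 6}. (NVFP4 differs mainly in using 16-element blocks with an FP8 scale; neither vendor kernel is reproduced here, this is a fake-quantization illustration.)

```python
import numpy as np

E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0], dtype=np.float32)

def fake_quantize_mxfp4(x: np.ndarray, block: int = 32) -> np.ndarray:
    out = np.empty_like(x)
    for i in range(0, x.size, block):
        b = x[i:i + block]
        amax = float(np.abs(b).max())
        if amax == 0.0:
            out[i:i + block] = 0.0
            continue
        # Power-of-two shared scale chosen so the block fits in [-6, 6].
        scale = 2.0 ** np.ceil(np.log2(amax / 6.0))
        scaled = b / scale
        # Snap each magnitude to the nearest E2M1 level, keep the sign.
        idx = np.abs(np.abs(scaled)[:, None] - E2M1[None, :]).argmin(axis=1)
        out[i:i + block] = np.sign(scaled) * E2M1[idx] * scale
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(128).astype(np.float32)
x_hat = fake_quantize_mxfp4(x)
```

The layer-wise sensitivity question the post raises is then: how much does replacing a given layer's tensors with `x_hat`-style values move the model's outputs?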

Practical FP4 Training for Large-Scale MoE Models on Hopper GPUs

#CUDA #LLM #Hopper #FP4 #Precision #Package

https://hgpu.org/?p=30640

Training large-scale Mixture-of-Experts (MoE) models is bottlenecked by activation memory and expert-parallel communication, yet FP4 training remains impractical on Hopper-class GPUs without native…

hgpu.org