My first few sheets of #TriX in the Graflex are nice and dense. Much denser right off the bat than the Arista 100 I’ve finally finished working through. Two more exposures to go - have to wait for Sir-Leaks-A-Lot to dry out…

Two of the sheets look like I could’ve slid them further into the film holders. Will need to keep that in mind as I load some for a trip to the mountains this weekend. Planning on taking some #FP4 as well as various films for other formats.

#AnalogPhotography #FilmPhotography #LargeFormat #4x5 #DevelopYourOwn

@elaterite @BobHorowitz Here it is, by special request: the woman who dashed out to sweep the pavement in front of her house, especially for my photo.

The village is Miranda del Castañar, an amazing time-warp place. There is a great campsite nearby, El Burro Blanco; I absolutely love the place.

#BelieveInFilm #spain #espana #mamiya645 #fp4

Being alone doesn't mean being lonely. One airplane, one tree, one lake, one person …

#Alone #Lonely #Allein #Einsam #BlackandWhite #blackandwhitephotography #Schwarzweiss #Photoennoiretblanc #BW #bnw #bnwphotography #BWPhotography #Monochrome #noAI #keineKI #Fotografie #Photographie #Fotografia #Fotografía #Foto #Photo #Photography #Fairphone #FP4 #Dortmund #Phoenixsee

Diagnosing FP4 inference: a layer-wise and block-wise sensitivity analysis of NVFP4 and MXFP4

#LLM #FP4 #NVFP4 #MXFP4 #Precision #AMD #NVIDIA

https://hgpu.org/?p=30661

Quantization addresses the high resource demand for large language models (LLMs) by alleviating memory pressure and bandwidth congestion and providing significantly scaled compute power with a tole…

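Not from the linked paper, but as a rough illustration of what a block-wise sensitivity analysis measures: a minimal NumPy sketch that simulates MXFP4-style quantization (blocks of 32 values sharing a power-of-two scale over the E2M1 grid) and reports per-block error. The scale rule and function names are simplified assumptions, not the exact OCP MXFP4 or NVFP4 recipes.

```python
import numpy as np

# Positive magnitudes representable in FP4 E2M1 (sign handled separately).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block_fp4(block, max_code=6.0):
    """Quantize one block with a shared power-of-two scale (MXFP4-style)."""
    amax = np.max(np.abs(block))
    if amax == 0:
        return block.copy()
    # Power-of-two scale so the block maximum fits inside the E2M1 range
    # (a simplification of the spec's scale rule).
    scale = 2.0 ** np.ceil(np.log2(amax / max_code))
    scaled = block / scale
    # Round each magnitude to the nearest representable E2M1 value.
    idx = np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]).argmin(axis=1)
    return np.sign(scaled) * E2M1_GRID[idx] * scale

def blockwise_fp4_error(tensor, block_size=32):
    """Per-block mean-squared error after simulated FP4 quantization."""
    flat = np.ravel(tensor)
    pad = (-len(flat)) % block_size
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)
    return np.array([np.mean((b - quantize_block_fp4(b)) ** 2) for b in blocks])

errs = blockwise_fp4_error(np.random.default_rng(0).normal(size=4096))
print(f"mean block MSE {errs.mean():.2e}, worst block MSE {errs.max():.2e}")
```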

Practical FP4 Training for Large-Scale MoE Models on Hopper GPUs

#CUDA #LLM #Hopper #FP4 #Precision #Package

https://hgpu.org/?p=30640

Training large-scale Mixture-of-Experts (MoE) models is bottlenecked by activation memory and expert-parallel communication, yet FP4 training remains impractical on Hopper-class GPUs without native…

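Also not from the linked paper: a minimal PyTorch sketch of how FP4-quantized training is commonly emulated on hardware without native FP4 arithmetic, i.e. quantize-dequantize ("fake quant") in the forward pass with a straight-through estimator in the backward pass. Block size, value grid, and names are assumptions.

```python
import torch

# Positive magnitudes representable in FP4 E2M1 (sign handled separately).
E2M1 = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

class FakeQuantFP4(torch.autograd.Function):
    """Quantize-dequantize to a 4-bit grid; gradients pass straight through."""

    @staticmethod
    def forward(ctx, x, block=32):
        flat = x.reshape(-1, block)  # assumes numel is divisible by block
        scale = flat.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / 6.0
        scaled = flat / scale
        grid = E2M1.to(device=x.device, dtype=x.dtype)
        # Snap each magnitude to the nearest E2M1 code, then rescale.
        idx = (scaled.abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
        return (scaled.sign() * grid[idx] * scale).reshape_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: treat quantization as identity.
        return grad_out, None

# Usage: quantize weights on the fly inside a matmul, then backprop as usual.
w = torch.randn(64, 64, requires_grad=True)
x = torch.randn(8, 64)
y = x @ FakeQuantFP4.apply(w).t()
y.sum().backward()
print(w.grad.shape)  # gradients reach the full-precision weights via the STE
```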