#OpenAI GPT-5.3-Codex-Spark Now Running at 1K Tokens Per Second on BIG #Cerebras Chips
OpenAI showed a “build a snake game” task on GPT-5.3-Codex-Spark and on GPT-5.3-Codex at its medium setting. Both completed the task, but the Cerebras-backed Spark model finished in 9 seconds, compared with nearly 43 seconds for the non-Spark model. If you want to see the side-by-side video, the link is below. The Spark model is said to be higher quality than GPT-5.1-Codex as well as much faster.
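As a back-of-envelope check on those numbers (a sketch, assuming the ~1,000 tokens/sec headline rate is sustained, and noting the two models may not emit the same number of tokens):

# Rough math from the demo figures quoted above.
spark_seconds = 9            # GPT-5.3-Codex-Spark on Cerebras
baseline_seconds = 43        # GPT-5.3-Codex at medium
spark_tokens_per_sec = 1000  # headline throughput claim

wall_clock_speedup = baseline_seconds / spark_seconds          # ~4.8x on this task
implied_output_tokens = spark_seconds * spark_tokens_per_sec   # ~9,000 tokens, if the rate holds

print(f"{wall_clock_speedup:.1f}x faster, ~{implied_output_tokens} tokens")

Note that the wall-clock speedup on this one task is well below the up-to-15x throughput figure quoted in the next post; agent overhead and differing output lengths both blur the comparison.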
https://www.servethehome.com/openai-gpt-5-3-codex-spark-now-running-at-1k-tokens-per-second-on-big-cerebras-chips/ #WaferScale
OpenAI GPT-5.3-Codex-Spark Now Running at 1K Tokens Per Second on BIG Cerebras Chips

OpenAI GPT-5.3-Codex-Spark is now running on huge Cerebras WSE-3 chips at over 1000 tokens per second for super-fast inference

ServeTheHome

OpenAI just upgraded its code‑gen engine with Cerebras’ wafer‑scale chips, boosting inference speed up to 15×. The new GPT‑5.3‑Codex‑Spark model promises faster, more efficient developer tools and could reshape AI hardware roadmaps. Curious how this impacts open‑source tooling? Dive into the details. #OpenAI #Cerebras #GPT53CodexSpark #WaferScale

🔗 https://aidailypost.com/news/openai-deploys-cerebras-chips-15x-faster-code-generation

Andrew Feldman (@andrewdfeldman)

OpenAI and Cerebras have signed a multi-year agreement to deploy a total of 750 megawatts of Cerebras wafer-scale systems to serve OpenAI customers. Deployment begins in early 2026, and the announcement says that, once fully rolled out, it will be a massive high-speed AI infrastructure.

https://x.com/andrewdfeldman/status/2011542267774021869

#openai #cerebras #waferscale #aiinfrastructure #deployment

Andrew Feldman (@andrewdfeldman) on X

@OpenAI and @Cerebras have signed a multi-year agreement to deploy 750 megawatts of Cerebras wafer-scale systems to serve OpenAI customers. This has been a decade in the making. Deployment begins in early 2026, and when fully rolled out, it will be the largest high-speed AI…

X (formerly Twitter)

Cerebras has launched the wafer-sized CS-3 AI chip (WSE-3), integrating roughly a million cores at 25 kW of power. The chip achieves a massive 125 PFLOPS of inference performance, promising to set off a “tsunami” in high-performance computing (HPC).
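Combining these per-system figures with the 750 MW deal from the Feldman announcement above gives a deliberately naive ceiling (my arithmetic, not from either post; it ignores cooling, networking, and facility overhead):

# Naive capacity ceiling from the figures quoted in the two posts above.
deal_watts = 750e6        # 750 MW OpenAI-Cerebras agreement
system_watts = 25e3       # 25 kW per wafer-scale system
system_pflops = 125       # 125 PFLOPS per system

max_systems = deal_watts / system_watts                # 30,000 systems, at most
aggregate_eflops = max_systems * system_pflops / 1000  # 1 exaFLOPS = 1,000 PFLOPS
flops_per_watt = system_pflops * 1e15 / system_watts   # 5e12 FLOPS per watt

print(f"{max_systems:,.0f} systems, ~{aggregate_eflops:,.0f} exaFLOPS, {flops_per_watt/1e12:.0f} TFLOPS/W")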

#AI #ChipAI #HPC #Cerebras #Technology #WaferScale

https://www.reddit.com/r/singularity/comments/1p8w2el/cerebras_cs3_waferscale_millioncore_ai_chip_25kw/

The Top 10 #Semiconductor Stories of 2024: trillion-transistor GPUs, steel-slicing laser chips, particle accelerators, and more. #10 seems interesting: Expect a Wave of #Waferscale Computers. spectrum.ieee.org/top-semicond... #HPC via @ieeespectrum.bsky.social

The Top 10 Semiconductor Stori...

Bluesky Social
With “Big Chip,” #China Lays Out Aspirations For #Waferscale
China is planning 1,600-core chips that use an entire wafer, similar to American company #Cerebras's 'wafer-scale' designs. The Chinese Academy of Sciences has introduced an advanced 256-core multi-chiplet compute complex called the #Zhejiang #BigChip. The multi-chiplet design consists of 16 chiplets of 16 #RISCV cores each, connected to one another in a conventional #SMP manner over a network-on-chip.
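The quoted core counts are internally consistent, and extrapolating to the wafer-scale target is straightforward (the 100-chiplet figure is my inference, not stated in the article):

# Core-count arithmetic from the figures in the post above.
chiplets = 16
cores_per_chiplet = 16                          # RISC-V cores
big_chip_cores = chiplets * cores_per_chiplet   # 256-core compute complex, as reported

wafer_target = 1600                             # reported wafer-scale ambition
implied_chiplets = wafer_target // cores_per_chiplet  # 100 chiplets (my extrapolation)

print(big_chip_cores, implied_chiplets)         # 256 100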
https://www.nextplatform.com/2024/01/03/with-big-chip-china-lays-out-aspirations-for-waferscale/
With “Big Chip,” China Lays Out Aspirations For Waferscale

The end of Moore’s Law – the real Moore’s Law where transistors get cheaper and faster with every process shrink – is making chip makers crazy. And there…

The Next Platform

Cerebras has unveiled its Condor Galaxy supercomputer, a cluster that, when complete, will span nine sites and be capable of 36 exaFLOPS of FP16 performance.

The first phase of CG-1 reportedly delivers up to 2 exaFLOPS already.
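Assuming the nine sites are sized evenly (an assumption on my part; the article only gives totals), the per-site arithmetic works out as:

# Per-site math for Condor Galaxy, assuming evenly sized sites.
total_eflops_fp16 = 36
sites = 9
per_site_eflops = total_eflops_fp16 / sites   # 4 exaFLOPS per site

cg1_phase1_eflops = 2                         # reported first-phase capacity
print(per_site_eflops, cg1_phase1_eflops / per_site_eflops)  # 4.0 0.5 -> phase one is half of CG-1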

https://www.theregister.com/2023/07/20/cerebras_condor_galaxy_supercomputer/

#HPC #AI #Supercomputer #chips #waferscale

Cerebras's Condor Galaxy AI supercomputer takes flight carrying 36 exaFLOPS

Nine-site system built for UAE's G42, but there'll be plenty to spare

The Register

Newsflash: not at #PhotonicsWest, but churning out new and nicely diced #waferscale #optics to keep the party rolling.

#Fraunhofer IOF, #Photonics