SK Hynix's memory semiconductor sales to Nvidia reached 23 trillion won in 2025, more than doubling from 10.9 trillion won the previous year, accounting for 24% of the chipmaker's total revenue as the AI accelerator market continues its rapid expansion.
#YonhapInfomax #SKHynix #Nvidia #MemorySemiconductors #HighBandwidthMemory #AIAccelerator #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=110410
SK Hynix's Nvidia Sales Hit 23 Trillion Won Last Year - 2.1 Times Year-Over-Year

Yonhap Infomax
#Google and #Accel’s #AIaccelerator programme, Atoms, selected five #startups for its latest cohort, none of which were “AI wrappers” built on existing models. The programme, which provides funding and resources to early-stage #AIstartups, received a high volume of applications, with a focus on #enterprise applications. https://techcrunch.com/2026/03/15/google-and-accel-cut-through-wrappers-in-4000-ai-startup-pitches-to-pick-five-tied-to-india/?eicker.news #tech #media #news
Google, Accel India accelerator chooses 5 startups and none are 'AI wrappers' | TechCrunch

Google and Accel say about 70% of AI startup pitches tied to India were "wrappers" as they reviewed more than 4,000 applications for their Atoms cohort.

TechCrunch

Min Choi (@minchoi)

Nvidia has unveiled its next-generation accelerator 'Vera Rubin' (launch: H2 2026). The announcement claims 10x the power efficiency of Blackwell, a 10x reduction in inference token cost, and a 4x reduction in the number of GPUs needed to train the same MoE model, touting major improvements in energy and inference costs.

https://x.com/minchoi/status/2026800952263496021

#nvidia #verarubin #hardware #aiaccelerator

Min Choi (@minchoi) on X

Nvidia just revealed Vera Rubin. Ships H2 2026. The numbers are wild:
→ 10x more performance per watt vs Blackwell
→ 10x cheaper inference token cost
→ 4x fewer GPUs to train the same MoE model
Energy was the biggest bottleneck in AI. Nvidia just made it 10x cheaper.

X (formerly Twitter)
SK Group Chairman Chey Tae-won and NVIDIA CEO Jensen Huang met at 99 Chicken in Silicon Valley to discuss HBM supply and AI collaboration, signaling deepening ties between South Korea's memory chipmakers and global tech leaders amid the rollout of next-generation AI accelerators.
#YonhapInfomax #SKGroup #NVIDIA #HBM #AIAccelerator #SKHynix #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=104620
Chey Tae-won, Jensen Huang Hold 'Chimaek' Meeting at 99 Chicken in US—HBM, AI Collaboration Discussed

Yonhap Infomax

Quick little follow-up analysis on broader #cloudcomputing market implications for the Microsoft #Maia200 news this week, as #AIinference continues to be a hot topic in #AIinfrastructure: Could it free up #GPU capacity for customers in #Azure? Offer a cheaper alternative to #Nvidia? Even chip away (see what I did there?) at Nvidia's overall market dominance?

Michael Leone, Naveen Chhabra and Steven Dickens share their takes:

https://www.techtarget.com/searchcloudcomputing/news/366637986/Microsoft-Maia-200-AI-chip-could-boost-cloud-GPU-supply

#AIaccelerator #TPU #Trainium #cloud #AIchip

Microsoft Maia 200 AI chip could boost cloud GPU supply

Industry watchers predict ancillary effects for enterprise cloud buyers from Microsoft's AI accelerator launch this week, from GPU availability to Nvidia disruption.

TechTarget

techAU (@techAU)

Microsoft has unveiled the Maia 200, custom AI silicon (a cloud/server AI accelerator) designed to handle large-scale AI inference demand, introduced as the successor to the previous-generation Maia 100. A significant product announcement in the AI hardware race.

https://x.com/techAU/status/2015884965897207897

#microsoft #maia200 #aiaccelerator #silicon

techAU (@techAU) on X

Microsoft Unveils Maia 200: The Next Generation of Custom AI Silicon. Microsoft has officially announced the Maia 200, its latest custom-designed AI accelerator specifically engineered to handle the massive demands of AI inference. As the successor to the original Maia 100…

X (formerly Twitter)

The global GPU market continues to tighten as AI training, inference workloads, and data center upgrades accelerate. High-end graphics cards are seeing sustained demand from AI startups, cloud providers, and enterprise buyers—making now a strong time to sell GPUs you’re no longer using.

If your business has surplus or decommissioned graphics cards (NVIDIA, AMD, data center or workstation GPUs), you can request a fast, professional quote from BuySellRam.com.
Check this page:
https://www.buysellram.com/sell-graphics-card-gpu/

#SellGPU #GraphicsCard #AIHardware #DataCenter #ITAD #Ewaste #CircularEconomy #BuySellRam #Nvidia #AMD #AIAccelerator #TPU #VideoCard #TechRecycling #tech

GPUs/Graphics Cards

Looking to sell your graphics cards? BuySellRam offers top-dollar payouts for new and used GPUs in bulk. Instant quotes, free shipping, and fast payment for IT departments, data centers, and individual sellers.

BuySellRam
The new Raspberry Pi AI HAT+ 2 is a $130 add-on board with a Hailo 10H chip and 8GB of LPDDR4X RAM for running LLMs without using the Pi's built-in hardware. But the Pi itself can actually run those models faster (due to higher power limits). https://www.jeffgeerling.com/blog/2026/raspberry-pi-ai-hat-2/ #RaspberryPi #AIAccelerator #RaspberryPiAiHatPlus2
Raspberry Pi's new AI HAT adds 8GB of RAM for local LLMs

Today Raspberry Pi launched their new $130 AI HAT+ 2, which includes a Hailo 10H and 8 GB of LPDDR4X RAM. With that, the Hailo 10H is capable of running LLMs entirely standalone, freeing the Pi's CPU and system RAM for other tasks. The chip runs at a maximum of 3W, with 40 TOPS of INT8 NPU inference performance, in addition to machine-vision performance equivalent to the 26 TOPS INT4 of the earlier AI HAT's Hailo 8.

Jeff Geerling

Razer is demonstrating an "AI accelerator" box built around Tenstorrent's Wormhole n150 processor at CES. The device is expected to be a 12 GB PCIe board priced at around $1,000, but there are no real-world performance reviews yet. #Razer #AI #Tenstorrent #CES #AIAccelerator #TríTuệNhânTạo #CôngNghệ

https://www.reddit.com/r/LocalLLaMA/comments/1q617ug/razer_is_demonstrating_a_ai_accelerator_box_with/