https://screwlisp.small-web.org/conditions/symbolic-d-l/
#Symbolic #deepLearning #inferencing with #commonLisp #conditions

The #DL from before, but it works via a mixture of condition handlers and restarts.

This turned out to be mostly condition-handling boilerplate, but it was interesting to me personally, at least!

Not sure about this construction I used (paraphrasing):

(prog ((c nil))
 start
   (restart-case
       (if c
           (signal c))
     (resignal (condition)
       (setq c condition)
       (go start))))
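A minimal, self-contained sketch of how a handler could drive that loop. The condition type COUNTED, its N slot, and the counting logic are my inventions for illustration; only the PROG/RESTART-CASE/GO skeleton comes from the post:

```lisp
;; Hypothetical illustration: a handler repeatedly swaps in a new
;; condition via the RESIGNAL restart until it declines to restart.
(define-condition counted (condition)
  ((n :initarg :n :reader counted-n)))

(defun resignal-loop ()
  (prog ((c (make-condition 'counted :n 0)))
   start
     (restart-case
         (if c
             (signal c))              ; handlers run inside the restart's extent
       (resignal (condition)
         (setq c condition)           ; store the replacement condition
         (go start)))                 ; re-enter the loop and signal again
     (return :done)))

(handler-bind ((counted
                 (lambda (c)
                   (format t "saw n = ~d~%" (counted-n c))
                   (when (< (counted-n c) 3)
                     (invoke-restart 'resignal
                                     (make-condition 'counted
                                                     :n (1+ (counted-n c))))))))
  (resignal-loop))
;; prints saw n = 0 through saw n = 3, then SIGNAL returns NIL,
;; the handler declines, and RESIGNAL-LOOP returns :DONE
```

Because SIGNAL runs the handler within the dynamic extent of the RESTART-CASE, invoking RESIGNAL unwinds to the restart body, which is what makes the (GO START) back into the PROG's tagbody legal.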

#programming #ai

Epoch AI’s latest report reveals how inference costs are dropping, frontier AI is becoming accessible on consumer-level hardware, and compute infrastructure is expanding rapidly — fueling broader adoption and demand for AI GPUs, servers, and efficient compute setups. These shifts are reshaping the AI hardware market... Read more: https://www.buysellram.com/blog/what-epoch-ais-2025-data-insights-mean-for-the-ai-hardware-market/

#AIHardware #GPU #AIMarketTrends #AICompute #AI #DataCenter #Inferencing #TechInsights #SecondaryMarket #Technology

Looking into AI hardware? There's a big difference between building a machine for inference (inferencing) and one for training models. A Reddit user is asking about the optimal build for inference only, specifically for DeepSeek OCR, to avoid relying on cloud APIs. Any suggestions?

#AI #Inferencing #Hardware #DeepLearning #LocalLLaMA #CấuHìnhAI #SuyLuậnAI #PhầnCứngMáyTính #AIoT

https://www.reddit.com/r/LocalLLaMA/comments/1ohfmoc/what_is_the_best_build_for_inferencing/

"Today I'd like to dig into **LLM inferencing** – focusing on practical concerns like efficiency, quantization, optimization, and deployment pipelines. If you know of any documents, papers, open-source frameworks, or real-world case studies that could help, please share! #AI #LLM #Inferencing #QLTN #TinTứcTech"

https://www.reddit.com/r/LocalLLaMA/comments/1o8wi46/exploring_llm_inferencing_looking_for_solid/

Check out the latest on Docker Model Runner! And we would love your contributions. Star, fork, and contribute to the project. Let's build the future of AI together! #Docker #OpenSource #AI #LLM #inferencing #llamacpp

GitHub: https://github.com/docker/model-runner
Blog Post: https://www.linkedin.com/pulse/top-docker-model-runner-features-developers-love-whats-next-docker-1axwf/?trackingId=mB2AhjTlqJsroeyJ0CffVg%3D%3D

Amazon, AMD, and others are stepping up with credible alternatives to Nvidia's AI chips, particularly for inferencing—a key growth area in AI. 💡🤖 #AI #Nvidia #Amazon #AMD #TechInnovation #ArtificialIntelligence #AIChips #MachineLearning #TechTrends #Inferencing #FutureOfAI

Microsoft's #AI biz on track to $10B annual run rate next quarter

Microsoft turning away #AI #training workloads – #inferencing makes better money

Azure's acceleration continues, but so do costs

https://www.theregister.com/2024/10/31/microsoft_q1_fy_2025/

ChatGPT outputs gibberish: problems with inferencing | heise online
https://heise.de/-9636964 #Chatbot #ChatGPT #Inferencing

Gibberish from ChatGPT: the chatbot had trouble producing sensible answers – noticeably more than the usual hallucinations.

@drahardja I wouldn't be surprised if VR is the tech that breaks the camel's back and necessitates #Photonics, or #OpticalChips.

You can parallelize computing a lot more with #OpticalLogic, and early #OpticalProcessors are rolling off manufacturing floors and being used for #ML #Inferencing and #GraphicsProcessing.

With the right #Technology you could composite #MR layers with zero latency, using light instead of electricity.

heise+ | Nvidia's A100 with Ampere architecture: the AI accelerator in detail

A closer look at the Ampere architecture shows what substance there is to Nvidia's promise of 20x the performance of its predecessor.

#A100 #Ampere-Architektur #Chip #High-Performance-Computing #Inferencing #KünstlicheIntelligenz #MachineLearning #Nvidia #NvidiaAmpere #Rechenzentrum