[Translated] How to Predict the Future
Introduces a methodology for predicting when future technologies will arrive by exploiting the linear trend in hardware performance improvement. The author has logged the performance data of his own computers since 1994, calculated the average annual growth rate, and used it to accurately predict the arrival of various technologies such as streaming video, music download services, and internet speeds. The methodology makes predictions by combining the linear hardware trend with nonlinear software progress, and takes crowdsourcing effects into account.
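As a rough illustration of this kind of trend extrapolation (the growth rate and thresholds below are made-up placeholders, not the author's logged data), one can project when a capability crosses a feasibility threshold from a measured compound annual growth rate:

```python
import math

def years_until_feasible(current, required, annual_growth):
    """How many years until `current` capacity reaches `required`
    at a fixed compound annual growth rate."""
    if current >= required:
        return 0.0
    return math.log(required / current) / math.log(1.0 + annual_growth)

home_bandwidth_mbps = 1.5    # hypothetical measurement today
streaming_needs_mbps = 8.0   # hypothetical threshold for the service
annual_growth = 0.5          # hypothetical ~50% improvement per year

print(f"Streaming viable in ~"
      f"{years_until_feasible(home_bandwidth_mbps, streaming_needs_mbps, annual_growth):.1f} years")
```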
Future Telescope Finally Complete. Ignores National Destiny, Only Broadcasts Lucky Bag Stock Status in Real-Time.
The 'Space-Time Telescope,' built at the cost of national fortune, didn't show a glorious future—it showed a department store lucky bag frenzy. Instead of GDP forecasts, the Prime Minister watched 'Limited Edition Figure, 3 Left' flash on screen.
https://alt.andpaper.net/en/articles/20251128-future-scope-lucky-bag/
#space-time-telescope #luckybag #resale-market #futureprediction
Modern CPUs are actually pretty good at predicting the indirect branch inside an interpreter loop, _contra_ the conventional wisdom. We take a deep dive into the ITTAGE indirect branch prediction algorithm, which is capable of making those predictions, and draw some connections to other interests of mine in fuzzing and reinforcement learning.
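For a feel of what ITTAGE does, here is a deliberately toy Python sketch of the core idea: several tagged tables indexed by hashes of progressively longer target history, where the longest matching history wins and a last-target table acts as fallback. The class name, table sizes, and allocation policy are my own simplifications for illustration, not the article's or any real hardware's design (which also uses confidence and usefulness counters and carefully chosen hash functions).

```python
import random

class ToyITTAGE:
    """Toy sketch of an ITTAGE-style indirect branch target predictor."""

    def __init__(self, history_lengths=(4, 8, 16, 32), table_bits=10):
        self.hist_lens = history_lengths
        self.size = 1 << table_bits
        self.tables = [{} for _ in history_lengths]  # index -> (tag, target)
        self.base = {}                               # pc -> last observed target
        self.history = []                            # recent indirect targets

    def _index_and_tag(self, pc, length):
        h = hash((pc, tuple(self.history[-length:])))
        return h % self.size, (h >> 16) & 0xFFFF

    def predict(self, pc):
        # The matching entry with the longest history wins;
        # otherwise fall back to the last target seen at this pc.
        for i in reversed(range(len(self.hist_lens))):
            idx, tag = self._index_and_tag(pc, self.hist_lens[i])
            entry = self.tables[i].get(idx)
            if entry is not None and entry[0] == tag:
                return entry[1]
        return self.base.get(pc)

    def update(self, pc, actual_target):
        if self.predict(pc) != actual_target:
            # On a mispredict, allocate an entry (real ITTAGE prefers a
            # component with a longer history than the one that predicted).
            i = random.randrange(len(self.hist_lens))
            idx, tag = self._index_and_tag(pc, self.hist_lens[i])
            self.tables[i][idx] = (tag, actual_target)
        self.base[pc] = actual_target
        self.history.append(actual_target)
        del self.history[:-max(self.hist_lens)]
```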
Outcome-Based Reinforcement Learning to Predict the Future
https://arxiv.org/abs/2505.17989
#HackerNews #OutcomeBasedReinforcementLearning #FuturePrediction #AIResearch #MachineLearning #ReinforcementLearning
Reinforcement learning with verifiable rewards (RLVR) has boosted math and coding in large language models, yet there has been little effort to extend RLVR into messier, real-world domains like forecasting. One sticking point is that outcome-based reinforcement learning for forecasting must learn from binary, delayed, and noisy rewards, a regime where standard fine-tuning is brittle. We show that outcome-only online RL on a 14B model can match frontier-scale accuracy and surpass it in calibration and hypothetical prediction market betting by adapting two leading algorithms, Group-Relative Policy Optimisation (GRPO) and ReMax, to the forecasting setting. Our adaptations remove per-question variance scaling in GRPO, apply baseline-subtracted advantages in ReMax, hydrate training with 100k temporally consistent synthetic questions, and introduce lightweight guard-rails that penalise gibberish, non-English responses and missing rationales, enabling a single stable pass over 110k events. Scaling ReMax to 110k questions and ensembling seven predictions yields a 14B model that matches frontier baseline o1 on accuracy on our holdout set (Brier = 0.193, p = 0.23) while beating it in calibration (ECE = 0.042, p < 0.001). A simple trading rule turns this calibration edge into $127 of hypothetical profit versus $92 for o1 (p = 0.037). This demonstrates that refined RLVR methods can convert small-scale LLMs into potentially economically valuable forecasting tools, with implications for scaling this to larger models.
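As a rough sketch of one adaptation the abstract mentions, dropping GRPO's per-question variance scaling of advantages, assuming a negative-Brier outcome reward on resolved binary questions; the group size and numbers are illustrative, not the paper's actual setup:

```python
import numpy as np

def brier_reward(prob_yes, outcome):
    """Outcome-only reward for a binary question: negative Brier score."""
    return -(prob_yes - outcome) ** 2

def group_relative_advantages(rewards, scale_by_std=False):
    """Group-relative advantages over one question's sampled answers.
    Vanilla GRPO divides by the group's std; the adaptation described
    in the abstract drops that per-question variance scaling."""
    r = np.asarray(rewards, dtype=float)
    adv = r - r.mean()
    return adv / (r.std() + 1e-8) if scale_by_std else adv

# Four hypothetical sampled forecasts for a question that resolved YES.
probs = [0.70, 0.55, 0.90, 0.30]
rewards = [brier_reward(p, outcome=1.0) for p in probs]
print(group_relative_advantages(rewards))                     # adapted
print(group_relative_advantages(rewards, scale_by_std=True))  # vanilla GRPO
```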
We asked different AI engines a news-related question that required predicting the future and calling upon their RAG capabilities #TTMO #AI #ChatGPT #CoPilot #mistral #claude #RAG #Predictions #technology #tech #LLM #FuturePrediction #MachineLearning
https://medium.com/@chribonn/ai-got-it-wrong-news-7d9863ab39b5
AI Got It Wrong - News
We asked different AI engines a news-related question that required predicting the future, using one news item to evaluate their RAG capabilities. It's fascinating to see how each AI approaches the challenge! Who do you think gives the best response? Let us know in the comments! 👇
#TTMO #AI #ChatGPT #DeepSeek #AIStudio #CoPilot #mistral #claude #RAG #Predictions #technology #tech #LLM #FuturePrediction #MachineLearning