Wes Roth (@WesRothMoney)
Artificial Analysis has released Intelligence Index v4.0, its metric for measuring generalist AI model intelligence. The update reduces saturation among top-model scores, bringing the highest score down from 73 under v3.0 to 50, and introduces 3 new evaluations, a refresh that Artificial Analysis says improves the precision of model comparison and evaluation.
https://x.com/WesRothMoney/status/2008984482020274482
#intelligenceindex #benchmark #aimetrics #artificialanalysis #evaluation

Wes Roth (@WesRothMoney) on X
Artificial Analysis has released Intelligence Index v4.0, their most advanced and rigorous synthesis metric yet for measuring generalist AI model intelligence.
The updated index:
🔹Reduces score saturation: top models now score 50 (down from 73 in v3.0)
🔹Introduces 3 new evaluations
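The scoring change is easy to picture: a synthesis metric that averages component evals saturates once frontier models near the ceiling on the easier components, and adding harder, unsaturated evals pulls the composite back down. A toy sketch of that mechanism, with entirely made-up component names and numbers (this is not Artificial Analysis's actual evaluation set or weighting):

```python
# Toy illustration of composite-score saturation. All component names
# and scores below are invented for illustration; they are not
# Artificial Analysis's real evals or weights.

def composite(scores: dict[str, float]) -> float:
    """Equal-weighted mean of component eval scores (0-100)."""
    return sum(scores.values()) / len(scores)

# v3-style mix: a frontier model near the ceiling on older components.
v3 = {"knowledge": 92, "math": 85, "code": 78, "agentic": 38}

# v4-style mix: same model plus three harder, unsaturated evals that
# drag the composite down and restore headroom at the top.
v4 = {**v3, "hard_eval_1": 22, "hard_eval_2": 15, "hard_eval_3": 18}

print(f"v3-style composite: {composite(v3):.0f}")  # 73
print(f"v4-style composite: {composite(v4):.0f}")  # 50
```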

I'm continuing research in LLM evals for apps and single prompts, which IMO is one of the most challenging fields in machine learning right now. I'm excited to learn more about Arize Phoenix and their "open-source observability library".
#LLM #MachineLearning #AIEvaluation #Evaluation #AIMetrics #MLOps #AIInsights #ArizeAI #Phoenix #AIEthics #AITransparency #ResponsibleAI
I'm linking a very informative video of theirs that got me interested in what they built:
https://www.youtube.com/watch?v=9Ay0WcjrdGE

LLM Evals and LLM as a Judge: Fundamentals
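For readers new to the pattern the video covers, "LLM as a judge" means a second model call that grades your application's output against a rubric, with the verdict constrained to a small label set so it can be parsed and aggregated. A minimal sketch of that pattern, with a hypothetical call_llm helper standing in for whatever client you use; this is the generic idea, not Arize Phoenix's actual API:

```python
# Generic LLM-as-a-judge sketch. `call_llm` is a hypothetical stand-in
# for any chat-completion client; this is not Arize Phoenix's API.

JUDGE_TEMPLATE = """You are grading a question-answering system.
Question: {question}
Reference answer: {reference}
System answer: {answer}
Reply with exactly one word: correct or incorrect."""

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to a judge model, return text."""
    raise NotImplementedError("wire this to your LLM client of choice")

def judge(question: str, reference: str, answer: str) -> bool:
    """Ask the judge for a constrained verdict and parse it.

    Restricting output to a fixed label set ("rails") keeps the eval
    cheap to parse and easier to audit than free-form critique.
    """
    prompt = JUDGE_TEMPLATE.format(
        question=question, reference=reference, answer=answer)
    return call_llm(prompt).strip().lower() == "correct"
```

Observability tooling like Phoenix builds on this basic loop by tracing each judged call, so you can inspect where the judge and your application disagree.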
YouTube"Re-evaluating GPT-4’s bar exam performance" (open access)
Maybe the original claims of performance on the bar exam were not what they seemed.
https://link.springer.com/article/10.1007/s10506-024-09396-9
#openai #llm #llms #ai #aimetrics

Re-evaluating GPT-4’s bar exam performance - Artificial Intelligence and Law
Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and ~48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be ~62nd percentile, including ~42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to ~48th percentile overall, and ~15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.
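The statistical crux is that a percentile is only defined relative to a comparison population, so the same scaled score of 298 lands at very different percentiles against a repeat-taker-heavy February cohort than against first-timers or passers. A toy sketch with synthetic normal distributions, whose means and spreads are reverse-engineered guesses chosen to roughly mimic the paper's estimates, not NCBE data:

```python
# Toy percentile illustration: one fixed score, several comparison
# populations. Distribution parameters are invented to roughly mimic
# the paper's estimates; they are not official NCBE statistics.
import numpy as np

rng = np.random.default_rng(0)
gpt4_ube = 298  # GPT-4's reported scaled UBE score

populations = {
    "feb_repeat_heavy": (266, 25),  # skewed toward lower-scoring repeaters
    "july_all_takers":  (285, 26),
    "first_timers":     (292, 20),
    "passers_only":     (299, 15),
}

for name, (mu, sigma) in populations.items():
    scores = rng.normal(mu, sigma, 100_000)
    pct = (scores < gpt4_ube).mean() * 100  # empirical percentile
    print(f"{name:>17}: ~{pct:.1f} percentile")
```

Against the weak February cohort the score sits near the 90th percentile; against passers only it falls below the median, which is the paper's point in miniature.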