A $196 fine-tuned 7B model outperforms OpenAI o3 on document extraction

https://arxiv.org/abs/2509.22906

#HackerNews #fine-tuned-model #document-extraction #OpenAI #AI-research #machine-learning

Extract-0: A Specialized Language Model for Document Information Extraction

This paper presents Extract-0, a 7-billion-parameter language model specifically optimized for document information extraction, which achieves performance exceeding that of models with parameter counts several orders of magnitude larger. Through a novel combination of synthetic data generation, supervised fine-tuning with Low-Rank Adaptation (LoRA), and reinforcement learning via Group Relative Policy Optimization (GRPO), Extract-0 reaches a mean reward of 0.573 on a benchmark of 1,000 diverse document extraction tasks, outperforming GPT-4.1 (0.457), o3 (0.464), and GPT-4.1-2025 (0.459). The training methodology employs a memory-preserving synthetic data generation pipeline that produces 280,128 training examples from diverse document sources, followed by parameter-efficient fine-tuning that modifies only 0.53% of model weights (40.4M out of 7.66B parameters). The reinforcement learning phase introduces a novel semantic similarity-based reward function that handles the inherent ambiguity in information extraction tasks. This research demonstrates that task-specific optimization can yield models that surpass general-purpose systems while requiring substantially fewer computational resources.
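The fine-tuning stage touching only 0.53% of weights (40.4M of 7.66B) is characteristic of LoRA, which trains small low-rank adapter matrices while freezing the base model. Below is a minimal sketch using the Hugging Face `peft` library; the base checkpoint, rank, and target modules are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical LoRA setup with Hugging Face `peft`. The abstract only gives the
# trainable fraction: 40.4e6 / 7.66e9 ~= 0.0053, i.e. ~0.53% of all weights.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base model for illustration; the paper's base checkpoint is not named here.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

config = LoraConfig(
    r=16,              # adapter rank (assumed)
    lora_alpha=32,     # adapter scaling (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Prints trainable vs. total parameter counts and the trainable percentage,
# which for a 7B model with a config like this lands well under 1%.
model.print_trainable_parameters()
```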
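The semantic similarity-based reward is what lets training tolerate ambiguity, e.g. the same date extracted in two different surface forms. The paper's exact reward is not reproduced here; the following hedged sketch scores each extracted field by embedding cosine similarity using `sentence-transformers`, with an assumed field-matching scheme where missing or spurious fields score zero.

```python
# Illustrative semantic-similarity reward for structured extraction outputs.
# The embedding model and field handling are assumptions, not the paper's.
from sentence_transformers import SentenceTransformer
import numpy as np

_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def extraction_reward(predicted: dict, reference: dict) -> float:
    """Mean cosine similarity across fields; unmatched fields count as 0.0."""
    scores = []
    for key in set(predicted) | set(reference):
        if key not in predicted or key not in reference:
            scores.append(0.0)  # penalize missing or extra fields (assumed rule)
            continue
        a, b = _encoder.encode([str(predicted[key]), str(reference[key])])
        scores.append(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
    return float(np.mean(scores)) if scores else 0.0

# Semantically equivalent values score high even when exact-match would fail.
print(extraction_reward({"date": "March 5, 2021"}, {"date": "2021-03-05"}))
```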
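GRPO then turns those scalar rewards into a learning signal without a learned value model: it samples a group of completions per prompt and normalizes each completion's reward against its group. The advantage computation below follows the published GRPO formulation; the group rewards and epsilon are illustrative, not values from this paper.

```python
# Group-relative advantage as in GRPO: A_i = (r_i - mean(r)) / (std(r) + eps),
# computed over the G completions sampled for one prompt.
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rewards for one prompt's completion group -> group-relative advantages."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + eps)

rewards = np.array([0.41, 0.57, 0.62, 0.35])  # e.g. from the reward sketch above
print(grpo_advantages(rewards))  # above-group-average completions get positive advantage
```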
