➤ Uncovering Output Nondeterminism of Large Language Models in Finance, with Approaches for Validation and Mitigation
✤ https://arxiv.org/abs/2511.07585
Financial institutions face output nondeterminism (output drift) when applying large language models (LLMs) to reconciliations, regulatory reporting, and client communications, and this seriously undermines audit trustworthiness. This study quantifies output drift across five model architectures (7B to 120B parameters) on regulated financial tasks, finding that smaller models (Granite-3-8B and Qwen2.5-7B) achieve 100% output consistency at temperature T=0.0, while the 120B-parameter GPT-OSS-120B reaches only 12.5% consistency, overturning the conventional assumption that larger models are always superior for production deployment. The research team developed a deterministic test harness combining greedy decoding (T=0.0), fixed seeds, and retrieval ordering aware of SEC 10-K structure, with task-specific invariance checks for RAG, JSON, and SQL outputs that apply a finance-calibrated materiality threshold (±5%) and SEC citation validation. In addition, they built a three-tier model classification system to support risk-appropriate deployment decisions.
#MachineLearning #FinTech #LargeLanguageModels
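The summary above describes measuring output consistency across repeated generations at T=0.0 with fixed seeds. A minimal sketch of one plausible consistency metric is below; the function name and the modal-output definition of "consistency" are my assumptions, and the paper may score agreement differently:

```python
from collections import Counter

def consistency_rate(outputs):
    """Fraction of runs whose output matches the modal (most common) output.

    A run set like the paper's n=16 repeated generations counts as
    perfectly consistent (1.0) only when every output is byte-identical.
    """
    if not outputs:
        raise ValueError("no outputs to compare")
    _, modal_count = Counter(outputs).most_common(1)[0]
    return modal_count / len(outputs)

# Example: 16 runs where 2 drift from the modal phrasing.
runs = ["$4.2B net revenue"] * 14 + ["$4.2 billion net revenue"] * 2
print(consistency_rate(runs))  # 14/16 = 0.875
```

Exact string matching is deliberately strict here; for structured outputs (JSON, SQL) one would typically canonicalize before comparing, which is what the paper's task-specific invariance checks appear to address.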

LLM Output Drift: Cross-Provider Validation & Mitigation for Financial Workflows
Financial institutions deploy Large Language Models (LLMs) for reconciliations, regulatory reporting, and client communications, but nondeterministic outputs (output drift) undermine auditability and trust. We quantify drift across five model architectures (7B-120B parameters) on regulated financial tasks, revealing a stark inverse relationship: smaller models (Granite-3-8B, Qwen2.5-7B) achieve 100% output consistency at T=0.0, while GPT-OSS-120B exhibits only 12.5% consistency (95% CI: 3.5-36.0%) regardless of configuration (p<0.0001, Fisher's exact test). This finding challenges conventional assumptions that larger models are universally superior for production deployment.

Our contributions include: (i) a finance-calibrated deterministic test harness combining greedy decoding (T=0.0), fixed seeds, and SEC 10-K structure-aware retrieval ordering; (ii) task-specific invariant checking for RAG, JSON, and SQL outputs using finance-calibrated materiality thresholds (±5%) and SEC citation validation; (iii) a three-tier model classification system enabling risk-appropriate deployment decisions; and (iv) an audit-ready attestation system with dual-provider validation.

We evaluated five models (Qwen2.5-7B via Ollama, Granite-3-8B via IBM watsonx.ai, Llama-3.3-70B, Mistral-Medium-2505, and GPT-OSS-120B) across three regulated financial tasks. Across 480 runs (n=16 per condition), structured tasks (SQL) remain stable even at T=0.2, while RAG tasks show drift (25-75%), revealing task-dependent sensitivity. Cross-provider validation confirms deterministic behavior transfers between local and cloud deployments. We map our framework to Financial Stability Board (FSB), Bank for International Settlements (BIS), and Commodity Futures Trading Commission (CFTC) requirements, demonstrating practical pathways for compliance-ready AI deployments.
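The abstract's invariant checking treats two numeric outputs as equivalent when they fall inside the finance-calibrated ±5% materiality threshold. A minimal sketch of such a check follows; the function name, the relative-difference formulation, and the zero-reference handling are my assumptions about how the threshold might be applied:

```python
def within_materiality(reference: float, candidate: float,
                       threshold: float = 0.05) -> bool:
    """Return True when the candidate figure deviates from the reference
    by no more than the materiality threshold (default ±5%), measured as
    a relative difference against the reference value."""
    if reference == 0.0:
        # With a zero reference, any nonzero deviation is material.
        return candidate == 0.0
    return abs(candidate - reference) / abs(reference) <= threshold

# A drifted figure of 103.9 against a reference of 100.0 passes
# (3.9% deviation), while 106.0 fails (6% deviation).
print(within_materiality(100.0, 103.9))  # True
print(within_materiality(100.0, 106.0))  # False
```

Under such a rule, harmless rounding drift between runs (e.g. "$4.20B" vs "$4.2B") passes, while a materially different figure flags the run for audit review.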