➤ Borrowing Transformer Embeddings to Usher in a New Era of Scalable Quantum SVMs
✤ https://arxiv.org/abs/2508.00024
This work proposes a novel quantum-classical machine learning approach that combines pretrained Vision Transformer (ViT) embeddings with class-balanced k-means distillation to address the scalability challenges that existing Quantum Support Vector Machines (QSVMs) face under high-dimensional quantum states and hardware limitations. Experiments show that ViT embeddings markedly enhance quantum advantage, delivering accuracy gains of up to 8.02% on Fashion-MNIST and 4.42% on MNIST, whereas conventional convolutional neural network (CNN) features fail to do so. Using 16-qubit tensor network simulation (implemented with cuTensorNet), the study further provides the first systematic evidence that quantum kernel advantage depends critically on the choice of embedding, revealing a fundamental synergy between Transformer attention and quantum feature spaces.
#QuantumComputing #MachineLearning #SVM #Embeddings #Transformer #VisionTransformer

Embedding-Aware Quantum-Classical SVMs for Scalable Quantum Machine Learning
Quantum Support Vector Machines face scalability challenges due to high-dimensional quantum states and hardware limitations. We propose an embedding-aware quantum-classical pipeline combining class-balanced k-means distillation with pretrained Vision Transformer embeddings. Our key finding: ViT embeddings uniquely enable quantum advantage, achieving up to 8.02% accuracy improvements over classical SVMs on Fashion-MNIST and 4.42% on MNIST, while CNN features show performance degradation. Using 16-qubit tensor network simulation via cuTensorNet, we provide the first systematic evidence that quantum kernel advantage depends critically on embedding choice, revealing fundamental synergy between transformer attention and quantum feature spaces. This provides a practical pathway for scalable quantum machine learning that leverages modern neural architectures.
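Below is a minimal sketch of what such an embedding-aware quantum-classical pipeline could look like, assuming PennyLane for the quantum kernel and scikit-learn for k-means distillation and the SVM. The angle-embedding feature map, per-class prototype count, and PCA compression to one angle per qubit are illustrative assumptions, not the paper's reported configuration (which uses cuTensorNet-based tensor network simulation).

```python
# Hypothetical sketch: ViT embeddings -> class-balanced k-means distillation
# -> 16-qubit quantum kernel SVM. Details are assumptions, not the paper's setup.
import numpy as np
import pennylane as qml
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

N_QUBITS = 16  # matches the 16-qubit simulation scale reported in the paper

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    # Fidelity-style kernel circuit with an angle-embedding feature map;
    # the paper's exact encoding may differ.
    qml.AngleEmbedding(x1, wires=range(N_QUBITS))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(N_QUBITS))
    return qml.probs(wires=range(N_QUBITS))

def quantum_kernel(A, B):
    # Kernel entry = |<phi(b)|phi(a)>|^2 = probability of the all-zeros outcome.
    return np.array([[kernel_circuit(a, b)[0] for b in B] for a in A])

def class_balanced_distill(X, y, per_class):
    """Keep `per_class` k-means prototypes per class (class-balanced distillation)."""
    Xd, yd = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        km = KMeans(n_clusters=per_class, n_init=10, random_state=0).fit(Xc)
        # Use the sample nearest each centroid as a class prototype.
        for centre in km.cluster_centers_:
            idx = np.argmin(np.linalg.norm(Xc - centre, axis=1))
            Xd.append(Xc[idx])
            yd.append(c)
    return np.array(Xd), np.array(yd)

# Usage sketch (X_emb: precomputed ViT embeddings, e.g. CLS tokens; y: labels).
# PCA compresses each embedding to one rotation angle per qubit before encoding.
# Xd, yd = class_balanced_distill(X_emb, y, per_class=20)
# Xq = PCA(n_components=N_QUBITS).fit_transform(Xd)
# svm = SVC(kernel="precomputed").fit(quantum_kernel(Xq, Xq), yd)
```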