SPEAR-MM: Selective Parameter Evaluation and Restoration via Model Merging for Efficient Financial LLM Adaptation
Published 11 Nov 2025 · arXiv · Berkcan Kapusuzoglu
Overview
SPEAR-MM is a framework for adapting large language models (LLMs) to the financial domain without sacrificing general reasoning capability. It selectively evaluates which model parameters to adapt and restores the rest through spherical interpolation (SLERP) merging.
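The core merging operation can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal, assumption-laden example of spherical linear interpolation (SLERP) applied to a pair of flattened weight tensors (base model vs. domain-adapted model), which is the standard formulation such merging methods build on. The function name `slerp_merge` and the per-tensor treatment are illustrative choices, not from the paper.

```python
import numpy as np

def slerp_merge(w_base, w_adapted, t, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns the base weights, t = 1 the adapted weights;
    intermediate t moves along the great circle between them.
    """
    a = w_base.ravel().astype(np.float64)
    b = w_adapted.ravel().astype(np.float64)
    # Angle between the two weight vectors (directions only).
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    cos_omega = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if omega < eps:
        # Nearly parallel tensors: SLERP degenerates, fall back to LERP.
        merged = (1.0 - t) * a + t * b
    else:
        merged = (np.sin((1.0 - t) * omega) * a
                  + np.sin(t * omega) * b) / np.sin(omega)
    return merged.reshape(w_base.shape)
```

In a selective-restoration scheme like the one described above, such a merge would be applied only to the parameter groups flagged for restoration, with `t` controlling how much general-purpose capability is pulled back into the adapted model.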
Key Insights
- Retention of Capabilities: SPEAR-MM retains 91.2% of general reasoning capability, versus 69.7% with standard continual pretraining.
  - Evidence: Applied to LLaMA-3.1-8B on financial tasks.
  - Verifiable: Yes, through application results.
- Domain Adaptation Gains: Maintains 94% of domain-specific adaptation gains.
  - Evidence: Performance metrics from financial task applications.
  - Verifiable: Yes, through application results.
- Cost Efficiency: Reduces computational cost by 90%.
  - Evidence: Computational cost analysis.
  - Verifiable: Yes, through cost analysis.
BFSI Relevance
- Why Relevant: The framework allows financial institutions to adapt LLMs efficiently, preserving essential reasoning capabilities while focusing on domain-specific tasks.
- Primary Sector: Financial Services
- Subsectors: Asset Management, Corporate Banking
- Actionable Implications:
  - Evaluate the adoption of SPEAR-MM for cost-effective LLM adaptation.
  - Consider integrating SPEAR-MM to improve model efficiency and capability retention.