

LFM2.5 is a family of AI models optimized for on-device deployment, representing Liquid AI's most capable release for edge AI applications. It builds on the LFM2 device-optimized architecture and enables access to private, fast, and always-on intelligence on any device.
The LFM2.5 family includes multiple specialized models: Text models offer uncompromised quality for high-performance on-device workflows; Audio models run natively on constrained hardware and are 8x faster than their predecessors; and Vision-Language models improve multi-image, multilingual vision understanding. All models are optimized for instruction following so they can serve as building blocks for on-device agentic AI.
Compared to LFM2, pretraining was extended from 10T to 28T tokens and the post-training pipeline was significantly scaled up with reinforcement learning. The Audio model uses a native audio-language approach that accepts both speech and text as input and output modalities, eliminating information barriers between components and dramatically reducing end-to-end latency compared to pipelined approaches.
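To make the latency argument concrete, the hypothetical sketch below contrasts a conventional pipelined voice stack (separate speech recognition, text LLM, and speech synthesis stages) with a single end-to-end audio-language call. Every name in it (transcribe, generate_text, synthesize, NativeAudioLM) is an illustrative stand-in, not a Liquid AI API.

```python
# Hypothetical sketch (not Liquid AI APIs): a pipelined voice stack versus a
# native audio-language model. The stubs only mark where each stage would sit.

def transcribe(audio: bytes) -> str:          # ASR stage (stub)
    return "transcribed text"

def generate_text(prompt: str) -> str:        # text-only LLM stage (stub)
    return "model answer"

def synthesize(text: str) -> bytes:           # TTS stage (stub)
    return b"synthesized audio"

def pipelined_reply(speech_input: bytes) -> bytes:
    # Three models in sequence: each stage waits for the previous stage's
    # full output, and prosody or emotion cues are lost at the ASR boundary.
    text = transcribe(speech_input)
    answer = generate_text(text)
    return synthesize(answer)

class NativeAudioLM:
    """Stand-in for an end-to-end audio-language model (stub)."""
    def respond(self, audio: bytes, output_modality: str = "speech") -> bytes:
        # One model consumes speech directly and emits speech or text,
        # removing the transcription bottleneck between stages.
        return b"spoken reply"

def native_reply(speech_input: bytes, audio_lm: NativeAudioLM) -> bytes:
    return audio_lm.respond(speech_input, output_modality="speech")
```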
These capabilities translate into private, fast, and always-on intelligence across vehicles, mobile devices, IoT devices, and embedded systems. The models deliver best-in-class results on knowledge, instruction-following, math, and tool-use benchmarks while maintaining blazing inference speed.
The models target developers building applications for edge deployment scenarios, with support for popular inference frameworks including LEAP, llama.cpp, MLX, vLLM, and ONNX. Launch partnerships with AMD and Nexa AI provide optimized performance on NPUs for various hardware platforms.
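As a minimal sketch of what that looks like in practice, the snippet below serves one of the text models through vLLM, one of the listed frameworks. The model identifier LiquidAI/LFM2.5-1.2B-Instruct is an assumed placeholder rather than a confirmed release name; substitute the actual checkpoint you are using.

```python
# Minimal sketch: offline inference with an LFM2.5 text model via vLLM.
# The model id below is a hypothetical placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2.5-1.2B-Instruct")   # placeholder id
params = SamplingParams(temperature=0.3, max_tokens=256)

prompt = "List three considerations when deploying an LLM on an embedded device."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```

The llama.cpp and MLX paths follow the same load-then-generate pattern, using GGUF or MLX-converted weights respectively.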
LFM2.5 is aimed at developers building edge AI applications for vehicles, mobile devices, laptops, IoT devices, and embedded systems who need private, fast, and always-on intelligence on constrained hardware. Specific variants are optimized for Japanese-language applications, audio processing, and vision-language tasks.