This study, conducted by Sopra Steria, evaluates CompactifAI, a compression method developed by Multiverse Computing, as applied to the large language model Llama 3.1 8B. The evaluation measured model efficiency (in terms of energy consumption) with the CodeCarbon framework and model accuracy with the Ragas framework, comparing the CompactifAI-compressed model against its full-size counterpart. The findings reveal that the compressed model significantly reduced computational resource usage while maintaining accuracy, making it more efficient, scalable, and cost-effective.