The rising tide of open-source AI: Proprietary LLMs vs open-source models
April 3, 2025 / Brett Barton
Short on time? Read the key takeaways:
- Open-source AI models now match proprietary systems at a fraction of the cost, challenging traditional assumptions about AI development and accessibility.
- Innovations like Group Relative Policy Optimization have reduced dependency on human labelers, cutting both time and costs required to produce high-quality models.
- These advancements enable smaller organizations to enter the AI space, encourage specialized model development and open new possibilities for sensitive data applications.
- Organizations should examine the new economics of AI, review their strategies and invest in domain-specific models to capitalize on these opportunities.
The fields of generative AI and large language models (LLMs) have advanced rapidly in recent months. Yet recent achievements, such as building an AI application that rivals leading systems at a fraction of the cost, have challenged previous assumptions about AI development.
Open-source innovations show how technical improvements can democratize AI development, lower cost barriers and drive important industry developments. Their success offers insights for companies seeking to expand their AI capabilities in 2025.
Key advancements changing AI development
Two key developments are changing how companies build and deploy AI systems, disrupting traditional approaches:
Effective, low-parameter open-source models are now widely accessible, nearly matching the performance of proprietary systems. This advancement enables organizations to deploy LLMs on premises at a fraction of the cost, creating opportunities to develop highly efficient, domain-specific models that were previously prohibitively expensive.
Innovations like Group Relative Policy Optimization (GRPO) have significantly reduced the dependency on human labelers (people who manually review, categorize or provide feedback on AI outputs to improve model performance).1 By applying reinforcement learning with automatic, computational validation of outputs, such as in code generation and mathematical modeling, GRPO streamlines the process of refining models. This approach dramatically cuts both the time and cost required to produce high-quality models for code generation and reasoning.
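To make the idea concrete, here is a minimal sketch of GRPO's core mechanic: sample a group of candidate answers per prompt, score each with an automatic verifier instead of a human labeler, and normalize each reward against the group. This is an illustrative simplification, not the full GRPO training loop; the `verify` function and the example prompt are hypothetical stand-ins for a real programmatic checker (e.g., running generated code against tests).

```python
def verify(answer: str, expected: str) -> float:
    """Programmatic reward: 1.0 if the answer matches a checkable result,
    0.0 otherwise. No human review is needed for verifiable tasks."""
    return 1.0 if answer.strip() == expected.strip() else 0.0

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO's key idea: score each sample relative to its own group
    by normalizing rewards to zero mean and unit standard deviation."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0:
        return [0.0] * n  # all samples scored equally: no learning signal
    return [(r - mean) / std for r in rewards]

# Hypothetical example: four sampled answers to "What is 12 * 7?",
# graded automatically against the known result.
samples = ["84", "96", "84", "72"]
rewards = [verify(s, "84") for s in samples]       # [1.0, 0.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)    # correct answers get
                                                   # positive advantage
```

In a full training run, these per-sample advantages would weight the policy-gradient update, so the model is pushed toward answers the verifier accepts and away from those it rejects, with no reward model or labeler in the loop.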
What this means for your industry
The ripple effects of these advancements will likely redefine the AI landscape and influence how organizations adopt AI solutions:
Leveling the playing field in AI development: Historically, the high cost of training foundational models created a “compute moat” favoring players with large capital. Now smaller players can enter AI development without prohibitive upfront costs, helping to build a more competitive ecosystem.
Proliferation of specialized models: With reduced costs and smaller parameter counts, organizations across industries—from healthcare to education—can develop AI tailored to their unique needs. This shift from "one-size-fits-all" solutions empowers companies to focus on specialized use cases with higher ROI.
Re-evaluating agentic AI’s ROI: Lower costs will drive organizations to reconsider agentic AI use cases previously dismissed as too expensive. Areas like customer service, predictive analytics and supply chain optimization will see renewed attention and investment.
Increased compute demand: While these innovations reduce costs, they also make AI adoption more accessible, leading to higher overall consumption of compute resources. Known as the Jevons Paradox2, this phenomenon will sustain demand for GPUs and compute infrastructure as AI initiatives scale.
Application of AI to sensitive data: Many AI use cases involving highly sensitive data were infeasible because access to a high-quality model used to mean sharing that data with a proprietary LLM provider. Now that open-source models rival, and in some cases outclass, contemporary proprietary models, these use cases are ripe to pursue.
A call to action for organizations
The implications of these innovations are profound. To remain competitive, organizations must:
- Embrace the new economics of AI: Understand how reduced costs open new opportunities for innovation.
- Reassess AI strategies: Revisit use cases previously sidelined due to cost or time constraints and assess how the latest advancements can unlock new value streams.
- Invest in domain-specific models: Develop solutions tailored to unique challenges without incurring the overhead of massive, generalized LLMs.
Looking ahead: The future of AI
Recent advancements in AI foreshadow a future where its benefits are more equitably distributed. Lower financial and technical barriers empower businesses of all sizes to harness the transformative capabilities of AI. Organizations that adapt quickly will stay relevant and be well positioned to thrive in this new era of innovation.
American computer scientist Alan Kay is credited with saying, “The best way to predict the future is to invent it.”3 With these groundbreaking advancements, the future of AI has arrived. The time to seize its opportunities is now.
Ready to explore what's possible with AI? Connect with our Unisys team today to turn these innovations into your competitive advantage.
1 AWS Community | Deep dive into Group Relative Policy Optimization (GRPO)
2 Philosophy Terms | Jevons Paradox (https://philosophyterms.com/jevons-paradox/)
3 A-Z Quotes | Top 25 quotes by Alan Kay