More than ever, enterprises are grappling with a hybrid IT estate spread across public cloud, on-premises, and edge computing. This poses significant challenges in terms of standardizing security, delivery, and operations across disparate environments.
Against this ever-changing backdrop, what are they key trends to look out for in 2025? We assembled an elite team of F5 experts to learn more.
2025 Technology #1: WebAssembly
WebAssembly (Wasm) offers a path to portability across the hybrid multicloud estate, delivering the ability to deploy and run applications anywhere a Wasm runtime can operate.
But Wasm is more than just a manifestation of the promise for cross-portability of code. It offers performance and security-related benefits while opening new possibilities for enriching the functionality of browser-based applications.
In 2025, WebAssembly in the browser isn’t expected to undergo drastic changes. The main developments are happening outside of the browser with the release of WASI (WebAssembly System Interface) Preview 3. This update introduces async and streams, solving a major issue with streaming data in various contexts, such as proxies. WASI Preview 3 provides efficient methods for handling data movement in and out of Wasm modules and enables fine-tuned control over data handling.
Additionally, the introduction of async will enhance composability between languages, allowing for seamless interactions between async and sync code, especially beneficial for Wasm-native languages. As WASI standards stabilise, we can expect a significant increase in Wasm adoption, providing developers with robust tooling and a reliable platform for building on these advancements.
Assuming Wasm can solve some of the issues inherent in previous technologies, it would shift the portability problems 95% of organizations struggle with today to other critical layers of the IT tech stack, such as operations.
Racing to meet that challenge is generative AI and the increasingly real future that is AIOps. This fantastical view of operations—changes and policies driven by AI-based analysis informed by full-stack observability—is closer to reality everyday thanks to the incredible evolutionary speed of generative AI.
Oscar Spencer, Principal Engineer, F5
2025 Technology #2: Agentic AI
Autonomous coding agents are poised to revolutionise software development by automating key tasks such as code generation, testing, and optimisation. These agents will significantly streamline the development process, reducing manual effort and speeding up project timelines. Meanwhile, the emergence of Large Multimodal Agents (LMAs) will extend AI capabilities beyond text-based search to more complex interactions.
As AI agents reshape the internet, we will see the development of agent-specific browsing infrastructure, designed to facilitate secure and efficient interactions with websites. This could disrupt industries like e-commerce by automating complex web tasks, leading to more personalised and interactive online experiences.
However, as these agents become more integrated into daily life, new security protocols and regulations will be essential to manage concerns related to AI authentication, data privacy, and potential misuse.
By 2028, it is expected that a significant portion of enterprise software will incorporate AI agents, transforming work processes and enabling real-time decision-making through faster token generation in iterative workflows. This evolution will also lead to the creation of new tools and platforms for agent-driven web development.
The truth is that to fully exploit the advantages of AI, you need data— and a lot of it. That’s a significant challenge given that nearly half (47%) of organisations admit to having no data strategy for AI in place. The amount of data held by an organisation—structured, unstructured, and real-time metrics—is mind-boggling. Simply cataloguing that data requires a significant investment.
Laurent Quérel, F5 Distinguished Engineer
2025 Technology #3: Data classification
Roughly 80% of enterprise data is unstructured. Looking ahead, generative AI models will become the preferred method for detecting and classifying unstructured enterprise data, offering accuracy rates above 95%. These models will become more efficient over time, requiring less computational power and enabling faster inference times. Solutions like Data Security Posture Management (DSPM), Data Loss Prevention (DLP), and Data Access Governance will increasingly rely on sensitive data detection and classification as a foundation for delivering a range of security services. As network and data delivery services converge, platform consolidation will drive vendors to enhance their offerings, aiming to capture market share by providing comprehensive, cost-effective, and easy-to-use platforms that meet evolving enterprise needs.
The shared desire across organisations to harness generative AI for everything from productivity to workflow automation to content creation is leading to the introduction of a new application architectural pattern as organisations begin to deploy AI capabilities. This pattern expands the traditional three tiers of focus—client, server, and data—to incorporate a new AI tier, where inferencing is deployed.
James Hendergart, Sr. Dir. Technology Research, F5
2025 Technology #4: AI gateways
AI gateways are emerging as the natural evolution of API gateways, specifically tailored to address the needs of AI applications. Similar to how Cloud Access Security Brokers (CASBs) specialise in securing enterprise SaaS apps, AI gateways will focus on unique challenges like hallucinations, bias, and jailbreaking, which often result in undesired data disclosures. As AI applications gain more autonomy, gateways will also need to provide robust visibility, governance, and supply chain security, ensuring the integrity of the training datasets and third-party models, which are now potential attack vectors.
Additionally, as AI apps grow, issues like distributed denial-of-service (DDoS) attacks and cost management become critical, given the high operational expense of AI applications compared to traditional ones. Moreover, increased data sharing with AI apps for tasks like summarisation and pattern analysis will require more sophisticated data leakage protection.
In the future, AI gateways will need to support both reverse and forward proxies, with forward proxies playing a critical role in the short term as AI consumption outpaces AI production. Middle proxies will also be essential in managing interactions between components within AI applications, such as between vector databases and large language models (LLMs).
The changing nature of threats will also require a shift in how we approach security. With many clients becoming automated agents acting on behalf of humans, the current bot protection models will evolve to discriminate between legitimate and malicious bots. AI gateways will need to incorporate advanced policies like delegated authentication, behavioural analysis, and least privilege enforcement, borrowing from zero trust principles. This will include risk-aware policies and enhanced visibility, ensuring that AI-driven security breaches are contained effectively while maintaining robust governance.
Most pressing are the ability to not only address traditional security concerns around data (exfiltration, leakage) but ethical issues with hallucinations and bias. No one is surprised to see the latter ranked as significant risks in nearly every survey on the subject.
Ken Arora, F5 Distinguished Engineer
2025 Technology #5: Small Language Models
Given the issues with hallucinations and bias, it would be unthinkable to ignore the growing use of retrieval-augmented generation (RAG) and Small Language Models (SLMs). RAG has rapidly become a foundational architecture pattern for generative AI..
Organisations not already integrating retrieval augmented generation (RAG) into their AI strategies are missing out on significant improvements in data accuracy and relevancy, especially for tasks requiring real-time information retrieval and contextual responses. But as the use cases for generative AI broaden, organizations are discovering that RAG alone cannot solve some problems.
The growing limitations of LLMs, particularly their lack of precision when dealing with domain-specific or organisation-specific knowledge, are accelerating the adoption of small language models. While LLMs are incredibly powerful in general knowledge applications, they often falter when tasked with delivering accurate, nuanced information in specialised fields. This gap is where SLMs shine, as they are tailored to specific knowledge areas, enabling them to deliver more reliable and focused outputs. Additionally, SLMs require significantly fewer resources in terms of power and computing cycles, making them a more cost-effective solution for businesses that do not need the vast capabilities of an LLM for every use case.
SLMs currently tend to be industry-specific , often trained on sectors such as healthcare or law. Although these models are limited to narrower domains, they are much more feasible to train and deploy than LLMs, both in terms of cost and complexity. As more organizations seek solutions that better align with their specialized data needs. SLMs are expected to replace LLMs in situations where retrieval-augmented generation methods alone cannot fully mitigate hallucinations. Over time, we anticipate that SLMs will increasingly dominate use cases where high accuracy and efficiency are paramount, offering organizations a more precise and resource-efficient alternative to LLMs.
Lori MacVittie, F5 Distinguished Engineer
Looking ahead: beyond transformers
Transformer models, while powerful, have limitations in scalability, memory usage, and performance, especially as the size of AI models increases.
As a result, a new paradigm is emerging: converging novel neural network architectures with revolutionary optimization techniques that promise to democratise AI deployment across various applications and devices.
The AI community is already witnessing early signs of post-transformer innovations in neural network design. These new architectures aim to address the fundamental limitations of current transformer models while maintaining or improving their remarkable capabilities in understanding and generating content.
Among the most promising developments is the emergence of highly optimized models, particularly 1-bit large language models. These innovations offer dramatic reductions in memory requirements and computational overhead while maintaining model performance despite reduced precision.
The impact of these developments will cascade through the AI ecosystem. Models that once demanded substantial computational resources and memory will operate efficiently with significantly lower overhead. This optimisation will trigger a shift in computing architecture, with GPUs potentially becoming specialized for training and fine-tuning tasks while CPUs handle inference workloads with newfound capability.
These changes will catalyse a second wave of effects centred on democratisation and sustainability. As resource requirements decrease, AI deployment will become accessible to various applications and devices. Furthermore, infrastructure costs will drop substantially, enabling edge computing capabilities that were previously impractical. Simultaneously, the reduced computational intensity will yield environmental benefits through lower energy consumption and a smaller carbon footprint, making AI operations more sustainable.
These developments will enable unprecedented capabilities in edge devices, improvements in real-time processing, and cost-effective AI integration across industries. The computing landscape will evolve toward hybrid solutions that combine different processing architectures optimised for specific workloads, creating a more efficient and versatile AI infrastructure.
Kunal Anand, Chief Innovation Officer
Image Credit: F5