Morgan Stanley released a comprehensive research report on April 20, 2026, detailing a fundamental shift in semiconductor demand driven by the rapid evolution of agentic AI. The firm’s analysts, led by Managing Director Joseph Moore, stated that while graphics processing units (GPUs) remain the primary engine for large language model training, the transition toward autonomous agents is necessitating a significant rebalancing of data center architecture. These AI agents, designed to perform multi-step reasoning and execute tasks across various software environments, require a higher degree of serial processing power typically provided by central processing units (CPUs).
According to the report, titled "The Agentic Era: Rebalancing the Silicon Stack," the capital expenditure mix in hyperscale data centers is projected to undergo a structural change. Morgan Stanley estimates that by 2028, the GPU share of processor spending will decline from its 2024-2025 peak of roughly 85 percent toward a more balanced 70/30 or 65/35 GPU-to-CPU split. This adjustment is attributed to the orchestration layer of agentic AI, which functions as a control center that manages complex workflows, executes API calls, and handles decision-making loops. These tasks are logic-heavy and benefit from the high clock speeds and sophisticated branch prediction of modern CPUs.
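To see why this orchestration work is serial and branch-heavy, consider a minimal sketch of an agent control loop. The tool names, the decision rule, and the task are hypothetical illustrations, not anything from the Morgan Stanley report; the point is that each step depends on the result of the previous one, so the workload is dominated by sequential branching rather than the parallel arithmetic GPUs excel at.

```python
# Illustrative sketch of an agent orchestration loop (hypothetical example).
# Real agents would call an LLM to choose the next action; a stand-in rule
# is used here so the control-flow shape is visible.
from typing import Callable


def lookup_inventory(query: str) -> str:
    """Hypothetical tool: stands in for an inventory API call."""
    return f"inventory:{query}"


def send_report(body: str) -> str:
    """Hypothetical tool: stands in for posting a report."""
    return f"sent:{body}"


TOOLS: dict[str, Callable[[str], str]] = {
    "lookup": lookup_inventory,
    "report": send_report,
}


def run_agent(task: str, max_steps: int = 10) -> list[str]:
    """Serial decision loop: pick a tool, call it, branch on the result.

    Each iteration depends on the previous result, so the work cannot be
    parallelized across steps -- this is the CPU-friendly control plane.
    """
    trace: list[str] = []
    state = task
    for _ in range(max_steps):
        # Branchy decision logic: choose the next tool from current state.
        action = "lookup" if "inventory" not in state else "report"
        state = TOOLS[action](state)
        trace.append(state)
        if action == "report":  # terminal action: task is complete
            break
    return trace


if __name__ == "__main__":
    print(run_agent("check stock of part 42"))
```

Even in this toy form, the loop's data dependencies between iterations and its per-step branching mirror the "control-plane traffic" the report describes.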
The analysts noted that agentic AI requires substantial pre-processing and post-processing logic that GPUs are not architecturally optimized to manage. Moore highlighted that as AI agents begin to interact more frequently with legacy enterprise software and external databases, the demand for high-performance x86 and ARM-based server CPUs will accelerate. The report specifically identifies that next-generation server platforms, such as Intel’s Clearwater Forest and AMD’s Turin-based EPYC processors, are being integrated into data center designs to handle the increased control-plane traffic generated by autonomous agents.
Morgan Stanley’s data suggests the total addressable market for server CPUs within AI-centric data centers could reach 45 billion dollars by 2028, an upward revision from previous estimates of 34 billion dollars. The firm also emphasized the role of custom silicon, noting that cloud service providers are increasingly deploying ARM-based processors such as Amazon’s Graviton4 and Microsoft’s Cobalt to manage the power-sensitive demands of agentic orchestration. The report indicates that these chips provide the necessary power efficiency for the always-on nature of background AI agents.
The report concludes that the initial phase of the AI build-out, which focused almost exclusively on raw compute for training, is maturing. As the industry moves toward functional agents that execute end-to-end business processes, the underlying hardware must evolve to support the logic-intensive requirements of these systems. Morgan Stanley expects this trend to provide a sustained tailwind for CPU manufacturers and designers who can bridge the gap between traditional compute and accelerated AI workloads.