
Why the Network Is the Deciding Factor Between Public Cloud and Repatriation


This guest post was contributed by Cameron Daniel, CTO of Megaport



For more than two decades, alongside the growth of modern SaaS and elastic compute, the public cloud was seen as the inevitable destination for enterprise workloads. The conversation has since evolved into a more nuanced discussion about placement. Many IT leaders are now looking at their cloud environments through a more critical lens, weighing the benefits of hyperscale agility against the security of private infrastructure.


This has led to a rise in cloud repatriation: the process of moving workloads from public cloud environments back to on-premises data centers, hosted private clouds, or bare metal infrastructure. Cloud repatriation signals a market correction. Organizations are moving from a "cloud-first" strategy toward a "cloud-smart" one, where the goal is matching each workload to the environment that best supports its technical and financial requirements. The question is: how do they decide whether repatriation is the right choice for their business?


The Catalysts for Change


The decision to stay in the cloud or repatriate to private infrastructure is rarely motivated by a single factor. While cost management remains a top concern, the issues triggering these conversations about cloud architecture have evolved far beyond cost cutting. Many organizations feel a greater sense of ownership when their information lives in private ecosystems or on bare metal infrastructure. These alternatives offer a level of control that can be harder to achieve within the abstracted layers of a public cloud.


However, the public cloud remains a top choice for organizations that prioritize rapid global expansion, higher-level managed services, and the ability to experiment with new technologies without significant upfront spend. The choice depends on whether an organization prioritizes the flexibility of the cloud or the depth of visibility that private hardware provides.


While some public providers have introduced "sovereign cloud" models to address industry security concerns, the public-versus-private debate is accelerating in the current geopolitical climate. Enterprises based outside the US are looking to increase their reliance on local and regional cloud providers, especially in Europe and across Asia, where data sovereignty rules and regulatory frameworks demand stricter data locality and auditability. As security and data ownership move to the forefront for CEOs and CIOs, far more than they did 20 years ago, the complexity and sprawl of cloud environments make those boxes harder to tick than the control of private hardware does.


The Complexity of Infrastructure Management


One of the most common challenges for IT leadership is not choosing between environments, but managing the trade-offs between them. Many IT leaders see the cost overruns, compliance complexities, and infrastructure sprawl of their cloud estates and see a clear business case for repatriation. Yet what catches them off guard are the practical obstacles that surface when they actually attempt the migration.


Both public and private paths introduce unique layers of complexity. As security and data sovereignty move to the top of corporate strategy, the sprawl of a multicloud or hybrid environment can make it difficult to maintain compliance and visibility regardless of where the data resides.


In public cloud environments, the underlying network often fades into the background. Users become accustomed to a level of abstracted reachability where capacity is managed by the provider. But the moment an organization begins to connect multiple clouds, or attempts to bridge a cloud environment back to a data center, network capacity must be top of mind again. This transition from a "hands-off" network experience back to active infrastructure management is a steep learning curve for teams used to the cloud's built-in automation.


The Challenges of Capacity and Provisioning


The most overlooked obstacle in the repatriation journey is the return to server and network capacity planning. When operating in a public cloud, overprovisioning is a financial risk but rarely a functional one, because the resource pool is almost bottomless. In a private environment, the stakes are higher. IT teams must be able to size infrastructure appropriately upfront, forecast long-term growth, and maintain sufficient buffers without wasting capital on idle hardware.
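
A rough sense of that arithmetic helps make the trade-off concrete. The sketch below uses entirely hypothetical inputs; it simply shows the kind of sizing calculation a team inherits once it leaves the elastic pool: provision for the forecast peak at the end of the hardware's life, plus a safety buffer.

```python
# Back-of-envelope private-capacity sizing. All inputs are hypothetical
# placeholders; substitute figures from your own workload inventory.

def required_capacity(peak_demand: float, annual_growth: float,
                      planning_horizon_years: float, buffer: float) -> float:
    """Capacity to provision now so peak demand still fits at end of horizon.

    peak_demand            -- current peak (e.g., vCPUs, Gbps, or TB)
    annual_growth          -- forecast growth rate per year (0.2 == 20%)
    planning_horizon_years -- how long the hardware must last
    buffer                 -- safety headroom on top of the forecast (0.25 == 25%)
    """
    forecast_peak = peak_demand * (1 + annual_growth) ** planning_horizon_years
    return forecast_peak * (1 + buffer)

# Example: 400 vCPUs of peak demand, 20% annual growth, a 3-year refresh
# cycle, and 25% headroom -> roughly 864 vCPUs to rack on day one.
print(round(required_capacity(400, 0.20, 3, 0.25)))
```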


In addition, sourcing the right equipment and securing enough bandwidth from fiber providers can be unexpectedly challenging. Many teams discover that they lack the necessary network infrastructure to support a high-volume migration back to the data center. Before any data begins to move, an organization must have an understanding of its bandwidth requirements and the lead times associated with physical hardware procurement. For those staying in the cloud, the challenge is ensuring that the ease of configuration does not lead to runaway costs.
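
Bandwidth arithmetic is a good illustration of why this matters. The following back-of-envelope sketch, using assumed figures (the data volume, circuit size, and 70% effective-utilization discount are all placeholders, not measurements), estimates how long a bulk migration actually takes over a given link:

```python
# Rough bulk-transfer time for a repatriation migration. Figures are
# illustrative; real throughput depends on protocol overhead, storage
# read rates, and how much of the link the migration may consume.

def transfer_days(data_tb: float, link_gbps: float,
                  usable_fraction: float = 0.7) -> float:
    """Days to move data_tb terabytes over a link_gbps circuit.

    usable_fraction discounts protocol overhead and contention with
    production traffic (0.7 is an assumed, not measured, value).
    """
    bits = data_tb * 1e12 * 8                  # TB -> bits
    effective_bps = link_gbps * 1e9 * usable_fraction
    return bits / effective_bps / 86_400       # seconds -> days

# Example: 500 TB over a 10 Gbps circuit at 70% effective utilization
# works out to roughly 6.6 days of continuous transfer.
print(round(transfer_days(500, 10), 1))
```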


The Mapping Mandate


Whether an enterprise is moving to the cloud for the first time or planning a move back to on-premises, the first step should always be a strict period of evaluation. This starts with an honest and comprehensive inventory: 77% of IT teams lack full visibility across on-prem and cloud environments, operating without a clear roadmap of their workloads, dependencies, and data flows. Yet understanding actual compute use is essential to making an informed repatriation decision.
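
What that inventory looks like will vary by organization, but even a minimal structured record per workload forces the right questions. The sketch below is illustrative only; the field names are assumptions, not a standard schema:

```python
# A minimal sketch of the inventory artifact a mapping exercise should
# produce. Field names and sample values are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    environment: str                 # e.g., "aws", "azure", "on-prem"
    peak_vcpus: int
    avg_utilization: float           # 0.0-1.0, taken from monitoring data
    depends_on: list[str] = field(default_factory=list)
    data_flows_gb_per_day: dict[str, float] = field(default_factory=dict)

inventory = [
    Workload("billing-db", "aws", peak_vcpus=64, avg_utilization=0.78,
             depends_on=["auth-svc"],
             data_flows_gb_per_day={"reporting": 120.0}),
    Workload("reporting", "aws", peak_vcpus=32, avg_utilization=0.15,
             depends_on=["billing-db"]),
]

# Even this crude map surfaces the questions that matter: what breaks if
# billing-db moves, and what does 120 GB/day of data flow cost en route?
for w in inventory:
    print(w.name, w.environment, w.depends_on)
```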


Once the inventory is complete, the focus needs to shift to total cost of ownership (TCO). A common mistake in both cloud migrations and repatriation projects is focusing solely on the monthly provider bill or the hardware invoice. A true TCO analysis also accounts for new tooling, compliance audits, and the significant expense of the migration process itself, whether the workload is moving back on-prem or to another provider. Perhaps most importantly, it must include the cost of the people involved.
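
A toy comparison, with deliberately made-up numbers, shows which line items belong in that sum beyond the provider bill and the hardware invoice:

```python
# Hypothetical TCO comparison over a 3-year horizon. Every figure below
# is a placeholder; the point is which line items belong in the sum, not
# the numbers themselves.

HORIZON_YEARS = 3

cloud_tco = (
    54_000 * 12 * HORIZON_YEARS      # monthly provider bill
    + 40_000 * HORIZON_YEARS         # cloud tooling and licenses
    + 30_000 * HORIZON_YEARS         # compliance and audit overhead
)

onprem_tco = (
    900_000                          # hardware, amortized over the horizon
    + 120_000                        # one-time migration project
    + 60_000 * HORIZON_YEARS         # colo space, power, connectivity
    + 150_000 * HORIZON_YEARS        # staff time to run the platform
    + 25_000 * HORIZON_YEARS         # tooling and compliance on-prem
)

print(f"cloud:   ${cloud_tco:,}")
print(f"on-prem: ${onprem_tco:,}")
```

Note that the staff line is often the largest swing factor in a real analysis, which is exactly why the people side of the equation, covered next, cannot be treated as an afterthought.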


The Human Element and Internal Skills


Infrastructure strategy is not purely a technical project. The success of any environment depends on whether the internal team has the skills to manage it. Over the last decade, many IT departments have shifted their hiring toward cloud-native skill sets. Companies that haven't traditionally hired software engineers are doing so now, giving teams considerable autonomy to manage their own resources. While this drives innovation, it can also erode the foundational networking and hardware management skills required to run private infrastructure efficiently. It can also lead to more sprawl, an issue that network and internal IT teams must learn to manage.


If an enterprise lacks the internal expertise to execute a large-scale migration or manage complex on-premises stacks, the project can quickly spiral out of control. Any CIO considering a major shift should never underestimate the people side of the equation. It is vital to ensure that the necessary skills are in-house and that those skills are tested through trial runs before any major transition. 


Identifying the Best Fit for Each Workload


Not every workload belongs in a private data center, and not every workload belongs in the cloud. The cloud remains an excellent tool for specific use cases, particularly those that require rapid scaling, have highly unpredictable traffic patterns, or benefit from specific AI and machine learning services offered by hyperscalers.


The best candidates for repatriation are workloads with predictable resource requirements and consistent utilization. A thorough mapping exercise often reveals that fewer workloads actually use the cloud's scaling functionality than initially assumed. If a workload sits at a steady state of consumption year-round, the premium paid for that elasticity may not be needed. While static workloads are a prime target for a more controlled private environment, unpredictable, high-capacity workloads should likely remain in the cloud to leverage its inherent flexibility.
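
One simple way to screen for those candidates is to look at both the level and the variability of utilization. The thresholds and sample data below are illustrative assumptions, not industry standards:

```python
# A rough screen for repatriation candidates: workloads whose utilization
# is both high and steady. Thresholds and sample data are illustrative.

from statistics import mean, stdev

def is_repatriation_candidate(hourly_util: list[float],
                              min_avg: float = 0.5,
                              max_cv: float = 0.2) -> bool:
    """True when average utilization is high and variability is low.

    max_cv bounds the coefficient of variation (stdev / mean); a steady
    workload barely moves, so the premium for elastic scaling buys little.
    """
    avg = mean(hourly_util)
    cv = stdev(hourly_util) / avg if avg > 0 else float("inf")
    return avg >= min_avg and cv <= max_cv

steady = [0.62, 0.65, 0.60, 0.63, 0.64, 0.61]   # e.g., an internal ERP
bursty = [0.05, 0.10, 0.90, 0.08, 0.85, 0.07]   # e.g., campaign traffic

print(is_repatriation_candidate(steady))   # True  -> private candidate
print(is_repatriation_candidate(bursty))   # False -> stays in the cloud
```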


A Market Correction Toward Balance


Is this movement toward repatriation a sign of the cloud's decline? Likely not. Instead, it represents a maturation of the market. The initial wave of cloud adoption was the result of an industry-wide push to modernize legacy infrastructure, further fueled by the broad optimism around innovation and cost savings that often accompanies major tech shifts. While earlier migrations were sometimes rushed or reactive, the current change reflects a more measured, balanced approach to infrastructure and network capacity.


Enterprises are no longer moving to the cloud because it is the "default" choice; they are making strategic decisions based on where a specific workload runs most efficiently, securely, and cost-effectively. The move toward a hybrid model in which repatriation is a viable option suggests that enterprises are reclaiming their autonomy and recognizing that the cloud is a tool, not a mandatory destination. By focusing on data sovereignty, rigorous capacity planning, and the development of internal talent, organizations can build a strategy that truly serves their business goals rather than one dictated by a trend or hype wave.


Success requires a return to the fundamentals of networking and a clear-eyed view of what it truly takes to manage an environment, regardless of where the servers are located. 


About the author: 

As one of Megaport’s founding engineers and then VP of Technology, Cameron has played an integral role in the development of Megaport’s market-leading networking solutions. With 15 years of experience in the telecommunications industry, he has held prior positions at PIPE Networks, TPG, and Telia Carrier (now Arelion). Cameron’s extensive industry experience and strong technical acumen have given him a leading edge in building and managing teams to develop and deliver meaningful, innovative products to market.
