He adds, "This is behind the drive to generative AI by the cloud providers. It's a race to determine who owns this space and the ability to de-commoditize their services with this new technology layered on top of more traditional cloud services."

At this early stage in the gen AI race, there's no clear leader, but all the players are pouring resources into the fray. Microsoft, which bankrolled OpenAI to the tune of $10 billion, has embedded ChatGPT features into everything from productivity apps like Word and Excel, to its Edge browser, to a cloud offering aimed at enterprises, the Azure OpenAI Service.

Google is racing to build out its gen AI platform; co-founders Sergey Brin and Larry Page have even come out of semi-retirement to jumpstart the Google AI initiative. Google has its own large language model called PaLM, is building its own AI chips (Tensor Processing Units), and is launching new industry-specific AI-based services under the Vertex AI banner. Most recently, the company launched gen AI-based services aimed at healthcare and life science organizations.

And AWS recently announced Bedrock, a fully managed service that enables enterprise software developers to embed gen AI functionality into their programs. AWS also builds its own low-cost AI chips (Inferentia and Trainium) in limited volumes; the company is using the chips internally to power its gen AI capabilities, and it is making them available to customers.

While generative AI is certainly the hottest trend in the cloud market, there are others that CIOs need to be aware of. Here are the top cloud market trends and how they are impacting CIOs' cloud strategies.

The gen AI gold rush - with little clarity on cost

"It's the year of AI," declares Forrester Research. "Every hyperscaler, SaaS provider, and startup hopes to use the buzz around AI to their advantage. Cloud providers are pushing AI services to break out of sluggish revenue growth and differentiate themselves from rivals.
Enterprise cloud customers are eager to use AI wherever they can for strategic initiatives, but without busting IT budgets already under pressure from multicloud complexity and sprawl."

The Big 3 hyperscalers aren't the only players offering generative AI-based cloud services to enterprise IT. IBM is stepping up with its OpenShift-based watsonx AI platform. And Nvidia, which supplies all of them with the vast majority of their generative AI chips (GPUs), has built its own full-stack cloud platform called DGX Cloud, an AI service that lives inside the Oracle cloud and will soon be available on both Azure and Google Cloud.

For CIOs, this means there will be multiple cloud-based options for building generative AI functionality into existing business processes, as well as for creating new AI-based applications. The challenge, says Bernard Golden, executive technical advisor at VMware, is how to make sure sensitive corporate data is protected and kept out of the general pool that makes up LLM databases.

Linthicum adds that generative AI-based apps will be "costly to run," so "CIOs need to find the proper use cases for this technology." And for CIOs looking to make the most of gen AI capabilities built into the cloud offerings they depend on, initial explanations of how pricing will work have been rather opaque.

Cloud price creep - with leaps thanks to AI

IBM caused quite a stir when it announced price increases for storage services that ranged as high as 26%, as well as smaller price hikes for IaaS and PaaS offerings. Generally speaking, however, cloud providers have held the line on price increases in order to remain competitive. But the slowdown in growth across the industry will likely put pressure on all cloud vendors to hike prices going forward. As Linthicum says, "We're entering the phase of technology when they need to harvest value from their investments.
I would suspect that prices will creep up over the next several years."

Of course, one benefit of using cloud services is that customers can select whatever infrastructure configuration suits their needs. If they choose a first-generation processor, there is value to be had. But for organizations that need high-performance computing, or that are looking to reap the benefits of AI, selecting a newer-model chip comes at a premium. For example, choosing to run your workload on an Nvidia H100 chip versus an earlier-model A100 will result in a price increase of more than 220%, says Drew Bixby, head of operations and product at Liftr Insights. And as the hyperscalers add more GPUs (which are far more expensive than traditional CPUs) to the mix in their own data centers, those costs will likely be passed on to enterprise customers.

Industry clouds ripe for gen AI edge

Industry clouds are on the rise, and they will benefit from the emergence of generative AI, says Brian Campbell, principal at Deloitte Consulting, who explains that industry clouds "tend to be at the forefront of both business and technology executives' agendas." Tech execs like the speed, flexibility, and efficiency that industry-specific clouds provide, and business leaders appreciate the ability to focus scarce internal resources on areas that enable them to differentiate their business. Early adopters of industry clouds were healthcare, banking, and tech companies, but adoption has expanded to energy, manufacturing, the public sector, and media.
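The H100-versus-A100 premium Bixby cites comes down to simple unit-cost arithmetic. A minimal sketch, using hypothetical on-demand hourly rates (real prices vary by cloud provider, region, and instance shape):

```python
# Illustrative GPU cost comparison. The hourly rates below are assumed
# for the sake of the example, not actual cloud list prices.
rates = {"A100": 3.00, "H100": 9.80}  # USD per GPU-hour (hypothetical)

def premium_pct(baseline: str, upgrade: str) -> float:
    """Percentage price increase when moving from baseline to upgrade."""
    return (rates[upgrade] - rates[baseline]) / rates[baseline] * 100

# Monthly cost of a single always-on GPU (~730 hours per month).
HOURS_PER_MONTH = 730
a100_monthly = rates["A100"] * HOURS_PER_MONTH
h100_monthly = rates["H100"] * HOURS_PER_MONTH

print(f"H100 premium over A100: {premium_pct('A100', 'H100'):.0f}%")
print(f"A100: ${a100_monthly:,.2f}/mo  H100: ${h100_monthly:,.2f}/mo")
```

With these assumed rates, the premium works out to roughly 227%, in line with the "more than 220%" figure; the point for budgeting is that the multiplier compounds across every GPU-hour consumed.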
Campbell adds, "With the recent explosion of gen AI, executives are increasingly looking at how to use gen AI beyond proofs of concept, thus turning to the major providers of industry clouds, hyperscalers, independent software vendors, and systems integrators who have been quickly embedding gen AI alongside other technologies in their offerings."

Line between cloud, on-prem blurs

The old paradigm of a clear line of demarcation between cloud and on-prem no longer exists. Many terms apply to this phenomenon of cloud-style services being deployed in a variety of scenarios at once: hybrid cloud, private cloud, multicloud, edge computing, or, as IDC defines it, dedicated cloud infrastructure as a service (DCIaaS).

IDC analyst Chris Kanaracus says, "We increasingly see the cloud as not about a particular location, but rather a general operating model for IT. You can have the cloud anywhere in terms of attributes such as scalability, elasticity, consumption-based pricing, and so on. The challenge moving forward for CIOs is to stitch it all together in a mixed-vendor environment."

For example, AWS offers Outposts, a managed service that enables customers to run AWS services on-premises or at the edge. Microsoft offers a similar service called Azure Stack. Traditional hardware vendors also have as-a-service offerings that can run in data centers or at the edge: Dell Apex and HPE GreenLake.

Increased interoperability as lock-in loses some luster

Competing cloud vendors aren't particularly incentivized to enable interoperation. The business model for cloud providers is to lock in a customer, get them used to that particular vendor's tools, processes, marketplaces, and software development platforms, and then encourage that customer to move ever more resources to their cloud. But enterprise customers have overwhelmingly adopted a multicloud approach, and cloud vendors have been forced to deal with that reality.
For example, Microsoft and Oracle recently launched Oracle Database@Azure, which allows customers to run Oracle's database services on Oracle Cloud Infrastructure (OCI) deployed in Microsoft Azure datacenters. And storage leader NetApp recently announced a fully managed service that enables customers to seamlessly bring business-critical workloads across both Windows and Linux environments to Google Cloud without refactoring code or redesigning processes. As these barriers to interoperability come down, enterprises will benefit by being able to move storage volumes and applications to the most appropriate cloud platform.

Rise of the citizen developer

There has always been a tension between traditional IT and so-called shadow IT. The emergence of low-code and no-code solutions has made it easier for non-IT staffers to build simple applications. For example, Microsoft's Power Platform enables the creation of mobile and web apps that can interact with business tools. But ChatGPT has blown any technical constraints out of the water: with Microsoft's Copilot, end users can write content and create code with a simple prompt.

For IT leaders, this can be a double-edged sword. It's beneficial to the organization if employees can boost their productivity by creating new tools and software programs. But Golden points out that tools like Copilot are "great until they're not great." In other words, the simple, one-off applications created by citizen developers can create security risks, aren't built to scale, and don't necessarily interoperate with complex business processes.

FinOps gains traction and tools

During the pandemic, there was a "mad dash" of enterprises shifting workloads to the cloud in order to make them more easily accessible to remote workers. "Now they are getting the big bills," Linthicum says. As a result, organizations are adopting FinOps technology to manage and optimize cloud costs.
Linthicum says that FinOps enables organizations to reduce technical debt and "drive more cost savings by normalizing the use of cloud resources. In essence, it fixes mistakes that were made in the past, such as the use of the wrong cloud services, too much data movement, etc."

Forrester researchers concur, noting that "whenever economic headwinds hit, IT cost optimization gains momentum. For cloud cost management, high interest hit in 2018 and once again this year." The good news for IT is that all of the major cloud providers offer FinOps services, and there is a slew of third-party software vendors offering cloud cost management tools.
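The kind of past mistakes Linthicum describes are typically surfaced through simple utilization analysis. A minimal FinOps-style sketch, using entirely hypothetical instance data, that flags oversized resources and estimates the savings from rightsizing them:

```python
# Minimal FinOps-style rightsizing sketch. The inventory and utilization
# figures below are hypothetical; in practice this data would come from a
# cloud provider's billing and monitoring APIs or a FinOps tool.

HOURS_PER_MONTH = 730

# (instance_id, hourly_cost_usd, avg_cpu_utilization_pct) -- all assumed
inventory = [
    ("web-01", 0.40, 62.0),
    ("batch-07", 1.60, 8.5),
    ("gpu-train-02", 9.80, 4.0),
]

def rightsizing_candidates(instances, util_threshold=15.0):
    """Flag instances below the utilization threshold and estimate monthly
    savings if each were downsized to an instance at half the cost."""
    report = []
    for instance_id, hourly_cost, util in instances:
        if util < util_threshold:
            monthly_savings = hourly_cost * HOURS_PER_MONTH * 0.5
            report.append((instance_id, round(monthly_savings, 2)))
    return report

for instance_id, savings in rightsizing_candidates(inventory):
    print(f"{instance_id}: downsizing could save ~${savings:,.2f}/month")
```

Real FinOps tooling layers tagging, anomaly detection, and commitment planning on top of this basic idea, but the core loop of measuring utilization against spend is the same.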