Oracle announced new Oracle Cloud Infrastructure (OCI) distributed cloud innovations to help meet the rapidly growing global demand for its AI and cloud services. The latest innovations span Oracle Database@AWS, Oracle Database@Azure, Oracle Database@Google Cloud, OCI Dedicated Region, and OCI Supercluster. With Oracle’s distributed cloud, customers can deploy OCI’s 150+ AI and cloud services at the edge, in their own datacenters, across clouds, or in the public cloud, helping them address data privacy, sovereign AI, and low-latency requirements. In addition, Oracle Cloud is available in more locations than any other hyperscaler, with 85 regions live and 77 planned.
“Our priority is giving customers the choice and flexibility to leverage cloud services in the model that makes the most sense for their business,” said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. “With OCI’s distributed cloud capabilities, we’re helping customers deploy a dedicated cloud in a small, scalable footprint, build applications with the best services across cloud providers, and deploy AI infrastructure anywhere they want. This flexibility helps our customers address their unique needs and support their cloud investments in delivering significant business value.”
Oracle Offers Largest AI Supercomputer in the Cloud
OCI is now taking orders for the largest AI supercomputer in the cloud—available with up to 131,072 NVIDIA Blackwell GPUs—delivering an unprecedented 2.4 zettaFLOPS of peak performance. The maximum scale of OCI Supercluster offers more than three times as many GPUs as the Frontier supercomputer and more than six times that of other hyperscalers. OCI Supercluster includes OCI Compute Bare Metal, ultra-low latency RoCEv2 with ConnectX-7 NICs and ConnectX-8 SuperNICs or NVIDIA Quantum-2 InfiniBand-based networks, and a choice of HPC storage.
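As a rough sanity check (not part of the announcement), the quoted peak implies on the order of 18 petaFLOPS per GPU, which lines up with Blackwell's sparse, low-precision (FP4) peak throughput; the precision and sparsity basis is an assumption, since the announcement does not state how the 2.4 zettaFLOPS figure is measured:

\[
\frac{2.4 \times 10^{21}\ \text{FLOPS}}{131{,}072\ \text{GPUs}} \approx 1.83 \times 10^{16}\ \text{FLOPS per GPU} \approx 18.3\ \text{petaFLOPS per GPU}
\]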
New OCI AI Infrastructure Enables Training and Inferencing for AI Sovereignty
Oracle and NVIDIA are delivering accelerated computing and generative AI services that help organizations build and maintain sovereign AI models in their country or retain AI-generated data with strong data residency controls.
OCI Compute instances based on NVIDIA L40S GPUs, NVIDIA Hopper architecture GPUs, and the NVIDIA Blackwell platform are now orderable to support sovereign AI deployments. NVIDIA Omniverse, a platform for developing OpenUSD applications for industrial digitalization and generative physical AI, is validated on OCI and available to support many developer use cases, including sovereign AI.
New OCI Dedicated Region25 Configuration Brings AI and Cloud Services to More Customers
A new OCI Dedicated Region configuration, Dedicated Region25, will be available in a smaller, scalable size starting at just three racks and deployable within weeks. Dedicated Region25 has a 75 percent smaller launch footprint and simplified datacenter requirements, supports OCI’s 150+ AI and cloud services, and allows a wider range of customers to gain the agility, economics, and scale of the public cloud in their own datacenters. The new configuration will be available in the next calendar year.
Oracle Expands Multicloud Capabilities with Three Groundbreaking Hyperscaler Partnerships
OCI enables customers to combine cloud services from multiple clouds to optimize cost, functionality, and performance. With Oracle Database@AWS, Oracle Database@Azure, and Oracle Database@Google Cloud, customers gain direct access to Oracle Database services running on OCI and deployed directly in the datacenters of the four largest hyperscalers. Through these partnerships, customers have the flexibility to run their applications across clouds and combine the benefits of Oracle Database services with services from other cloud providers for a seamless multicloud experience. In addition, customers benefit from the simplicity, security, and low latency of a unified operating environment.
In addition, to help customers accelerate AI innovation and leverage cloud services wherever they choose, Oracle has introduced the following distributed cloud innovations:
OCI Roving Edge Infrastructure enhancements support remote AI inferencing: New versions of the OCI Roving Edge Device, including a new three-GPU option optimized for AI, are now generally available to help customers run AI inferencing at the edge. This enables customers to manage mission-critical data at the edge and run domain-specific LLMs or computer vision models even in remote or disconnected locations. OCI Roving Edge Infrastructure consists of multiple configurations of ruggedized, portable, high-performance devices weighing less than 35 lb. (without the case), equipped with 56 cores (102 virtual CPUs), 512 GB of RAM, and up to 123 TB of storage.
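The announcement does not include code, but as an illustration of what disconnected, on-device LLM inferencing could look like on such hardware, here is a minimal sketch using the open-source llama-cpp-python bindings. This is not an Oracle API; the model file path, prompt, and GPU-offload settings are hypothetical.

```python
# Illustrative only: local LLM inference with no network dependency,
# using the open-source llama-cpp-python bindings (not an Oracle SDK).
from llama_cpp import Llama

# Hypothetical GGUF model file staged on the device's local storage.
llm = Llama(
    model_path="/data/models/domain-llm-q4.gguf",  # hypothetical path
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the on-board GPUs if available
)

prompt = "Summarize today's sensor anomalies in two sentences:\n..."
result = llm(prompt, max_tokens=128, temperature=0.2)

print(result["choices"][0]["text"])
```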
HeatWave delivers new GenAI and multicloud capabilities: New HeatWave capabilities and innovations help organizations easily and securely take advantage of generative AI in both OCI and Amazon Web Services (AWS). Additional new capabilities help customers quickly and securely implement lakehouse and machine learning applications for a wider variety of use cases, as well as improve the performance and manageability of transactional applications.
New OCI Generative AI innovations help customers turn data into a competitive advantage: OCI Generative AI (GenAI) Agents with retrieval-augmented generation (RAG) capabilities and enhanced Oracle AI innovations are now generally available. The new capabilities help customers turn their data into a competitive advantage by making it easier to apply AI to real-world business operations.
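To make the retrieval-augmented generation pattern behind these agents concrete, the following is a schematic, self-contained sketch of the RAG flow itself (retrieve relevant documents, augment the prompt, then generate). It is not the OCI Generative AI Agents API: the corpus, the toy bag-of-words scoring, and the generate() stub are illustrative placeholders for a real embedding model, vector store, and hosted LLM endpoint.

```python
# Schematic RAG sketch: retrieve -> augment -> generate.
# Not the OCI Generative AI Agents API; all components are placeholders.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real agent would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Placeholder for a hosted LLM call (e.g., a GenAI inference endpoint).
    return f"[LLM would answer here, grounded in the prompt]\n{prompt}"

corpus = [
    "Invoice 1042 was paid on March 3 by ACME Corp.",
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping to EU destinations takes 5-7 business days.",
]
question = "When was invoice 1042 paid?"
context = "\n".join(retrieve(question, corpus))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```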