Unlocking the Future of AI: NVIDIA H100 GPU Instances on Microsoft Azure with Anjuna Seaglass

Bobbie Chen
Senior Product Manager
Published on Oct 17, 2024

Microsoft Azure recently made a major announcement: the general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs. New Azure VM types and sizes are released often, but this release is especially significant: it brings groundbreaking capabilities for confidential high-performance AI and machine learning use cases.

In this post, we will dive into the significance of NVIDIA Confidential GPU availability on Microsoft Azure, how it transforms AI and machine learning, and how Anjuna supports this evolution with our Universal Confidential Computing solutions for enterprise and government customers.

What Makes the H100 GPU Special?

The NVIDIA H100 GPU is an incredibly powerful chip for number-crunching workloads, including generative AI, machine learning, and high-performance computing. The H100 has over 14,000 CUDA Cores (more than double that of the A100) and 456 upgraded fourth-generation Tensor Cores, resulting in performance improvements of 4X or more on some workloads. But the H100 is not just a hardware upgrade with bigger numbers than its predecessors; it enables entirely new use cases through its support for Confidential Computing.

The H100 is the first GPU to support Confidential Computing, also known as hardware-based trusted execution environments (TEEs) or secure enclaves. Before the H100, Confidential Computing was limited to CPU-only use cases: you could serve ML models for inference or train smaller models in a CPU-only Confidential Computing environment, but training larger models or running generative AI large language models (LLMs) was impractical due to performance constraints. Now that Azure offers generally available NVIDIA Confidential GPUs (paired with AMD SEV-SNP Confidential CPUs), enterprises can run AI/ML with full confidentiality at scale.
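
To make the trust model concrete, here is a minimal Python sketch of the pattern this enables: a client refuses to send sensitive data until the remote CPU and GPU TEEs present attestation evidence it can verify. The helper functions and evidence values below are illustrative placeholders, not a real attestation API; a production system would verify signed reports using the cloud provider's or hardware vendor's attestation services.

```python
"""Illustrative sketch only: gate sensitive data on CPU + GPU attestation.

The verification helpers below are stand-ins for real attestation
verification (e.g., checking an AMD SEV-SNP report and NVIDIA H100
attestation evidence); they are not a real API.
"""

from dataclasses import dataclass


@dataclass
class AttestationEvidence:
    cpu_report: bytes    # SEV-SNP report from the Confidential VM
    gpu_evidence: bytes  # evidence from the H100 running in CC mode


def verify_cpu_report(report: bytes) -> bool:
    """Placeholder: a real check validates the report's signature chain
    and compares the VM launch measurement against an expected value."""
    return bool(report)


def verify_gpu_evidence(evidence: bytes) -> bool:
    """Placeholder: a real check verifies the evidence against NVIDIA's
    root of trust and confirms confidential-compute mode is enabled."""
    return bool(evidence)


def send_prompt_if_trusted(evidence: AttestationEvidence, prompt: str) -> None:
    """Refuse to send sensitive data unless both TEEs attest successfully."""
    if not (verify_cpu_report(evidence.cpu_report)
            and verify_gpu_evidence(evidence.gpu_evidence)):
        raise RuntimeError("Attestation failed: not sending sensitive data")
    # Only now is it safe to release the prompt to the inference endpoint,
    # ideally over a channel cryptographically bound to the attested TEE.
    print(f"Sending {len(prompt)} characters to the attested LLM endpoint")


if __name__ == "__main__":
    evidence = AttestationEvidence(cpu_report=b"stub", gpu_evidence=b"stub")
    send_prompt_if_trusted(evidence, "Summarize this confidential contract...")
```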

Use Cases Unlocked by H100 GPUs in Azure

The general availability of Azure Confidential VMs with NVIDIA H100 GPUs unlocks a variety of new use cases across industries, helping organizations tackle some of their most complex challenges. Here are a few examples of where the H100 makes a transformative impact:

  1. Large Language Models (LLMs) and Generative AI
    State-of-the-art LLMs are revolutionizing industries today - from content creation and customer support to financial analysis, biotech analysis, and more. While applications like ChatGPT, Claude, and GitHub Copilot are wildly popular among consumers, their enterprise adoption is often blocked by security and privacy concerns about sending critical business data to a vendor. Open-weight models like Meta’s Llama and Mistral are catching up in performance, but fine-tuning and retrieval-augmented generation (RAG) are also needed to tailor results to a given use case.
    With the Confidential Computing capabilities of H100 GPUs, it is now possible to run LLMs in a fully secure environment with end-to-end data protection at the hardware level. Enterprises no longer need to choose between cutting-edge performance and data security. They can protect the entire generative AI system and reap the benefits of secure analysis of internal data, better customer experiences, and reduced costs. For external-facing use cases, businesses can earn customer trust by ensuring their privacy - just ask Apple about Private Cloud Compute.
  2. Licensing Proprietary AI Models and Training Data
    At Anjuna, we help software vendors license proprietary AI models without losing control of their intellectual property. Now, with H100s, you can also license private training data for AI and ML models. Private data is released only to an attested Confidential Computing environment, for the sole purpose of model training, which ensures that data buyers can’t exfiltrate the data and use it for other purposes (a minimal sketch of this attestation-gated key release appears after this list). It works for both generative AI model fine-tuning and traditional AI model training, creating new monetization opportunities for enterprises.
  3. Compliance and Regulation for AI and Machine Learning
    Regulators are increasingly recognizing the benefits of Confidential Computing and other privacy-enhancing technologies (PETs). For example, a 2023 report from the UK Information Commissioner’s Office (the national data protection authority for the United Kingdom) says this about Confidential Computing:
      [TEEs ensure] that the information is protected from disclosure and provides a level of assurance of data integrity, data confidentiality, and code integrity. Therefore, this can help you to comply with both the security principle and the requirements of data protection by design, depending on your context.

      In addition, TEEs can assist with your data governance. For example, they can provide evidence of the steps you take to mitigate risks and enable you to demonstrate that these were appropriate. This can help you to comply with the accountability principle.

      TEEs also have wider benefits. For example, they can provide strong manufacturing and supply chain security. This is because TEE implementations embed devices with unique identities via roots of trust (ie a source that can always be trusted within a cryptographic system). These enable key stakeholders in the supply chain to identify whether the device they are interacting with is authentic.
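
As a companion to use case 2 above, here is a hedged Python sketch of attestation-gated key release: the data owner's policy service hands out the decryption key for licensed training data only when the requesting environment presents an approved measurement and purpose. Every name, measurement, and policy field here is hypothetical; a real deployment verifies signed attestation reports and wraps the key to the attested enclave rather than comparing plain strings.

```python
"""Illustrative sketch of attestation-gated key release for licensed
training data. All measurements, purposes, and policy fields are
hypothetical placeholders, not a real product API."""

import secrets

# Policy maintained by the data owner: which enclave measurements may
# receive the key, and for what licensed purpose.
ALLOWED_MEASUREMENTS = {"sha384:abc123...": "model-training-only"}

# Data-encryption key protecting the licensed training set at rest.
DATA_KEY = secrets.token_bytes(32)


def release_key(measurement: str, requested_purpose: str) -> bytes:
    """Return the data key only to an approved, attested environment."""
    allowed_purpose = ALLOWED_MEASUREMENTS.get(measurement)
    if allowed_purpose is None:
        raise PermissionError("Unknown enclave measurement: key not released")
    if requested_purpose != allowed_purpose:
        raise PermissionError("Purpose not permitted by the data license")
    # In practice the key would be wrapped for the enclave's public key so
    # only the attested training environment can unwrap and use it.
    return DATA_KEY


if __name__ == "__main__":
    key = release_key("sha384:abc123...", "model-training-only")
    print(f"Released {len(key)}-byte data key to the attested trainer")
```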

Anjuna's Support for NVIDIA H100 GPUs

Anjuna is a proud partner of Microsoft Azure, NVIDIA, and AMD; we work together to ensure full data security and privacy for our customers. Anjuna’s two offerings build on top of the infrastructure provided by Microsoft Azure, NVIDIA, and AMD to create a seamless experience:

Anjuna Seaglass: Universal Confidential Computing

The Anjuna Seaglass platform makes it easy to securely run any application with Confidential Computing. It adds features beyond the base infrastructure, like built-in remote attestation with secret injection, Pod-level Kubernetes support, and confidential disk protection. Whether you're running a machine learning model, executing a financial algorithm, or processing sensitive customer data, Seaglass enables you to do so in a fully secure environment without extensive rearchitecting - including support for NVIDIA H100 Confidential Computing.

Anjuna Seaglass AI Clean Rooms: Limitless AI-driven Collaboration

Anjuna Seaglass AI Clean Rooms, now in private preview, tackles the problem of data collaboration across organizations. Unlike traditional legal contract-based controls or limited legacy clean rooms, Seaglass AI Clean Rooms lets you securely combine data and extract valuable insights without ever putting that data at risk. The solution is tailor-made for cross-enterprise data problems, with secure uploads, policy controls, a web UI, and API integrations. LLMs and other high-powered AI/ML models can run with high performance, thanks to the power of Confidential GPUs.

U.S. Navy: Confidential AI Enabled by NVIDIA H100 and Anjuna Seaglass

Anjuna’s recent work with the U.S. Navy demonstrates the power of Confidential Computing with H100 GPUs. The Navy was interested in adopting large language models (LLMs), but was concerned about potentially exposing highly sensitive data. 

Using Anjuna Seaglass, the U.S. Navy was able to deploy cutting-edge LLMs with full Confidential Computing protection in just one hour. The U.S. Navy also confirmed top performance for their use case, thanks to the acceleration of NVIDIA H100 GPUs. This collaboration demonstrates how Anjuna Seaglass can enable Confidential GPUs to meet the needs of the most demanding organizations.

Looking Ahead: Empowering Our Customers with Confidential GPUs

The future of secure and private AI is bright, and the introduction of NVIDIA H100 GPU instances on Microsoft Azure is just the beginning. At Anjuna, we are excited to lead the charge, enabling our customers to gain powerful new capabilities without sacrificing data protection or performance.

We are committed to supporting our customers as they build the next generation of AI-powered solutions. Organizations can immediately collaborate with sensitive data and code using Anjuna Seaglass AI Clean Rooms, or build and adapt their own custom systems using Anjuna Seaglass. Together, we will scale and secure the software systems of the future. 

From training LLMs to enabling secure data collaboration, the opportunities ahead are promising. We invite you to explore how Anjuna can help your organization harness the power of Confidential AI today. To discuss Confidential GPUs, your organization’s needs, and the private preview of Anjuna Seaglass AI Clean Rooms, contact Bobbie Chen at bobbie.chen@anjuna.io.
