"Data in memory security" is a phrase appearing with increasing frequency in new regulations and cloud security guidance worldwide. In the United States, CISA, the national cybersecurity agency, recommends techniques to protect data in memory in critical 5G infrastructure, where memory is vulnerable to attack. In the European Union, the European Banking Authority (EBA), the top-level banking regulator, recently published guidance for outsourcing, including cloud services, that specifically emphasizes "data in memory" protection to ensure resilient data processing. In the United Kingdom, the Prudential Regulation Authority's guidance for the financial services industry, which covers agents, banks, insurers, clearinghouses, and other entities, aligns with the EBA and highlights robust "data in memory" protection measures that are actively enforced. Further jurisdictions, such as Singapore through its regulator MAS, also impose requirements on businesses to safeguard sensitive data in memory. This trend is gaining momentum, raising two questions: why is it happening, and what can organizations do to protect data in memory? Confidential Computing is at the forefront of the recommended technologies.
The reason behind this regulatory shift is grounded in the risks of processing data in memory, the increasing reliance on outsourced (cloud) computing across all industries, and the need to ensure resilient operation of critical infrastructure. Traditionally, data protection has focused on securing data at rest and data in transit, preventing unauthorized access to data on memory sticks, drives, email, files, server storage, databases, and cloud storage. It goes without saying that storing or transmitting unencrypted data instantly exposes it to theft.
Working memory (RAM) has, unfortunately, lacked sufficient protection due to an inherent challenge: data cannot simultaneously be encrypted and readable by the CPU. For data to be processed, it must be decrypted, which leaves it vulnerable in cleartext. These risks have been understood for some time, but attacks exploiting them have become particularly prevalent in the past decade, especially in e-commerce credit card processing. Malware has been able to steal cardholder data the moment it enters memory, causing significant concern for Chief Information Security Officers (CISOs) in the retail and payment industry.
It is important to note that the risks associated with working memory extend beyond credit card data to all code and data residing in memory. In modern cloud applications, an attacker with sufficient privileges on the infrastructure can extract memory contents, gaining access to a wide array of sensitive information: keys, secrets, code, Personally Identifiable Information (PII), policy controls, and more. This comprehensive access opens the door to a multitude of attacks. Software-based solutions and tokenization of data in memory cannot effectively address the issue: even if data is tokenized or protected within software, the keys or token tables are themselves stored in memory and can be extracted, granting future unauthorized access to the data. Without robust hardware controls, such as those provided by secure enclaves, both software and memory remain vulnerable to manipulation and exploitation.
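To make the token-table point concrete, here is a minimal, hypothetical sketch of in-memory tokenization in Python. The names (`TOKEN_TABLE`, `tokenize`) and the toy scheme are illustrative, not any particular product: the point is that the mapping which reverses every token lives in the same process memory as the data it protects.

```python
import secrets

# Toy token vault: maps surrogate tokens back to original values.
# In a real deployment this table (or the key protecting it) typically
# sits somewhere in process memory.
TOKEN_TABLE: dict = {}

def tokenize(pan: str) -> str:
    """Replace a card number with a random surrogate token."""
    token = "tok_" + secrets.token_hex(8)
    TOKEN_TABLE[token] = pan
    return token

def detokenize(token: str) -> str:
    """Recover the original value from a token."""
    return TOKEN_TABLE[token]

token = tokenize("4111111111111111")
# The application now passes only the token around...
assert detokenize(token) == "4111111111111111"
# ...but anyone who can dump this process's memory reads TOKEN_TABLE,
# and with it every original value, in a single pass.
```

This is why the text argues software-only tokenization merely relocates the secret rather than removing it from memory.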
Adding to the complexity, numerous leading businesses are now shifting more workloads, Artificial Intelligence (AI), data lakes, and other applications to run primarily in memory to leverage its exceptional speed. While this transition can yield positive outcomes such as improved customer experiences, business agility, and personalized digital engagement to meet the demands of discerning consumers, it also introduces new risks that regulatory bodies are actively monitoring.
Here are several scenarios that highlight the risks associated with data in memory:
- CISA has provided valuable guidance on risk reduction to ensure the overall security of the nation. In a recent report, CISA documented a red team attack on a large enterprise and highlighted memory attacks as a method to retrieve keys. By exploiting credentials stored in memory, the red team accessed multiple systems without triggering any alarms. This pattern is commonly observed: stealing credentials from memory, masquerading as privileged administrators, and ultimately gaining access to sensitive data and code. “The server administrator relied on a password manager, which stored credentials in a database file. The red team pulled the decryption key from memory using KeeThief and used it to unlock the database [T1555.005].”
- The White House has issued guidance related to enhancing resilience in the nation's software supply chain. Hardware-based techniques, such as Confidential Computing and its Trusted Execution Environments (TEE), can play a vital role in addressing this issue by ensuring that only trusted code runs on trusted hardware with memory protection. This concept, known as attestation, mitigates the risk of unauthorized or tampered code infiltrating critical software supply chains, as was the case in the SolarWinds breach.
- The NSA recently published guidance on transitioning to memory-safe programming languages. While this recommendation primarily addresses vulnerabilities in languages like C and C++, where buffer overflows and pointer errors are common vectors of exploitation, hardware-based memory isolation offers a complementary defense: by confining memory-related issues within a secure hardware boundary, their impact can be effectively contained. It is clear that the NSA recognizes the risks associated with memory.
- In the energy sector and embedded edge systems, nation-state attackers leverage purpose-built malware, such as TRITON, to modify firmware and gain access to running memory. CISA has documented such incidents, noting that TRITON modifies in-memory firmware in Triconex Tricon safety controllers, enabling attackers to read, modify, and execute custom code while disabling the safety system. Memory attacks are not limited to enterprises; they occur across sectors.
- Within every enterprise, when applications crash and Linux memory core dumps are written, they often contain a wealth of sensitive data. This can include API credentials used to access critical devices such as Hardware Security Module (HSM) APIs, short-lived keys used for transaction signing or data encryption, personally identifiable information (PII), and other sensitive data passing through the system's memory. While core dumps serve an important diagnostic purpose, they present a significant risk (e.g., MITRE ATT&CK T1003, OS Credential Dumping) if they end up in the wrong hands. Alarmingly, generating a memory dump is a straightforward process: an attacker can invoke a simple Linux command like "gcore" to capture the contents of a process's memory, exposing the sensitive code and data stored there.
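To see how little effort the core-dump scenario above requires, here is a minimal sketch in Python. The dump file is simulated so the example is self-contained; against a real file produced by `gcore <pid>`, the same plain byte search recovers any cleartext secret the process held. The `find_secret` helper and the file contents are illustrative assumptions.

```python
import tempfile

def find_secret(dump_path: str, needle: bytes) -> int:
    """Return the byte offset of `needle` in the dump file, or -1 if absent."""
    with open(dump_path, "rb") as f:
        return f.read().find(needle)

# Simulate a core dump; in practice this file would come from `gcore <pid>`.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x7fELF...heap noise...api_key=sk_live_0abc...more noise...")
    dump = f.name

offset = find_secret(dump, b"api_key=")
assert offset != -1  # the credential sits in the dump in cleartext
```

No decryption, no privilege escalation beyond reading the file: if the secret was ever in memory unencrypted, it is in the dump unencrypted.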
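The attestation concept raised in the software-supply-chain scenario can also be sketched in a few lines. This is a conceptual illustration only, assuming a hypothetical verifier that compares a SHA-256 measurement of a workload against an allowlist; real TEEs use hardware-signed quotes verified against vendor certificate chains, not a bare hash lookup.

```python
import hashlib

# Hypothetical allowlist of known-good workload measurements.
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"approved-application-build-1.4.2").hexdigest(),
}

def measure(workload: bytes) -> str:
    """Compute a measurement (cryptographic hash) of the workload."""
    return hashlib.sha256(workload).hexdigest()

def attest(workload: bytes) -> bool:
    """Admit the workload only if its measurement is on the allowlist."""
    return measure(workload) in TRUSTED_MEASUREMENTS

assert attest(b"approved-application-build-1.4.2")       # trusted build admitted
assert not attest(b"approved-application-build-1.4.2X")  # one byte off: refused
```

Because any tampering changes the measurement, unauthorized or modified code is refused before it ever touches protected memory or secrets.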
So, how can enterprises stay ahead of the growing regulatory pressure and effectively mitigate these risks? The UK Government's National Cyber Security Centre (NCSC) recently caught my attention with its focus on data in memory protection and its comprehensive guidance.
I highly recommend reviewing the NCSC's Cloud Security Guidance. It thoroughly discusses the risks and mitigations, and specifically highlights the benefits of Confidential Computing for ensuring integrity, confidentiality, and data in memory protection.
The NCSC, similar to CISA in the US, offers invaluable advice on dealing with modern cyber threats, and its guidance on protecting data in the cloud, or on outsourced service providers' platforms, directly addresses the vulnerabilities we've been discussing.
The NCSC also provides an excellent summary of the benefits of Confidential Computing, and I highly encourage readers to explore the linked guidance in full.
Great advice and benefits indeed! NCSC, however, also cautions that implementing Confidential Computing to reap these benefits can be complex and challenging. According to the NCSC, it may be necessary to develop a bespoke application to fully leverage the advantages of Confidential Computing. They recommend considering the extra complexity and ensuring it aligns with your specific threat model and application requirements.
Fortunately, a solution like Anjuna Confidential Computing Platform addresses these challenges head-on, making Confidential Computing accessible and scalable. With just a simple command line, you can instantly deploy your application in a confidential computing environment in the cloud, enjoying features such as memory protection, integrity, attestation, trust, and high performance. The best part is that with Anjuna, you don't need to modify your application to benefit from Confidential Computing technology.
If you're a CIO or CISO and your teams are concerned about the noted attacks or the growing regulatory pressure to safeguard data in memory, fret no more. Anjuna effectively addresses these challenges with a remarkably simple process. By embracing confidential computing, you can significantly reduce risk while confidently harnessing the power of the cloud to process even the most highly regulated, high-liability, and highly sensitive code and data.
To learn more about Anjuna and its capabilities, consider the following options:
- Register for our live demo to witness the technology in action.
- Read our white paper, which delves into how Anjuna simplifies compliance and addresses regulatory requirements.
Try free for 30 days on AWS, Azure or Google Cloud, and experience the power of intrinsic cloud security.