When the People’s Car Data Becomes the Attackers’ Car Data: Anatomy of a Data-in-Use Attack and How to Mitigate it

Mark Bower
VP Product, Anjuna
Published on Jan 14, 2025

Volkswagen, a leading auto maker, suffered a recent data breach incident involving 800,000 customers' detailed EV and driving data. The data appears to span driver personal information, location data accurate to 10cm, vehicle performance data, VINs, service data, and more. It is intimate and granular PII, including where a particular customer is located at given points in time and where they travel. It is subject to GDPR and other privacy mandates, and it is clearly high risk for those impacted. Accurate location data is extremely sensitive for obvious personal reasons, and especially delicate for drivers traveling to sensitive locations such as military facilities.

On the flip side, this is powerful enterprise data for customer analytics, predictive maintenance, and optimal customer service. In a world where AI can drive even more value from granular data, it's exactly this data that many organizations want to collect and analyze, with the consent of customers, to serve them better. After all, customer experience is the top business relationship driver, and EU and US auto makers can differentiate against new competitors, such as those emerging from China, to sustain pricing, retain customers, and win lucrative contracts. Of course, customer trust and experience are eroded when data is exposed. Given the sensitivity of this data, manufacturers must go beyond traditional, basic defenses to protect it.

Recent reports indicate that the attack's deeper access came from compromising active memory and processes: a running JVM (Java Virtual Machine) exposed sensitive data in the clear. Cleartext credentials were easily discovered, which allowed the researcher to exfiltrate 9.5TB of data.

Flüpke said that he was able to retrieve the heap dump from the VW internal environment because it was not password protected. A heap dump lists various objects within a Java Virtual Machine (JVM), which can reveal details about memory usage… Within that heap dump were listed, in plain text, various active AWS credentials.

From there, the researchers were able to get deep into driver and vehicle analytics. The compromise details are in the linked article above and on YouTube here. (The key exposure is revealed at the 9-minute mark; it is compelling viewing throughout for anyone in DevOps, security, or application development.)

The outcome of the compromise is quite stark: “we know everything about the car, and a lot about the customer” (approx 12:50 in the video). Map that knowledge to 800,000 records and this is a deep and invasive privacy compromise.

Manufacturers across industries have to gather data to optimize customer service and run analytics, but this comes with strong responsibilities and regulatory obligations to protect it, especially under GDPR. This case will be one to watch to see how the DPAs respond, given the recent disclosures about the scope of the exposure and the nature of the attack.

In the last few weeks, we've seen more and more compromises that follow this pattern across cloud, desktop, and server environments: BitLocker encryption keys extracted from memory on desktops, US government agencies compromised by a key/API exposure from a third-party vendor, and now the VW incident. The pattern is somewhat predictable too:

  1. Initial entry - an unprotected system, social engineering, or a phish.
  2. Access to a sensitive process somewhere in the infrastructure.
  3. Memory extraction or exposure of code, credentials, or keys, with sensitive data pulled out to exploit.
  4. Lateral movement, data decryption, or data theft by exploiting credentials with high levels of access.
  5. Exfiltration or integrity compromise.

Sensitive data extraction from working apps and code isn't as hard as you'd think. It's an easy, low-complexity attack route, especially because most active processes carry a complete treasure map of labels, data structures, and code that makes it trivial to dive in quickly and pull out the critically useful attack data (credentials and keys being obvious targets). In this attack, the researchers simply used good old Linux "strings" to extract credentials - no complex software required. Anyone poking around memory dumps of their own or third-party apps for the first time might be shocked. Data-at-rest encryption and data-in-transit encryption won't help here - once data sits in memory in the clear, it is one dump, attack, admin gcore, VM image, or crash away from theft.
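To make that concrete, here is a minimal sketch of the kind of scan involved - not the researcher's actual tooling - assuming a hypothetical heap dump file named heap.hprof. It mimics what the Linux "strings" utility does and flags anything shaped like an AWS access key ID.

```python
import re

# Illustrative only: scan a memory/heap dump for runs of printable ASCII
# (roughly what the Linux "strings" utility does) and flag anything shaped
# like an AWS access key ID. The file name and thresholds are assumptions.
DUMP_FILE = "heap.hprof"                              # hypothetical heap dump
PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{8,}")       # runs of printable bytes
AWS_KEY_ID = re.compile(rb"(AKIA|ASIA)[0-9A-Z]{16}")  # access key ID shape

with open(DUMP_FILE, "rb") as f:
    blob = f.read()

for run in PRINTABLE_RUN.finditer(blob):
    if AWS_KEY_ID.search(run.group()):
        print(run.group().decode("ascii", errors="replace"))
```

The point is not the script - it is how little it takes: no exploit code, no special tooling, just read access to a dump that should never have been reachable.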

These risks illustrate precisely why sensitive code and data should, and must, use isolation techniques that prevent root, admin, and unauthorized access as a matter of priority. By using hardware-assisted code and data isolation during operation, with hardware roots of trust to firmly anchor real zero trust, myriad attack vectors are simply closed off. Credential exposure risk is addressed with a hardware-based identity approach - not unlike a multi-factor biometric, but for applications - along with removal of the "secret zero" credential theft challenge that is present even in the traditional CI/CD pipeline, as outlined here in a previous post.
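To illustrate the principle (and only the principle - this is a toy simulation, not any vendor's SDK or API), the sketch below uses an HMAC to stand in for the hardware's signing key: a secrets service releases a short-lived credential only to code whose attested measurement matches an expected value, so no long-lived secret ever has to sit in application memory where a heap dump could find it.

```python
import hashlib
import hmac
import os
import secrets

# Toy simulation only - not any vendor's real API. An HMAC key stands in for
# the CPU's hardware root of trust, and the "secrets service" hands out a
# short-lived credential only when the attested code measurement matches.
HARDWARE_KEY = os.urandom(32)  # stands in for the platform's signing key
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-application-image").hexdigest()


def attest(measurement: str) -> bytes:
    """Simulated hardware: sign the measurement of the running workload."""
    return hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).digest()


def release_credential(measurement: str, signature: bytes) -> str:
    """Simulated secrets service: verify the attestation, then release a
    short-lived credential. Unknown or tampered code gets nothing."""
    genuine = hmac.compare_digest(signature, attest(measurement))
    if genuine and measurement == EXPECTED_MEASUREMENT:
        return secrets.token_urlsafe(16)  # short-lived, scoped credential
    raise PermissionError("attestation failed: no credential released")


# The approved workload earns a credential on demand; nothing long-lived
# ever has to be baked into the application or left sitting in its memory.
credential = release_credential(EXPECTED_MEASUREMENT, attest(EXPECTED_MEASUREMENT))
print("released:", credential)
```

In real deployments the signature is produced by the enclave hardware and verified against the platform vendor's certificate chain, but the flow is the same: prove what you are, then receive a credential that expires quickly.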

This is also why Confidential Computing exists and must be on the radar of all CISOs today. It's already recognized as a strong control by regulators like the Data Protection Authorities in the EU - for example, the ICO in the UK describes the technology with reference to AWS - and NIST has a series of documents outlining its properties for defending critical workloads.

In fact, all the major clouds have Confidential Computing capabilities today, including Azure and GCP. There really isn't an excuse not to use it, especially with enabling technologies like Anjuna Seaglass, which makes it simple, or Anjuna Northstar for a point-and-click approach to confidential AI analytics on vehicle data like that described here.

To use an automotive analogy, computing without a confidential approach is like driving a car without a seatbelt or brakes: you will eventually crash and face the consequences. Seatbelts and brakes let you control risk and mitigate serious incidents.

For VW, AWS has the technology right there with AWS Nitro Enclaves, available to every enterprise on almost every instance type. Tapping into it to run apps is made simple by Anjuna Seaglass - in minutes, without code changes or complexity.

Interested in learning more about these kinds of attack mitigations and risk-prevention controls? Contact us to schedule a demo and see how you can quickly minimize breach risk and simplify GDPR compliance for processing legally and responsibly collected data - especially for AI applications with powerful, positive customer and revenue outcomes.
