Developer errors caused the leak of 150,000 to 200,000 patient health records stored in productivity apps from Microsoft and Google, which were recently found on GitHub.
According to Threatpost, Dutch researcher Jelle Ursem discovered nine separate files of highly sensitive protected health information (PHI), originating in apps such as Office 365 and Google G Suite, from nine health organizations. He had difficulty reaching the companies whose data had been leaked and eventually reported the breach to DataBreaches.net, which worked with him to publish a collaborative paper.
The errors developers made included: embedding hard-coded login credentials in code instead of making them a configuration option on the server where the code runs; using public repositories instead of private ones; failing to use two-factor or multifactor authentication for email accounts; and abandoning repositories instead of deleting them when no longer needed, Threatpost reports.
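The first of those errors, hard-coded credentials, has a well-known remedy: read secrets from the server's environment at runtime so they never enter source control. The sketch below is a minimal illustration of that pattern; the variable name `SMTP_PASSWORD` and the helper `load_credential` are hypothetical, not from the report.

```python
import os

# Anti-pattern (what the leaky repos did):
#   SMTP_PASSWORD = "hunter2"   # exposed the moment the repo is pushed
#
# Safer pattern: pull the secret from the server environment at runtime,
# and fail loudly if it has not been configured.

def load_credential(name: str) -> str:
    """Fetch a secret from the environment; raise if it is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Missing required credential {name!r}; "
            "set it in the server environment, not in source code."
        )
    return value
```

A deployment would then set the variable outside the codebase (e.g. `export SMTP_PASSWORD=...` in the service's environment) and call `load_credential("SMTP_PASSWORD")`, keeping the repository free of secrets even if it is accidentally made public.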
Matt Walmsley, EMEA Director at Vectra, a San Jose, Calif.-based provider of technology that applies AI to detect and hunt cyber attackers, notes: “Administrations failing to take basic steps to secure cloud services or apps isn’t a new story – there have been so many instances that have come to light where private data was inadvertently left exposed to the internet. While cloud computing’s instant provisioning and scale are valuable benefits, the cloud service provider’s features and default configurations are constantly in flux. Therefore, administrators must know and adapt what they’re doing and ensure appropriate access controls are in place to protect their data. As no system or person is ever perfect, the ability to monitor, detect and respond to unauthorized or malicious access to cloud services can make the difference between a contained security incident and a full-blown breach like these healthcare providers and their patients now face.”