Kyoto University lost terabytes of data in a machine-as-an-insider scenario triggered by human error. Here are questions CISOs should ask to avoid a similar situation.

Toshio Okabe, director of the Academic Center for Computing and Media Studies within the Institute for Information Management and Communication at Japan's Kyoto University, issued an apology on December 28 to users of the university's supercomputing systems for the loss of approximately 77 terabytes of user data, comprising roughly 34 million files belonging to 14 research groups.

The apology followed an advisory users had received on December 16, which explained that from December 14 to 16 a defect in the backup program for the supercomputer system supporting the /LARGE0 directory caused user data to be deleted unintentionally.

The supercomputer was identified as operated by Nippon Hewlett-Packard GK. Hewlett Packard Enterprise (HPE), in its own apology, confirmed the data loss, accepted full responsibility, and noted that 49 terabytes of data remained backed up on the /LARGE1 directory. HPE went on to offer compensation to those who lost files and provided a pathway to initiate that dialog.

How the data loss occurred

As HPE explained, the backup script had been modified to "improve visibility and readability," and the revised script was deployed by overwriting the prior script while it was still running. As a result, the running script was "reloaded from the middle," and the resulting mangled code deleted the /LARGE0 directory itself rather than only the files designated for deletion.

To mitigate against a recurrence, HPE said it would:

- Fully verify code before applying it to the supercomputer system.
- Examine the impact and suggest improvements to avoid a recurrence.
- Re-educate its engineers to avoid recurrence of human error.

Kyoto University put the backup process on full stop until the end of January.
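The failure mode HPE describes — a live script overwritten in place and then "reloaded from the middle" — is a known shell hazard: the interpreter reads a script file incrementally as it runs, so an in-place edit can leave a variable unset at the moment a destructive command executes. A minimal defensive sketch (the function name, variable, and paths are hypothetical illustrations, not HPE's actual code):

```shell
#!/bin/sh
# Hypothetical illustration: if LOG_DIR were unset (for example, because a
# half-reloaded script never executed its assignment), an unguarded
#   rm -rf "$LOG_DIR"/old/*
# would expand to "rm -rf /old/*". The ${VAR:?} guard below makes the
# shell abort with an error instead of expanding to an empty string.
purge_old_logs() {
  rm -rf -- "${LOG_DIR:?LOG_DIR is not set}"/old/*
}
```

Checking the script with `sh -n` before deployment, and replacing the file atomically (write to a temporary file, then `mv` it over the old one) rather than overwriting it in place, addresses the reload-from-the-middle hazard directly.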
Missing backup processes and procedures

The university's apology explained that restoration of the deleted files was not possible because there was no multigenerational backup; the basic tenets of "son-father-grandfather" backup methodology simply were not in place. Going forward, incremental backups will be the norm, and a full backup mirroring the original corpus will also be created.

In his apology to all users, Okabe said, "The possibility of file loss due to equipment failure or disaster [exists] so … please backup your important files to another system."

The published notice was updated on January 4, 2022, reducing the reported breadth of the data loss. As it turned out, some of the lost files did not require restoration, bringing the final total to 8 terabytes of important data comprising 3.5 million files. The number of affected users, both within the university and beyond, who used the university's supercomputers landed at 68.

The takeaway for CISOs

Kyoto University outsourced the operation of the supercomputer center to a third party, in this instance HPE. Okabe pointedly placed the problem at the feet of HPE, while also noting there "was a problem with the operation management system at the university." Even a single-generation cold-storage backup drive with 100 terabytes of capacity (an investment of less than $10,000) would have been sufficient to hold the prior day's or week's files and obviate the catastrophic loss of user data. Such was not the case.

What happened in Kyoto was a classic case of the machine being the insider threat. In this case, nothing was stolen; the information was destroyed. There is no indication that this incident was anything more than human error. Multiple groups had their research affected, a setback no doubt.
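The "son-father-grandfather" rotation the university lacked boils down to a simple tier decision: daily incrementals (sons), weekly fulls (fathers), and monthly fulls (grandfathers), each retained on its own schedule. A minimal sketch, with the tier names and calendar rules as illustrative assumptions rather than any particular product's policy:

```shell
#!/bin/sh
# Hypothetical grandfather-father-son tier selection for a given date.
# dom: day of month (1-31); dow: day of week (1=Mon .. 7=Sun).
backup_tier() {
  dom=$1
  dow=$2
  if [ "$dom" -eq 1 ]; then
    echo grandfather   # monthly full, retained the longest
  elif [ "$dow" -eq 7 ]; then
    echo father        # weekly full
  else
    echo son           # daily incremental
  fi
}

backup_tier 1 4    # first of the month -> grandfather
backup_tier 16 7   # a Sunday mid-month -> father
backup_tier 17 1   # any other day -> son
```

The exact calendar matters less than the principle: older generations are written once and never revisited by the day-to-day scripts, so even this crude scheme would have left at least one full copy untouched by the faulty backup job.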
Though human error was at play, the result would have been identical had the code been purposefully adjusted to fail by an unscrupulous individual with the goal of destroying the work of a targeted entity within the affected groups.

Questions CISOs should be asking of their teams include:

- How many third-party vendors does our infrastructure rely on to keep our systems and data up, secure, and accessible?
- What visibility do we have into those third parties' processes and procedures?
- How are script modifications crafted, introduced, and verified?
- What visibility do we have into the third-party employees and contractors responsible for supporting our instance?