Kyoto University lost terabytes of data in a machine-as-an-insider scenario triggered by human error. Here are questions CISOs should ask to avoid a similar situation.

Toshio Okabe, director of the Academic Center for Computing and Media Studies within the Institute for Information Management and Communication at Japan's Kyoto University, issued an apology on December 28 to users of the supercomputing systems for the loss of approximately 77 terabytes of user data, comprising roughly 34 million files from 14 research groups.

The apology followed an advisory users had received on December 16, which explained that from December 14 to 16 a defect in the backup program for the storage supporting the /LARGE0 directory caused data to be deleted unintentionally.

The supercomputer's vendor was identified as Nippon Hewlett-Packard GK. Hewlett Packard Enterprise (HPE), in its own apology, confirmed the data loss, accepted 100% responsibility, and noted that 49 terabytes of data remained backed up on the /LARGE1 directory. HPE went on to offer compensation to those who lost files and provided a pathway to initiate that dialog.

How the data loss occurred

As HPE explained, the backup script had been adjusted to "improve visibility and readability," but the modified script was applied by overwriting the prior script. As a result, the running code was "reloaded from the middle," which caused the deletion of the /LARGE0 directory instead of the files actually designated for deletion. To mitigate against a recurrence, HPE said it would:

- Fully verify scripts before applying them to the supercomputer system.
- Examine the impact and suggest improvements to avoid a recurrence.
- Re-educate its engineers to avoid a recurrence of human error.

Kyoto University put the backup process on full stop until the end of January.
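HPE has not published the script itself, but the "reloaded from the middle" failure is a known shell hazard: the interpreter reads a script incrementally as it executes, so overwriting the file in place can make a running instance resume at the same byte offset inside the new contents and execute unintended commands. A minimal sketch of the safer deployment pattern, using hypothetical file names and contents:

```shell
set -eu
# Hypothetical paths and contents for illustration only;
# HPE's actual backup script was not published.
workdir=$(mktemp -d)

# An "old" version already deployed, and a "new" version to roll out.
printf '#!/bin/sh\necho "backup v1"\n' > "$workdir/backup.sh"
printf '#!/bin/sh\necho "backup v2"\n' > "$workdir/backup.sh.new"

# Unsafe: `cp backup.sh.new backup.sh` rewrites the same inode a running
# shell may still be reading, so execution can resume "from the middle"
# of the new file.

# Safer: stage the new version beside the old one, then rename it into
# place. The rename is atomic, and any already-running instance keeps its
# open file descriptor on the old inode and finishes with the old code.
mv "$workdir/backup.sh.new" "$workdir/backup.sh"

sh "$workdir/backup.sh"   # prints "backup v2"
```

The same staging-plus-rename discipline applies to any script a scheduler or long-running job may be executing at deploy time.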
Missing backup processes and procedures

The university's apology explained that restoration of deleted files was not possible because there was no multigenerational backup. The basic tenets of "son-father-grandfather" backup rotation simply were not in place. Going forward, however, incremental backups will be the norm, and a full backup mirroring the original corpus will also be created.

In his apology to all users, Okabe said, "The possibility of file loss due to equipment failure or disaster [exists] so … please backup your important files to another system."

The published notice was updated on January 4, 2022, reducing the reported scope of the data loss. As it turned out, some of the lost files did not require restoration, bringing the final total to 8 terabytes of important data comprising 3.5 million files. The number of affected users, both within the university and beyond, stands at 68.

The takeaway for CISOs

Kyoto University outsourced the operation of the supercomputer center to a third party, in this instance HPE. Okabe pointedly placed the problem at the feet of HPE, while also noting there "was a problem with the operation management system at the university." Even a single-generation cold-storage backup drive of 100-terabyte capacity (an investment of less than $10,000) would have been sufficient to hold the prior day's or week's files and obviate the catastrophic loss of user data. Such was not the case.

What happened in Kyoto was a classic case of the machine being the insider threat. In this case, nothing was stolen; rather, the information was destroyed. There is no indication this incident was anything more than human error. Multiple groups had their research affected, a setback no doubt.
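The "son-father-grandfather" rotation the university lacked keeps generations of backups at different cadences, so no single bad run can destroy every copy. A minimal sketch, with hypothetical source and target paths and illustrative retention choices:

```shell
set -eu
# Hypothetical source and backup target; a real deployment would use
# dedicated, preferably offline, storage.
src=$(mktemp -d); dst=$(mktemp -d)
echo "research results" > "$src/experiment.txt"

day=$(date +%u)   # day of week, 1-7 (Monday = 1)
dom=$(date +%d)   # day of month, 01-31

# Son: a daily copy, overwritten on a weekly cycle.
cp -a "$src" "$dst/daily-$day"

# Father: a weekly full copy, kept for roughly a month.
if [ "$day" = "7" ]; then
  cp -a "$src" "$dst/weekly-$(date +%V)"
fi

# Grandfather: a monthly full copy, retained long-term.
if [ "$dom" = "01" ]; then
  cp -a "$src" "$dst/monthly-$(date +%Y-%m)"
fi
```

Even a single extra generation held outside the primary system would have left the /LARGE0 files recoverable after the faulty script ran.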
Though human error was at play, the result would have been identical had the code been purposely altered to fail by an unscrupulous individual with the goal of destroying the work of a targeted entity within the affected groups. Questions CISOs should be asking of their teams include:

- How many third-party vendors does our infrastructure rely on to keep our systems and data up, secure, and accessible?
- What visibility do we have into the third party's processes and procedures?
- How are script modifications crafted, introduced, and verified?
- What is the level of visibility into the third-party employees and contractors responsible for supporting our instance?