Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
In September, a ransomware attack on MGM Resorts properties impaired gaming systems and reportedly cost the company more than $100 million. And as IT Brew previously reported, an NCC Group report on the summer's surge in ransomware found a 154% increase in July alone.
In short, ransomware threats are inflicting more damage and wreaking havoc faster than ever.
Case in point: Cybersecurity threat analysis firm Secureworks' 2023 State of the Threat report found that ransomware attacks have accelerated, with the median time from initial access to payload deployment falling to less than 24 hours in 2023.
Cybercrime has spiked this year, and ransomware topped Secureworks' threat analysis overview, according to the report.
In a little more than 10% of surveyed attacks, the dwell time was under five hours. Two-thirds of attacks were deployed within a day, and 80% within a week. In the remaining 20%, attackers lurked for more than a week, and three-quarters of those waited for over a month.
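Taken together, the figures above imply how large a share of all surveyed attacks involved the longest waits. A quick back-of-the-envelope check, using only the percentages reported by Secureworks:

```python
# Implied breakdown of attacker dwell time (share of surveyed attacks),
# based on the figures in the Secureworks report cited above.
within_a_day = 2 / 3               # ransomware deployed in under 24 hours
within_a_week = 0.80               # deployed within seven days
over_a_week = 1 - within_a_week    # attackers lurked longer than a week
over_a_month = over_a_week * 0.75  # three-quarters of those waited a month or more

print(f"over a week:  {over_a_week:.0%}")   # 20%
print(f"over a month: {over_a_month:.0%}")  # 15%
```

So roughly 15% of all surveyed attacks involved attackers sitting inside a network for over a month before deploying ransomware.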
Tracking ransomware is difficult, Secureworks cautioned. Details of attacks may never be made public once companies pay the ransom. And some attacks may be successful but inconsequential, and go unreported for that reason.
“It may not be outlandish to assume that the vast majority of ‘successful’ ransomware operations occur without the victim’s name ever reaching the leak site—that is where the incentive to pay lies, with the victim motivated to prevent public disclosure,” according to the report.
It also warned that while AI has not yet lived up to sensational fears of being used by adversaries for super hacks—thus far the technology has been primarily used for phishing—the danger is real.
“Threat actors and researchers are experimenting with the creation of malware which leverages ChatGPT functionality for defense evasion and code creation,” the report said. “However, these types of AI models base their responses to user inputs on statistical analysis of previously produced text and do not currently demonstrate the creativity and ingenuity of human coders when finding novel ways to circumvent security controls and discover new vulnerabilities.”