One of summer’s top information security events, Black Hat 2024, concluded on August 8. The conference comes at a time when many democracies are heading into elections amid heightened geopolitical turmoil and escalating cyber threats. It also comes at a time when artificial intelligence (AI), especially generative AI (GenAI), is affecting everything from politics to sports and business, while the security threat landscape continues to evolve.
As such, the conference featured several vendors, solutions, and sessions focused on artificial intelligence and the measures governments and businesses are taking to improve their security posture. And how could one ignore the security implications of the recent Microsoft/CrowdStrike outage? At the conference, we witnessed several conversations about Microsoft updates and patches.
Just like last year’s event, the first four days of the infosec conference were filled with training sessions on ransomware response, space system security, identifying bugs and behavioral threats, Active Directory security, cloud incident response, and much more. Thousands of infosec and cybersecurity professionals, from beginners to pros, participated in these training sessions. Over the last two days, two keynote speakers and several security experts took the main stage.
All that said, here are the key takeaways Spiceworks News & Insights identified.
Moxie Marlinspike, founder of Signal, and Jeff Moss, founder of Black Hat, delved into critical topics shaping the future of privacy
Moss and Marlinspike sat down for a fireside chat on the future of privacy. Drawing on their real-world experiences, they examined the complex tradeoffs between privacy and security and shared examples of navigating them.
Moss and Marlinspike further discussed why safeguarding personal information should be a top priority for businesses and developers, and the responsibilities cyber leaders carry in that mission. Additionally, their conversation explored the critical role of privacy in enabling social change.
Danny Jenkins, CEO and co-founder of ThreatLocker, spoke on understanding and reducing supply chain and software vulnerability risks
As technology advances and more tools and solutions enter the market, the software ecosystem has grown more complex. In such an ecosystem, individual application risks compound. When it comes to reducing supply chain risk, identifying unintended vulnerabilities or backdoors that can be exploited in an organization’s environment is as critical as staying current with the latest hacking intel.
Jenkins spoke in detail about how organizations can identify and reduce the risk to their software environment and prevent disruptions to their organizations.
See more: Emerging Threats and Countermeasures: Black Hat 2023 in Review
With the AI industry’s value projected to grow over 13x in the next six years and hundreds of AI-powered products and solutions coming out every other day, it may not be surprising that AI took center stage at the conference. Many vendor booths prominently advertised terms like LLM, AI, and GenAI. Many sessions also focused on the role of AI in security, especially the risks and rewards.
For example, NVIDIA covered the top threats to large language models (LLMs). According to NVIDIA AI Red Team’s findings, one of the most challenging attacks is indirect prompt injections, in which an LLM reads and responds to an instruction from a third-party source. The second major pain point involves plugins, which may not be built securely. Attackers can potentially exploit them to get downstream access to the model itself.
To address and fortify against these problems, Richard Harang, principal AI and ML security architect at NVIDIA, recommended good old-fashioned application security. This includes restricting users’ access permissions. For plugins, organizations should harden them to the point that they would be comfortable exposing them to the internet.
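Harang’s advice amounts to treating model-requested plugin calls like any other untrusted input. A minimal sketch of that idea, with all plugin names, roles, and the helper function invented for illustration (not drawn from any specific framework), might look like this:

```python
# Hypothetical sketch of "good old-fashioned application security" for
# LLM plugins: deny-by-default allow-listing plus per-user permission
# checks before any model-requested tool call is executed.

ALLOWED_PLUGINS = {
    # plugin name -> minimum role required to invoke it
    "get_weather": "user",
    "delete_record": "admin",
}

ROLE_RANK = {"user": 0, "admin": 1}

def authorize_plugin_call(plugin_name: str, user_role: str) -> bool:
    """Permit a plugin call only if the plugin is known and the role suffices."""
    required = ALLOWED_PLUGINS.get(plugin_name)
    if required is None:
        return False  # unknown plugin: deny by default
    return ROLE_RANK.get(user_role, -1) >= ROLE_RANK[required]
```

The deny-by-default posture matters most here: a prompt-injected instruction asking for an unlisted plugin simply fails, regardless of how the model was manipulated.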
While GenAI and LLMs have proven controversial, experts believe the technology has practical security uses, such as making technical data more readable for humans and analyzing large amounts of threat intelligence. That said, Chuck Herrin, field CTO for API security at F5, told TechTarget that distinguishing practical use cases from gimmicky ones would be an increasingly important discussion moving forward. Further, David Kennedy, founder and CEO of TrustedSec, said AI has not brought much innovation to the security industry; many companies claim to use AI but don’t. He also said many companies integrate the technology without considering what their AI product actually does.
Security researchers also warned that attacks on AI systems are growing and may eventually become far more destructive. For example, cybersecurity company HiddenLayer recently released its AI Threat Landscape report, which found that threat actors know businesses are increasingly relying on artificial intelligence and are working to exploit it. Chloé Messdaghi, head of threat intelligence at HiddenLayer, said attackers have already developed various methods to use AI for nefarious purposes, including data poisoning, model theft, and model evasion. Some threat actors may also use code injection, prompt injection attacks, or supply chain attacks.
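Of those methods, data poisoning is the easiest to illustrate. The toy sketch below, with entirely invented data and a deliberately simple 1-D nearest-centroid classifier, shows how a handful of mislabeled training points can shift what a model learns:

```python
# Toy illustration (invented data) of label-flipping data poisoning:
# a 1-D nearest-centroid classifier trained on clean vs. tampered labels.

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """samples: list of (value, label) pairs; returns one centroid per label."""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: centroid(vals) for label, vals in by_label.items()}

def predict(model, value):
    return min(model, key=lambda label: abs(model[label] - value))

clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (9.0, "malicious")]
# An attacker slips high-valued points into the training set, mislabeled as benign:
poisoned = clean + [(7.5, "benign"), (8.5, "benign"), (9.5, "benign")]
```

Trained on the clean set, a sample at 7.0 sits closest to the malicious centroid; after poisoning, the benign centroid drifts upward and the same sample is classified benign, which is exactly the attacker’s goal.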
Messdaghi predicts a sharp increase in hostile attacks directed at AI as more businesses adopt AI models. In the meantime, businesses must adapt to the changing threat landscape if they are to safeguard end users and customers.
With the Microsoft/CrowdStrike global outage and Microsoft’s Azure outage making headlines within days of each other, experts haven’t stopped discussing the security implications. As expected, Microsoft outages, patches, and updates became discussion topics at the infosec conference.
SafeBreach security researcher Alon Leviev showed that threat actors could exploit zero-days in downgrade attacks to unpatch fully updated Windows 10, Windows 11, and Windows Server systems and reintroduce older vulnerabilities. He discovered that the Windows update process can be compromised to downgrade critical OS components, including DLLs and the NT kernel. Even with these components rolled back, Windows Update would report the OS as fully updated, and scanning and recovery tools would not detect the issue. Endpoint detection and response (EDR) solutions would also fail to block the downgrade attack.
By exploiting the zero-day vulnerabilities, Leviev could also downgrade Credential Guard’s Secure Kernel and Isolated User Mode process, as well as Hyper-V’s hypervisor. With these, he could make hundreds of previously patched vulnerabilities fair game once more.
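The broader lesson of Leviev’s research is that defenders cannot trust a single aggregate "fully updated" status. A minimal sketch of the alternative, with hypothetical file names and version numbers, is to cross-check each critical component’s on-disk version against a known-good baseline:

```python
# Hypothetical sketch: rather than trusting one "fully updated" flag,
# compare each critical component's on-disk version against a known-good
# baseline and flag anything older as a possible downgrade.
# File names and version numbers below are illustrative only.

def parse_version(v: str) -> tuple:
    """Convert a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def find_downgrades(installed: dict, baseline: dict) -> list:
    """Return component names whose installed version is below the baseline."""
    return [name for name, version in sorted(installed.items())
            if name in baseline
            and parse_version(version) < parse_version(baseline[name])]
```

Comparing version tuples element by element avoids the classic string-comparison bug where "10.0.9" sorts after "10.0.22621".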
Microsoft, too, issued advisories on two unpatched zero-days—CVE-2024-38202 and CVE-2024-21302—in coordination with the Black Hat talk, giving mitigation advice until a fix was released.
Many former employees and outside researchers have complained that the company patches only the vulnerabilities that friendly researchers point out instead of redesigning programs to eliminate entire classes of attacks. The company is also under fire for other security failings that have allowed spies to hijack the email accounts of top US officials. In response, Microsoft has pledged to make security performance a part of salary reviews this year.
See more: Top Tech Conferences in August 2024
While there were several product and feature launches during the conference, here are a few worth mentioning.