Team Wabbi
July 24, 2025
This article originally appeared on Forbes on July 23, 2025
Expert Panel® Forbes Councils Member
Forbes Technology Council COUNCIL POST| Membership (Fee-Based)

getty
As artificial intelligence advances, so do the tactics of malicious actors. Hackers are now using AI to scale attacks, exploit vulnerabilities more quickly and create deceptive content that’s nearly undetectable with traditional defenses.
From deepfakes and synthetic identities to AI-generated malware and real-time phishing schemes, the threat landscape is evolving fast. Below, members of Forbes Technology Council share new ways hackers are weaponizing AI, along with practical strategies for defending against these risks.
1. Targeting Real-Time Payments
Fraudsters are using AI to create sophisticated fake communications and synthetic identities that target real-time payments at an unprecedented scale; 75% of financial institutions admit bad actors leverage AI more effectively than they do, exposing vulnerabilities. There’s no silver bullet, but organizations must leverage AI better across the full customer lifecycle, not just for identity verification. – Yinglian Xie, DataVisor
2. Committing Large-Scale, Humanlike Fraud
Hackers now use agentic AI to create fake accounts and commit fraud with humanlike precision at a massive scale. These AI agents mimic real user behavior, bypassing traditional defenses. To stay protected, organizations must move beyond CAPTCHAs and invest in advanced detection systems that analyze behavior, device intelligence and interaction patterns in real time. – Dan Pinto, Fingerprint
3. Analyzing Code For Vulnerabilities
Hackers are weaponizing AI to rapidly analyze large volumes of code and uncover vulnerabilities—often before they’re even detected. By shifting security left and right, organizations can operationalize security continuously throughout the software development lifecycle to prevent weak or misconfigured security controls, closing gaps from code to deployment that make them vulnerable to AI-powered adversaries. – Brittany Greenfield, Wabbi
4. Infiltrating Hiring Processes
Hackers use AI to deploy deepfakes that impersonate job candidates during virtual interviews and hiring assessments, allowing them to bypass traditional identity checks and gain insider access. To defend against this, organizations should adopt biometric authentication and certified identity verification, ensuring both incorporate liveness detection and presentation attack defenses. – Michael Engle, 1Kosmos
5. Building Custom Malware
Threat actors are increasingly using unregulated black market LLMs to create malware that can bypass traditional defenses. Once inside, they move laterally to attack. Instant detection of unusual network activity is critical to stop them. Security teams should develop a baseline of normal behavior and continuously monitor their networks to identify anomalies for rapid investigation and remediation. – Rob Greer, ExtraHop
6. Rapidly Exploiting Security Flaws
With AI, the time needed to exploit security flaws (vulnerabilities) is drastically reduced from months to days—perhaps, in the near future, even minutes. We already see machines (AI agents) automatically researching, developing and exploiting machines in the wild. What organizations can do is to adopt advanced security controls that can remediate or mitigate threats in a faster, more efficient way. – Roi Cohen, Vicarius
7. Supercharging Attacks
AI has become a performance enhancer for threat actors, helping them execute stronger malware attacks, more realistic phishing scams and sophisticated social engineering rackets. The best defense against these AI-enhanced offensives is to embrace zero trust, in which policies and controls are made to contain threats moving at machine speed by limiting lateral movement and enforcing least privilege. – Thyaga Vasudevan, Skyhigh Security
8. Using Deepfakes For Extortion And Deception
Hackers are using AI to create deepfakes to extort execs and voice signatures to trick employees. Companies need to leverage multiple policies and processes, including zero-trust principles, multifactor authentication, scalable monitoring tools and continuous employee education and awareness to guard against expanding AI threats. – Rob Green, Insight Enterprises
9. Finding Logic Flaws In Custom Apps And APIs
Hackers are using AI to find logic flaws in custom apps and APIs. AI models predict weak points by simulating inputs and analyzing code, uncovering exploits faster than manual scans. Defend with AI-powered code review tools, runtime anomaly detection and red team simulations. For consumers: Reduce app connections and use smart identity monitoring. – Saby Waraich, Clackamas Community College
10. Creating Constantly Morphing Malware
Hackers are using AI to create malware that constantly changes to avoid detection, making old antivirus tools useless. To fight back, companies need AI-powered security that watches behavior instead of just known threats. It’s about catching suspicious actions in real time and assuming nothing is safe by default—because AI-driven attacks move fast. – Haider Ali, WebFoundr
11. Launching Next-Gen Botnet Attacks
Hackers are using AI to quickly develop new botnet propagation and control mechanisms to create bigger, more versatile botnets. An example is the Aisuru botnet, which has been in the news for launching record-breaking distributed denial-of-service attacks. As these new botnets emerge, organizations that have internet-facing apps should reevaluate their DDoS defenses to ensure they are evolving along with the threat. – Carlos Morales, Vercara, a DigiCert Company
12. Lowering Attack Barriers With No-Code Tools
The onset of AI-powered no-code tools and “vibe coding” platforms has lowered the technical barrier for bad actors to launch sophisticated attacks; however, the core tactics remain rooted in social engineering and phishing. The best defense is continuous training, reinforced by phishing simulation tests. Only by developing intuitive awareness of attack vectors can we build lasting workforce resilience. – Pawel Rzeszucinski, Webpros
13. Duplicating Writing Styles
Hackers now use AI to create customized phishing emails that duplicate writing patterns and contextual elements to evade detection systems. Organizations need to purchase AI-based threat detection systems and teach staff members to identify minor warning signs, because security awareness has evolved into a human-AI collaboration. – Raju Dandigam, Navan
14. Scanning Open-Source Code For Zero-Day Vulnerabilities
Hackers now use AI to scan open-source code and binaries at scale, rapidly uncovering zero-day vulnerabilities in real time. To defend, shift security left by using AI-powered code analysis in CI/CD, continuously scanning for risks across systems, and integrating threat intelligence. As AI accelerates attacks, organizations must respond with early detection, automation and proactive patching. – Harikrishnan Muthukrishnan, Florida Blue
15. Chaining Zero-Day Exploits
Hackers now use AI to autonomously chain together zero-day exploits—mapping incomplete vulnerabilities across multiple systems and executing coordinated breaches. To defend, organizations must implement AI-led threat modeling that simulates cross-domain attack paths, enabling preemptive patching even when individual flaws seem harmless in isolation. – Jagadish Gokavarapu, Wissen Infotech
16. Spreading Malware Through Fake GitHub Repositories
As seen in a recent case, threat actors use AI to create fake GitHub repositories, misleading developers into downloading malware. Organizations can protect themselves by reviewing open-source code, deploying AI-driven analytics, educating employees on risks, and implementing multifactor authentication and regular patch management. – Arpna Aggarwal
17. Mimicking Trusted Behavior
Malicious attackers aren’t just using AI to break systems—they’re also using it to blend in. When threats mimic trusted behavior, traditional detection falls short. AI can help defenders learn what “normal” looks like and spot what seems familiar but doesn’t match expected patterns. The goal isn’t to flag the unusual but to catch the usual in the wrong place, at the wrong time, with the wrong badge. – Leah Dodson, Piqued Solutions
18. Cracking Password Patterns
By combining AI and leaked passwords, hackers can now find hidden patterns in our brains. As humans, we are terrible at creating and remembering random passwords; instead, we rely on patterns. Yet, AI can quickly discover these patterns and replay them to find passwords for other services. – Kevin Korte, Univention
19. Causing GenAI Systems To Produce Undesirable Responses
One of the new ways hackers are using AI is through “prompt injection” and “output injection” on generative AI systems. This results in undesirable responses from enterprise systems—and the outputs produced by these generative AI solutions are critical for end users. Organizations and consumers must refer to the OWASP top 10 list of AI risks and refer to the recommended strategies to mitigate them. – Sid Dixit, CopperPoint Insurance
20. Evolving Ransomware With Adaptive Encryption
AI-powered ransomware now adapts encryption methods in real time, making traditional backup strategies less effective. Organizations should implement immutable backups that are stored offline and test recovery procedures monthly to stay ahead of evolving threats. – Chongwei Chen, DataNumen, Inc.
“