The Download: supercharged scams and studying AI healthcare
AI scams surge as LLMs enable faster, cheaper attacks; healthcare AI accuracy questioned.
AI is reshaping cybercrime at an alarming pace. Since ChatGPT's debut in 2022, cybercriminals have used generative AI to craft human-like phishing emails, produce deepfakes, and automate vulnerability scans. The sheer volume of attacks is overwhelming organizations: AI makes scams faster, cheaper, and easier to launch. The trend is set to worsen as the tools improve and adoption spreads, ushering in an era of supercharged scams that demands urgent defensive innovation.
In healthcare, AI tools are already widely used for notetaking, flagging patients, and interpreting X-rays, and studies show they can be highly accurate. Yet a critical gap remains: there is no solid evidence that these tools actually improve patient outcomes, a disconnect that raises questions about their efficacy and about how healthcare resources are allocated. Elsewhere, DeepSeek-V4 rivals top closed-source models, OpenAI's GPT-5.5 rolls out widely despite security concerns, Meta cuts 10% of jobs to fund AI, and Norway enforces social media age restrictions.
- Cybercriminals use LLMs for phishing, deepfakes, and automated scans, overwhelming defenses with faster, cheaper attacks.
- Healthcare AI tools show high accuracy in interpreting X-rays and flagging patients, but lack proof of improving patient outcomes.
- DeepSeek-V4 launches as a top open-source model; OpenAI's GPT-5.5 rolls out widely; Meta plans 8,000 layoffs to fund AI.
Why It Matters
AI's dual-use nature demands urgent cybersecurity upgrades and rigorous healthcare outcome validation for real-world impact.