AI Safety

Monthly Roundup #41: April 2025

Signal messages exposed, Lyft fraud unchecked, and open-plan offices slammed.

Deep Dive

Zvi's monthly roundup for April 2025 covers a range of pressing tech and policy issues. A key privacy concern: Apple stores Signal messages in an iOS notification database, from which authorities can extract them. Signal has asked Apple to fix this; in the meantime, users can mitigate the exposure by setting notifications to show neither sender name nor message content. Meanwhile, Lyft's fraud detection is criticized as alarmingly poor: one user was charged for a 3-minute ride from SFO to SF that never took place, yet Lyft sided with the driver. The suggested fix is to weigh customer reputation when adjudicating claims, treating reports from high-frequency users as true by default.
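The reputation-based heuristic could look something like the sketch below. This is purely illustrative: the thresholds, field names, and `should_auto_approve` function are invented here, not anything Lyft or the roundup specifies.

```python
from dataclasses import dataclass

@dataclass
class Rider:
    completed_rides: int    # lifetime ride count
    disputed_claims: int    # past claims the company rejected

def should_auto_approve(rider: Rider, claim_amount: float,
                        min_rides: int = 100,
                        max_dispute_rate: float = 0.02,
                        max_amount: float = 50.0) -> bool:
    """Default-true for established riders with a clean claims history;
    everything else goes to manual review. All thresholds are hypothetical."""
    if rider.completed_rides < min_rides:
        return False
    dispute_rate = rider.disputed_claims / rider.completed_rides
    return dispute_rate <= max_dispute_rate and claim_amount <= max_amount
```

The design choice is the one the roundup implies: a customer with hundreds of rides and almost no rejected claims has more to lose from being banned than to gain from a small fraudulent refund, so their reports can be trusted by default while new or high-dispute accounts still get reviewed.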

Other highlights include the gambling industry's problematic marketing to self-excluded individuals, and a broader discussion of open-plan offices. Amanda Askell notes that tech companies pay millions for employees but then hinder their productivity with open plans, suggesting that offering private offices could work as a poaching strategy. OpenAI's OpSec is also questioned, with the observation that even the Manhattan Project had spies, so secrecy is hard to maintain. The roundup underscores the tension between public coordination and private security in AI development.

Key Points
  • Apple stores Signal messages in iOS notification database, accessible to authorities; Signal urges fix.
  • Lyft charged a user for a 3-minute SFO-to-SF ride that never occurred, highlighting poor fraud detection.
  • Open-plan offices in tech are criticized for hindering productivity; private offices suggested as a hiring lure.

Why It Matters

Highlights critical privacy, fraud, and workplace issues affecting tech professionals daily.