Open Source

Got OpenAI's privacy filter model running on-device via ExecuTorch

Privacy filtering runs locally on mobile with OpenAI's model and ExecuTorch.

Deep Dive

A developer has successfully deployed OpenAI's privacy filter model on mobile devices via ExecuTorch, Meta's runtime for on-device machine learning. The setup uses a react-native-executorch bridge and runs with a memory footprint of approximately 600 MB of RAM. The model handles arbitrary text inputs—including emails, documents, chat logs, pasted notes, and transcripts—and flags sensitive content with surprising accuracy, catching PII and other sensitive material that simple pattern matching would miss.
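The source doesn't include any code, but the flow it describes—feed arbitrary text to a local model, get back flags, and redact before anything leaves the device—can be sketched as below. The `PrivacyFlag` shape, the `runLocalModel` and `redactSensitive` names, and the regex stub standing in for the actual ExecuTorch model call are all hypothetical illustrations, not the developer's implementation.

```typescript
// Hypothetical shape of a flag returned by the filter; the real
// model's output format is not described in the source.
interface PrivacyFlag {
  span: string;     // the flagged text
  category: string; // e.g. "email", "phone", "name"
}

// Stand-in for the on-device inference call. In the real setup this
// would go through the react-native-executorch bridge to the
// ExecuTorch runtime; a trivial regex pass keeps the sketch
// self-contained (and illustrates exactly what the model improves on).
function runLocalModel(text: string): PrivacyFlag[] {
  const flags: PrivacyFlag[] = [];
  const emailRe = /[\w.+-]+@[\w-]+\.[\w.]+/g;
  for (const m of text.matchAll(emailRe)) {
    flags.push({ span: m[0], category: "email" });
  }
  return flags;
}

// The whole pipeline runs locally: only the redacted text (or the
// flags themselves) is ever handed onward.
function redactSensitive(text: string): { redacted: string; flags: PrivacyFlag[] } {
  const flags = runLocalModel(text);
  let redacted = text;
  for (const f of flags) {
    redacted = redacted.split(f.span).join(`[${f.category}]`);
  }
  return { redacted, flags };
}
```

The point of the structure is that `runLocalModel` is the only part touching a model at all, so swapping the stub for a real ExecuTorch-backed call changes nothing about the data flow: raw text still never crosses the network boundary.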

This approach addresses a fundamental paradox in privacy filtering: sending text to a cloud API to check if it's sensitive undermines the very privacy it aims to protect. By running the model locally, the developer ensures that drafts, internal documents, exported chat history, and OCR'd scans never leave the device. This aligns the privacy guarantee with the actual use case, making it particularly valuable for users who are reluctant to send sensitive data to external servers. The implementation demonstrates that on-device AI can handle complex tasks like privacy filtering without sacrificing accuracy or performance.

Key Points
  • OpenAI's privacy filter model runs on mobile via ExecuTorch with a 600MB RAM footprint
  • Uses react-native-executorch bridge for integration with mobile apps
  • Handles arbitrary text inputs (emails, documents, chat logs) and catches PII and sensitive content effectively

Why It Matters

On-device privacy filtering aligns the security guarantee with the use case, removing the paradox of sending sensitive data to a cloud API to check whether it is sensitive.