ByteDance's invisible watermark on Seedance 2.0 is security theater. Change my mind.
The invisible watermark vanishes if content is re-uploaded, and US users can't access the feature.
ByteDance has launched an invisible watermark feature for its Seedance 2.0 AI model, a move critics are dismissing as 'security theater.' The primary flaw is that the watermark is not persistent: it disappears entirely if the AI-generated content is downloaded and then re-uploaded to a platform, making it trivial for bad actors to strip the identifying marker. Furthermore, the feature is geographically restricted and notably unavailable to users in the United States, reportedly because ByteDance's own legal team did not approve its rollout there, hinting at unresolved regulatory risks.
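To see why a non-persistent watermark is so easy to lose, here is a minimal illustrative sketch. This is not ByteDance's actual scheme (which is undisclosed); it uses a hypothetical naive least-significant-bit (LSB) watermark and simulates the lossy re-encoding that platforms typically apply on upload:

```python
# Illustrative only: a naive LSB watermark, and why lossy
# re-encoding (as happens on re-upload) destroys it.

def embed_watermark(pixels, bits):
    """Hide each watermark bit in the LSB of one pixel value."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_watermark(pixels, n):
    """Read back the LSBs of the first n pixel values."""
    return [p & 1 for p in pixels[:n]]

def lossy_reencode(pixels, step=4):
    """Crude stand-in for platform re-encoding: quantize each
    value to a multiple of `step`, as lossy compression roughly does."""
    return [round(p / step) * step for p in pixels]

pixels = list(range(50, 66))        # toy 'image' data
mark = [1, 0, 1, 1, 0, 0, 1, 0]     # toy watermark payload

stamped = embed_watermark(pixels, mark)
print(extract_watermark(stamped, len(mark)) == mark)   # True: survives a direct download

reuploaded = lossy_reencode(stamped)
print(extract_watermark(reuploaded, len(mark)) == mark)  # False: the fragile mark is gone
```

Real invisible watermarks are more sophisticated than this, but the principle holds: any mark that is not robust to re-encoding disappears the moment content passes through another platform's upload pipeline.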
The update does nothing to address the larger controversies surrounding Seedance 2.0. ByteDance continues to withhold crucial information about the dataset used to train the model, leaving unanswered questions about potential copyright infringement and data sourcing. The company's response, a fragile watermark and a delayed statement, has done little to satisfy critics or Hollywood studios, who previously sent warning letters. The situation highlights the ongoing tension between rapid AI deployment and the need for robust, transparent safeguards for content provenance and intellectual property.
- The invisible watermark on Seedance 2.0 content is removed by simply re-uploading the file.
- The watermarking feature is unavailable to US users because ByteDance's legal team did not approve its rollout there.
- ByteDance still has not disclosed the training data used for the AI model, a major point of contention.
Why It Matters
It shows how superficial AI safety features can fail to address real concerns about copyright, transparency, and content provenance.