Image & Video

These days, is it rude to ask in an announcement thread if new code/node/app was vibecoded? Or if the owner has any coding experience?

A viral Reddit thread asks if it's rude to question whether new AI tools were coded by LLMs like Claude or GPT.

Deep Dive

A viral discussion on the r/ComfyUI subreddit is grappling with a new etiquette dilemma in the age of AI-assisted development: is it rude to ask whether a newly shared tool was 'vibecoded'? The term refers to code generated entirely by large language models (LLMs) such as Anthropic's Claude, OpenAI's GPT, or Google's Gemini. The original poster, a member of the AI workflow community, expressed hesitation about asking developers directly how their code was written, citing past experiences with downvotes and snarky replies. The reluctance stems from respect for traditional coders, even as the poster acknowledges that many LLM-generated nodes now perform functions beyond what skilled developers had previously engineered.

The core tension lies between innovation and reliability. Over the past six months, the barrier to creating ComfyUI nodes—custom components for the popular node-based AI image generation tool—has collapsed. Users can now describe a function to an LLM and receive working code. However, this raises practical concerns for adopters: Will an AI-coded node corrupt a Python virtual environment (venv) with conflicting dependencies? Will the developer, who may lack deep programming knowledge, provide updates or fix bugs? The community is split, with some viewing the question as a necessary quality check and others seeing it as dismissive of new, more accessible forms of creation. A linked thread highlights how tools like Claude have made node creation 'very fun/easy,' democratizing development but also flooding the ecosystem with tools of uncertain provenance and longevity.
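The dependency worry above is testable in practice. One hedged sketch, not tied to any specific node: snapshot the installed package versions in an environment before installing an unvetted custom node's requirements, then diff afterwards to see exactly what was added, upgraded, or downgraded. Everything here uses only the Python standard library; the workflow around it (where the `pip install` happens) is illustrative.

```python
# Sketch: audit what an unvetted custom node's install would change
# in a venv. Uses only the standard library (Python 3.8+).
from importlib.metadata import distributions


def snapshot():
    """Return {package_name: version} for the current environment."""
    return {d.metadata["Name"]: d.version for d in distributions()}


before = snapshot()
# ... run `pip install -r <node>/requirements.txt` here ...
after = snapshot()

# Packages that the install added or changed versions of.
changed = {
    name: (before.get(name), after[name])
    for name in after
    if before.get(name) != after[name]
}
print(changed)
```

If `changed` pins a core package (e.g. `torch`) to a different version than the rest of the workflow expects, that is the kind of conflict the thread's commenters are worried about; running the check inside a throwaway venv first avoids corrupting the main one.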

Key Points
  • The term 'vibecoded' refers to software created entirely by prompting LLMs like Claude or GPT, bypassing traditional coding expertise.
  • Community concerns focus on dependency management, bug potential, and long-term maintenance for AI-generated tools in critical workflows.
  • The debate highlights a growing divide between celebrating democratized AI tool creation and ensuring system stability for end-users.

Why It Matters

As AI-generated code becomes mainstream, professionals must navigate new risks in software dependency and maintenance without stifling innovation.