AI Slop Inside

The Sladge

Responsible AI use starts with transparency.

A self-declaration badge for projects with heavy LLM use. Because shipping unchecked AI slop without warning is plain rude.


The problem

LLM-generated code looks plausible, especially to inexperienced developers. However, it routinely contains subtle bugs, unchecked assumptions, and confident-sounding nonsense. A human implementing their own design catches problems during implementation. An LLM skips that feedback loop entirely.

The result is code that seems to solve the problem but is buggy and unmaintainable. Without declaring the slop, other humans find out how bad the code is the hard way, when they take it upon themselves to review or maintain it. In a shared codebase, that someone is your colleague. In open source, it's a stranger who trusted your contribution.


Two honest paths

In the age of responsible AI use, you have exactly two ethical options:

100% Vibe Code

Go full AI. Ship the slop. Few will judge you for vibe coding a quick PoC. But warn people. Slap on the sladge and let readers know: "here be dragons - you are probably best off maintaining this with an LLM, and some work may be required before it is production ready." Honest and upfront - it lets others know what they are dealing with.

Personal Responsibility

Use LLMs where you see fit: autocomplete, drafts, boilerplate. But you are personally responsible for every line that ships. Just as with code you wrote yourself, you review it before asking others to lend their eyes. You test it. You own it. No badge needed.

What's not an option: sharing LLM output without review and presenting it as your own work. That's not "using AI" - that's offloading your responsibility onto others. At organizations that care about code quality, slop PRs tend to turn into a distributed prompt-writing exercise. If meetings are a time-waste multiplier, so are five people gradually requesting changes on an LLM-generated PR. Many people do not trust LLM-generated code, and being upfront about having used an LLM may ironically make them trust you more, because it shows critical thinking.


Use the badge

Add the sladge to your README, repo, or project page to honestly signal heavy LLM involvement.

Markdown:

```markdown
<!-- Add to your README -->
[![AI Slop Inside](https://sladge.net/badge.svg)](https://sladge.net)
```

HTML:

```html
<a href="https://sladge.net">
  <img src="https://sladge.net/badge.svg" alt="AI Slop Inside">
</a>
```
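If your README uses reference-style Markdown links, the same badge (same URLs as above) can be written as a sketch like this, keeping the link definitions out of the prose:

```markdown
[![AI Slop Inside][sladge-badge]][sladge]

[sladge-badge]: https://sladge.net/badge.svg
[sladge]: https://sladge.net
```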

The point

This isn't anti-AI. LLMs are useful tools. But tools require responsibility from the person wielding them.

If you let an LLM write your code and you didn't verify it, you didn't "use AI to be more productive." You generated technical debt and passed the cost of cleaning it up to someone else.

The sladge exists so that "someone else" at least gets a heads up.


FAQ

Is this serious?

Semi-serious. The problem is real, and the tone is deliberately blunt. If it makes one person think twice before submitting unverified slop as real work, it worked.

Why was this site made?

Seeing people start to consider building Chrome extensions for marking "known AI users" made me want to contribute something that hopefully encourages discussion about responsible AI use in group collaboration, and motivates people to self-declare vibe-coded work before others do it for them.

Do I need this if I just use Copilot for autocomplete?

No. If you're reading, understanding, and testing every suggestion before it ships, you're taking personal responsibility. That's path two. The sladge is for when the AI is doing the thinking and you're just the delivery mechanism.

What counts as "heavy LLM use"?

In general, if an LLM is doing most of the actual coding and all you are doing is explaining what you want, that is heavy LLM use. The litmus test: can you explain the changes you made in a code review without first looking at the diff?

Isn't this just shaming people?

It's the opposite. It's giving people a dignified (if slightly sarcastic) way to be upfront. Few will shame you for experimenting with AI, but trying to sell off slop as real work actively hurts your peers.

But LLMs allow me to ship my app in one day!

Yes, and you will probably also get pwned, and/or botch the infrastructure and accidentally delete it all. It turns out vibe-coded apps are often insecure, but time-to-market does matter. What matters more to you: money, or your responsibility to your end users? This author believes LLMs have a strong future for building frontends and boilerplate, but you are playing with fire if you trust vibe-coded code to protect your business - especially if you don't review it.

This site doesn't address the copyright issues of using LLMs to generate code

This is a fight worth fighting, though it sadly looks like a lost cause: copyright simply doesn't matter when infringing on it benefits powerful entities, and that sucks. To make a clear point, however, this site focuses specifically on the responsibility not to have your peers unknowingly deal with vibe-coded code.

When I am honest about LLM use people just ignore my "work"

You are probably dealing with someone who either wants nothing to do with LLM-generated code, or who sees your admission of LLM use as confirmation of why they already didn't want to touch your work. You are better off ignoring them for now and working on your own developer skills. Try again later. Even Microsoft - currently under fire for allegedly overusing LLMs - acknowledges that using LLMs properly takes skill and seniority. If you can't understand why peers dislike your LLM code, you aren't necessarily dumb or bad, but your programming skill may need more work.

