CleanɆrⱤoⱤ

The AI in the Room: Making Music in the Age of Machines

(a semi-sarcastic blog post with real consequences)

Remember when Artificial Intelligence was just a Warp Records comp and not the thing getting you kicked off a label?

Yeah. Me too.

Let’s talk about it.

Because we’re way past the era where AI was this mysterious, shadowy thing hovering on the edges of electronic music discourse. It’s here. It’s in your DAW. It’s in your plugins. It’s probably inside your favorite reverb chain, tweaking that pre-delay behind your back. AI isn’t just “coming”; it’s already been downloaded, updated, and tucked neatly into your Ableton sidebar while you weren’t paying attention.

This whole purity panic about AI in electronic music is hilarious when you realize we already spent decades trying to sound like machines. Remember Warp’s Artificial Intelligence compilation from 1992? Yeah, that was literally an invitation to explore “machine music for humans.” It was marketed as intelligent music made by humans emulating synthetic thought.

Fast forward to now, and suddenly it’s blasphemy if a plugin helps with your chord progression?

We were out here building fake robots with samplers, and now that real robots want to help, we’re clutching our MIDI cables like prayer beads.

Let me get this straight.
You’re mad about AI-generated music… while running 14 MIDI effects that are doing all the note math for you? You’re scared AI might create your sound… but you’ve been dragging samples into audio-to-MIDI tools for the past four years?

Let’s be honest here, especially for us in the niche:
Experimental electronic music is already one of the smallest and most misunderstood corners of the entire scene. We’re not pop stars. We’re not even ambient house influencers. We’re noise-slingers and synth-crawlers. Braindancers and patch abusers.

So when one part of that small world decides another part isn’t “pure” enough? It hurts more. It divides an already fragile ecosystem.

And over what?
A drum generator?
A smart EQ?

Meanwhile, we’re all still uploading 20-minute glitch collages in mono and calling it “post-human electro-ritual.”

Let’s not pretend this is about ethics. This is about taste. And taste, as always, is subjective… until it’s being policed by dudes who think generative music is only valid if it’s coded in SuperCollider on a Linux laptop with no GUI.

My process isn’t always traditional, but it’s always mine.

If you’ve been following the blog, you know Clean Error got hit with some AI-related backlash – label split, accusations, drama, etc. I won’t retell it here (read this post if you want the unfiltered version), but let’s just say: using tools that may involve machine learning does not mean I’m replacing myself with a push-button “make glitch” bot.

And if your issue is with the possibility that something might have been AI-assisted… well, congrats, your favorite VST synth probably already uses machine learning under the hood. Are you gonna cancel Serum too? Should we throw Max for Live in the trash because someone used a probability sequencer?

People love to talk about “honest music” while running 13 convolution reverbs and a dozen modulated automation lanes. You’re already collaborating with machines. You’ve been doing it for years. The only difference now is that the machine has a name and that name is AI, and you’re scared it might outproduce you.

Same.

Let’s not sugarcoat it: there’s something deeply unsettling about watching a tool you once considered a gimmick evolve into a legitimate collaborator.

What was once “that weird new plugin” is now composing convincing piano phrases, suggesting harmonic changes, and layering drum fills that are just off enough to be interesting. And maybe that’s what scares people most: AI is no longer just fast – it’s starting to be… creative. Or at least convincingly performative of creativity.

That’s the part I wrestle with.

Not because I think AI is going to “steal” my sound, but because it might eventually approximate it well enough to confuse people who don’t know the difference. And that hits close to home, especially when you’re already working in a genre (glitch, IDM, experimental electronics) that’s intentionally designed to sound mechanical, alien, or emotionless on the surface. We’ve always made music that mimics machines. Now machines are mimicking us right back.

And the scary part? They’re getting better fast. Like… eerily fast.

Give it a year, maybe two, and AI models could be cranking out glitchy, off-grid, stereo-smeared post-IDM tracks that sound like a blend of your favorite Warp-era artist and a Max for Live patch that’s been emotionally neglected for too long. And people won’t always care whether a human made it or not, because it sounds right. It ticks the boxes. It vibes. It passes the Turing Test for ears.

That’s the real comfort crisis. The realization that creativity isn’t just sacred anymore – it’s becoming accessible. And for some people, that feels threatening.

But for me? That’s kind of the thrill. That’s why I use and experiment with it as a creative tool. Not because I want to hand over authorship to a machine, but because I want to respond to it. I want to throw chaotic MIDI data into a black box and hear what breaks. I want to corrupt the AI’s logic with human inconsistency. I want to blur the line until it’s not about “who” made the sound, but why it feels like it matters.

We’ve never been in total control. The future isn’t about choosing between human and machine. It’s about coexisting inside the mistake – the glitch between intention and automation. That’s where Clean Error lives.

An up-and-coming artist who’s been consistently pushing thoughtful, glitch-forward production and experimental workflows is The Fellow Passenger. If you’re not already familiar, his output lands somewhere between dusty signal fragments and surgically chopped digital drift: basically, right in the pocket for anyone living in the IDM/glitch/post-techno space. Super inspiring stuff, both sonically and conceptually. He posted the video below roughly a year or two ago. Watch it.

Immediately. It’s a real-time exploration of working with AI-generated audio material inside Ableton Live 12 Suite, all tailored toward IDM and glitch production. Nothing crazy, just workflow.

Using the online AI audio generator Suno AI, The Fellow Passenger walks through how to generate raw material, chop it, recontextualize it, and breathe human intention back into it. What you get isn’t a one-click music generator – it’s raw sonic clay, meant to be manipulated and shaped into something personal. The results? Glitchy, broken, evolving in all the right ways.
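If you want a feel for what “chop it and recontextualize it” means outside the DAW, here’s a rough Python sketch of the idea (an illustration, to be clear, not his actual workflow): detect the transients in an AI-rendered file and cut it into slices you can drag back into a session and abuse. The filenames are placeholders.

```python
# Rough sketch of the "chop and recontextualize" step: slice an
# AI-rendered stem at detected transients so the pieces can be
# re-sequenced by hand. Filenames are placeholders; this is an
# illustration of the idea, not The Fellow Passenger's workflow.
import librosa
import soundfile as sf

AUDIO_IN = "suno_render.wav"  # hypothetical AI-generated stem

y, sr = librosa.load(AUDIO_IN, sr=None, mono=True)

# Find onsets (transients) to use as slice points.
onset_frames = librosa.onset.onset_detect(y=y, sr=sr, backtrack=True)
onset_samples = librosa.frames_to_samples(onset_frames)

# Cut between consecutive onsets and write each slice out.
bounds = list(onset_samples) + [len(y)]
for i, (start, end) in enumerate(zip(bounds[:-1], bounds[1:])):
    sf.write(f"slice_{i:03d}.wav", y[start:end], sr)
```

From there it’s all human: re-pitch the slices, reverse half of them, throw the grid away.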

What stood out to me wasn’t just the sound design (which slaps, honestly), but the mindset: AI isn’t framed as the “artist” – it’s the starting point. A spark. A palette. The human still has to do the digging. Still has to make it weird.

Huge love and respect to The Fellow Passenger, not just for putting out a thoughtful piece of content that demystifies the process, but also because his productions genuinely go off. His aesthetic sits right in that zone where melody fragments slip through distortion and everything feels like it’s been recovered from a corrupted drive, which, obviously, Clean Error approves of.

Plus, if you’re the type who likes to dig into project files and reverse-engineer workflows, he’s generously offering:

- The full Ableton Live 12 Suite project file
- AI-rendered sample packs from the video
- Access to all previous material via his Patreon

This video isn’t just a workflow demo; it’s an invitation to think differently about how sound gets made. To collaborate with the unpredictable. To make the machine part of your mess.

Shoutout again to The Fellow Passenger—respect for putting the work in and sharing it with the scene. We need more of this.

Let’s just set the record straight, once and for all.

Yes—Clean Error uses AI.
Kinda. Sorta. Sometimes. Maybe not the way you think.

I use smart tools. I use chaotic tools. And I’m not the only one: some artists on Clean Error and plenty of producers I know work this way too. Whether it’s Max for Live devices that evolve over time, or glitch generators from Glitchmachines, or even AI-assisted plugins from companies like Minimal Audio, we’re all leaning into tools that introduce variation, randomness, and unexpected results.

It’s not just about automation; it’s about influence.
We use sequencers that rewrite themselves. Modulators that don’t ask permission. Envelopes that fold unpredictably and sample chains that ignore the grid entirely. Some of these plugins are technically AI-assisted. Most are just cleverly designed systems built to break things in interesting ways.
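If “sequencers that rewrite themselves” sounds abstract, here’s a toy Python sketch of the concept (the concept, not any particular plugin or Max device): a 16-step pattern where every step fires on probability, and the pattern quietly mutates between bars.

```python
# Toy model of a self-rewriting probability sequencer: each of 16
# steps fires with its own probability, and after every bar the
# pattern nudges its own values, so it never asks permission to
# change. Illustrative only; not any specific device.
import random

STEPS = 16
pattern = [{"note": 36, "prob": random.random()} for _ in range(STEPS)]

def play_bar(pattern):
    """Return the (step, note) pairs that actually trigger this bar."""
    return [(i, s["note"]) for i, s in enumerate(pattern)
            if random.random() < s["prob"]]

def mutate(pattern, amount=0.1):
    """The sequencer rewriting itself: drift each probability,
    occasionally swap a note."""
    for s in pattern:
        s["prob"] = min(1.0, max(0.0, s["prob"] + random.uniform(-amount, amount)))
        if random.random() < 0.05:  # rare note substitution
            s["note"] = random.choice([36, 38, 42, 46])  # kick/snare/hats

for bar in range(4):
    print(f"bar {bar}:", play_bar(pattern))
    mutate(pattern)
```

You set the boundaries; the system misbehaves inside them. That’s the whole trick.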

None of these replace the artist. They just offer ideas. Options. Controlled chaos.

And every decision still filters through human intent: our ears, our doubt, our habits, our instincts…and yeah, sometimes our questionable caffeine-fueled 3AM patch decisions.

It’s not cheating.
It’s collaboration with the tools we chose to help us break the mold.

AI doesn’t write my tracks. It doesn’t finish my ideas. It doesn’t know my intent, and honestly, it probably wouldn’t like my drums.
But it does throw unexpected sounds into the room. It challenges form. It opens up doors I might not knock on otherwise.

I’m not outsourcing creativity. I’m designing with unpredictability.
That’s what experimental music has always been about: giving up some control in order to discover new sonic shapes. The difference now? The tools are smarter. And that’s not a problem, that’s a gift.

Let’s be honest:
You don’t cancel a delay pedal for creating space.
You don’t shame a granular synth for spitting out something you didn’t plan.
You don’t disown your favorite plugin just because it helped you stumble into a moment of beauty.

So why is it different when that plugin uses the word “AI”?

This purity test some people are applying to electronic music workflows is missing the point. All music tools are collaborators: they respond to what you put in, they change with context, they glitch when you least expect it. AI isn’t evil. It’s not a threat. It’s just one more layer of signal.

And yeah, it might get really good one day. It might even be better than us at replicating what we already do. So?

Let the machines play.
I’ll be over here resampling their mistakes into something they couldn’t imagine.

A seismic shift just hit the indie music world: Bandcamp, the indie artist sanctuary, has announced a policy banning music that is “generated wholly or in substantial part by AI” from its platform. They’ve made it clear they want to “put human creativity first” and keep the vibrant, messy, imperfect art that the community has long celebrated at the center of what Bandcamp stands for. This isn’t a half-step or a vague guideline buried in Terms of Service. Bandcamp’s team said point-blank that music which relies heavily on generative AI will not be permitted, and users can even report music they suspect falls into that category. They reserve the right to remove any music flagged on suspicion alone.

In a world where places like Spotify and Apple Music are flinching or offering partial “AI disclosure” routes, Bandcamp’s stance feels like a bold, community-first declaration. But for us at Clean Error, whose aesthetic inhabits the fractured, the glitchy, the broken, the recursive, there’s a deeper philosophical question underneath:

what does it mean to call music “authentic” when it already sounds like a glitch?

AI, Autonomy, and the Aesthetics of Error

A thread floating around online posed a couple of questions that have been hovering for a while:

“Doesn’t Clean Error already sound like AI?”
“Wasn’t the whole point to embrace brokenness, chaos, noise, glitch?”

Fair questions, and yeah, that’s kind of the whole thing.

But let’s be clear: just because something sounds unfamiliar, fractured, or machine-adjacent doesn’t mean it’s coming from a soulless source. This label was never about chasing trend aesthetics or mimicking generative tools. It’s been about building intentional systems of failure since 2013: sculpting the emotional residue inside noise, recursion, and structural collapse. That’s not artificial. That’s craft.

There have been some loud opinions from a few music artists who’ve spoken against the label or this approach directly; they know who they are. And honestly? They’re entitled to feel however they feel. But this blog isn’t here to feed drama or turn critique into content. That kind of energy doesn’t belong here.

What does belong here is clarity: Clean Error isn’t trying to “pass” as AI.

We’re interrogating what happens when the human experience fragments, and what it sounds like when you embrace the decay instead of resisting it. That’s the sound we chase. If it confuses people? Good. It’s meant to glitch their assumptions.

Clean Error has never tried to pretend we’re some soulless machine spitting out loops with no intent. But we do embrace musical fragmentation, non‑linear structures, and recursive patterns that mirror the way AI might hallucinate. That was always intentional: not to fake “AI music,” but to explore breakdown as form, and to make error itself a language. That’s far from the shallow slap of algorithmic slop – it’s a deliberate artistic choice rooted in feeling, memory, glitch mythology, and texture.

Even if AI detection tools can’t parse intent, the intent behind Clean Error’s expression is deeply human. It’s the sound of internal states, not training sets.

But Wait … How Will Bandcamp Actually Detect AI?

Here’s where things get murky for everyone:
Bandcamp’s policy doesn’t lay out some magic detector. It’s not like they’ve unleashed a binary “AI or not” machine that scans your WAVs and spits out a verdict. What they’ve actually done is tie their enforcement to reporting and review, and they’ve basically said they can remove content on suspicion.

So aside from human listeners judging it by ear, the question becomes:
Can you tell if something was made with AI?

The technology to detect AI influence is still unreliable and often can’t distinguish between intentional glitch and generative machine output. Many anti‑AI classifiers mislabel experimental or weird music as AI simply because it doesn’t look like conventional human music. That’s dangerously subjective.

Pros

• Bandcamp is defending human artistic expression and protecting real creators from being drowned under a flood of shallow AI content 🎉.
• It pushes back against impersonation and plagiarism, big problems in the AI era.
• It says artists matter more than algorithms, and that community trust is worth defending.

Cons

• “Substantial part” is vague; tools could remove music that’s legitimately human‑made but sounds unconventional.
• Experimental labels like Clean Error (or any glitch/IDM label) risk being misunderstood by moderators or algorithms because our aesthetic deliberately defies norms.
• It conflates AI creation with non‑traditional structures, which is unfair and subjective without true detection transparency.

So What Does This Mean for Clean Error?

Despite critics calling Clean Error “fake” or “inauthentic,” there’s a distinction worth shouting about: our art is not machine hallucination; it’s intention wrapped in broken patterns. The difference is subtle but critical.

AI outputs are often predictable: derivative, stitched together from patterns in the training data. Clean Error is about disintegration of structure on purpose. It’s the sound of what exists between intention and noise, not imitation of existing classics or algorithmically generated filler.

Even if the music sounds like AI, that’s not the same as being generated by AI. If anything, that’s the whole point of glitch as a genre: to listen for the signal in the noise. And sometimes that signal isn’t smooth or comfortable. That’s not error – that’s expression.

Will AI Tools Ever Be Good Enough to Fool the Ban?

Honestly, what if the music becomes so advanced that detection systems can’t tell the difference between good creative expression and AI‑driven output? That’s the weird paradox: tools might get too human‑sounding, and platforms will still need to judge intent.

But here’s the twist: Clean Error’s authenticity isn’t defined by detection technology. It’s defined by community, context, intent, history, mythology, and human choices made at every step.

Bandcamp’s ban highlights a key tension in 21st‑century music:
is it about how the music was made – or how it feels?

For glitch, the answer always leans toward feeling, texture, process, imperfection.

The Flashbulb’s AI Detection: What It Detects and How It Works

One of the most interesting counterpoints in this whole AI debate comes from Benn Jordan, aka The Flashbulb, a producer deeply steeped in IDM, glitch, breakcore, experimental sound, and music tech. Jordan isn’t just an artist with a strong creative voice; he’s someone who’s tried to technically grapple with the fallout of AI scraping and generative audio.

In early 2025, Jordan released video breakdowns and experiments unveiling an algorithm he developed to detect AI‑generated music, at least in the context of platforms like Suno that generate tracks from text prompts. According to reports about his work, this method has been shown to distinguish AI‑generated tracks from human‑made ones with very high accuracy by leveraging unique digital artifacts in the audio.

So how does this detection actually work? Jordan’s idea hinges on a surprising clue:

“the way generative AI models handle compressed audio data.”


Many of the large AI music tools today were trained on massive amounts of scraped content taken from platforms where audio is stored in lossy formats (like MP3) rather than pristine, uncompressed formats. That means AI‑generated music often carries subtle signatures or patterns linked to those compression artifacts – things that most listeners can’t hear but that a classifier can pick up. Jordan’s algorithm exploits this by analyzing what’s missing or altered in the waveform in ways that hint at algorithmic generation rather than human composition.
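Jordan hasn’t open-sourced his classifier, so take the following as a back-of-the-napkin Python sketch of the general idea, not his method: MP3-style encoders shave off most energy above roughly 16 kHz, so a supposedly lossless file with a suspiciously empty top end carries a lossy footprint. The cutoff and threshold below are invented for the demo, and the filename is hypothetical.

```python
# Back-of-the-napkin illustration of the compression-footprint idea:
# lossy encoders typically discard energy above ~16 kHz, so a
# "lossless" WAV with almost nothing up there may have passed through
# (or been learned from) lossy audio. NOT Benn Jordan's algorithm;
# the cutoff and threshold are invented for this demo.
import numpy as np
import librosa

def high_band_ratio(path, cutoff_hz=16000):
    y, sr = librosa.load(path, sr=None, mono=True)
    spec = np.abs(librosa.stft(y))           # magnitude spectrogram
    freqs = librosa.fft_frequencies(sr=sr)   # bin center frequencies
    high = spec[freqs >= cutoff_hz].sum()
    return high / (spec.sum() + 1e-12)       # share of energy above cutoff

ratio = high_band_ratio("mystery_track.wav")  # hypothetical file
print(f"energy above 16 kHz: {ratio:.4%}")
if ratio < 0.001:  # arbitrary demo threshold
    print("suspiciously lossy footprint; worth a closer look")
```

A real classifier looks at far subtler artifacts than a single energy ratio, but the principle is the same: the encoder leaves fingerprints, and models trained on its output inherit them.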

What’s valuable about Flashbulb’s approach isn’t just the possibility of detection – it’s the reminder that real humans are thinking deeply about how music is made, what it means to be creative, and how technology intersects with expression. He’s exploring ways to reveal when music isn’t just AI‑generated but is also being monetized under false pretenses, which is exactly the kind of problem Bandcamp is trying to avoid with its policy shift.

And here’s the really meta twist – Jordan’s method doesn’t rely on some mystical human quality or emotional resonance. It’s technical and grounded in audio physics, which ironically makes the case that human‑made music and machine‑made music do leave different footprints – even if both can sound glitchy or abstract. That distinction goes straight to the heart of Clean Error’s ethos: intent and process matter. Authentic noise isn’t the same as algorithmic noise, even if they sometimes sound similar.

Grant Wilson-Claridge’s Warning: Don’t Freeze the Culture

When Grant Wilson-Claridge (Rephlex Records) stepped into the discussion around Bandcamp’s AI ban, he didn’t argue from hype or panic; he came in with precision. His response breaks open what’s actually happening here:

“Bandcamp isn’t just banning spammy AI knockoffs – they’re risking a ban on whole classes of music that couldn’t exist without experimental tools.”

And that hits very close to home for Clean Error.

Wilson-Claridge draws a line between “derivative generation” (think AI trained to clone Aphex Twin or Burial) and “instrumental computation” (think systems you build yourself, where you shape behavior and process, like an instrument).

That’s exactly the kind of sonic space Clean Error lives in. We’re not pressing buttons and letting machines go – we’re designing systems, patching behaviors, fracturing structures intentionally. This isn’t drag-and-drop “make me a glitch track.” It’s recursive failure with emotion. Audio crafted by hand inside chaos.

He points out that tools like Max for Live, algorithmic sequencers, or AI‑driven transformations aren’t replacing human creativity … they’re new instruments. Just like samplers and drum machines were once dismissed as fake, lazy, or robotic… until they became essential.

“Some music simply could not exist before now,” he says —
“Not because musicians were lazy, but because the tools were insufficient.”

That resonates deeply with the Clean Error approach:
We’re not hiding behind the aesthetic of glitch. We’re using unstable systems, cracked patterns, and recursive logic to express things that traditional methods can’t capture. These are emotional messages sent through broken transmissions – not content churned out for clicks.

Wilson-Claridge’s warning is clear:

This isn’t protection. It’s cultural conservatism dressed as ethics.

And if we start punishing artists for using new tools just because those tools sound “non-traditional,” we’re not defending music – we’re shrinking its future.

And just so there’s no confusion: I’m real. I run Clean Error. I write this. I make music. The label’s music is real. I build this label with actual hours, busted gear, and whatever emotional bandwidth is left after rendering 32‑bar stems through a dying GPU. This isn’t a ghost label or a content mill – it’s a community of real artists, making deeply intentional work that might sound alien, broken, or post-human… but it’s ours.

To those who hate AI: I hear you. I really do. I understand why the flood of synthetic mimicry feels like a threat. But there’s a real difference between AI-generated (the prompt-fed, zero-input, churn-factory stuff) and AI-assisted, where the artist is still in full control, using tools to stretch the edge of what’s musically possible. Not all strange music is slop, and honestly, a lot of it was slop long before AI showed up. Let’s call that Organic Slop™. Still human. Still weird. Just not pretending to be something it’s not.

At Clean Error, we stay rooted in practice, intent, and respect for the sound, the structure, and the people behind it. Always human. Even when it breaks. Especially when it breaks.
