A small lab studying
AI-generated music.
We're building a detector that distinguishes Suno, Udio, and ElevenLabs tracks from human performances. The work in progress is public, the limits are labelled, and the iOS and Android apps let you try what we have today.
How we work
AI-music detection is an open research problem. We don't pretend otherwise: we split the task into two pillars and tell you which one produced each verdict.
Platform identification
Links from Suno, Udio, ElevenLabs, AIVA, Soundraw, Mubert, Boomy, Loudly, or Beatoven resolve to an instant 100% verdict: those platforms host only AI-generated music, so no inference is needed.
Audio-signal model
Apple Music tracks and mic scans go through our audio classifier. The current research model was trained on speech and is biased toward flagging real music as AI; a music-native MERT replacement is in active training.
Stem-level analysis
For ambiguous verdicts we'll split audio into vocals, drums, bass, and other stems and analyse each one separately. On the roadmap after v2 ships.
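The plan above, separate first, then score each stem with the classifier, can be sketched as a small routing function. The `separate` and `classify` callables stand in for a source-separation model and our audio classifier; both, and the function shape itself, are assumptions for illustration.

```python
from typing import Callable

STEMS = ("vocals", "drums", "bass", "other")

def stem_level_scores(
    separate: Callable[[bytes], dict[str, bytes]],
    classify: Callable[[bytes], float],
    audio: bytes,
) -> dict[str, float]:
    """Split a track into four stems and score each independently.

    Per-stem scores matter because a track can mix sources: one
    clearly synthetic stem (say, an AI vocal over a human beat)
    tells a different story than a uniformly synthetic track.
    """
    stems = separate(audio)
    return {name: classify(stems[name]) for name in STEMS}
```

A track-level verdict could then aggregate these however the evidence warrants, for example taking the maximum so that one confidently synthetic stem is enough to flag the track.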
Current state
Transparency note: the current audio model frequently labels real music as AI. We surface that in the app and on every ambiguous verdict. The v2 model in training targets ≥ 85% accuracy across Suno, Udio and ElevenLabs.
What we promise you
Honest about every result
Every verdict shows how sure we are and why — whether we’re certain (link from an AI-only platform) or still learning (mic or Apple Music). No “99.9% accurate” marketing we can’t back.
Your audio stays yours
Songs you scan are analysed and deleted immediately. Your scan history lives only on your phone — never uploaded. Read the full privacy policy for the specifics.
The research is public
Every model version, every accuracy number, every limit — we publish it in The Lab. If something changes, so does the page. No vapourware.
Contact
Support, feedback, press, research collaboration — msquaregiza@gmail.com.