Vibe Coding Is Innovation. AI Writing Is Cheating. Make It Make Sense.

Colin Kaepernick is teaching kids in Prince George’s County how to use AI to tell stories. Not how to avoid it. Not how to fear it. How to use it.
In December 2025, Kaepernick launched Lumi Story AI in one of Maryland’s largest school districts. It’s a county where 66 percent of fourth-grade students were not proficient in reading, with the gap particularly wide for Black students and economically disadvantaged students. The platform helps kids create graphic novels, develop characters, and write narratives using AI as a creative tool. It supports over 50 languages; students have been creating in Haitian Creole, Japanese, Thai, and Arabic. Teachers have full visibility into every prompt and every interaction.
“We cannot just be consumers of the technology, we have to be builders of it,” Kaepernick said at the Largo High School launch. “We have to make sure that our communities are represented.”
And yet.
While Kaepernick is trying to democratize access to AI storytelling in underserved schools, the tech community has been losing its mind over “vibe coding.” The term was introduced by Andrej Karpathy in February 2025 to describe using AI to generate code without fully understanding it. Vibe coding was named the Collins English Dictionary Word of the Year for 2025. Microsoft says 30% of its code is now AI-generated. Y Combinator reported that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated. Anthropic’s CEO Dario Amodei said at the Axios AI+ Summit in September 2025 that “70, 80, 90 percent of the code written in Anthropic is written by Claude.”
The criticism of vibe coding is legitimate. Security flaws: API keys left in code, no input sanitization, naive authentication logic. Fragile debugging: non-engineers hit walls when even minor changes cause cascading failures. Canva’s CTO Brendan Humphreys put it bluntly: “No, you won’t be vibe coding your way to production—not if you prioritize quality, safety, security and long-term maintainability at scale.”
Fair enough. However.
Where’s that same measured criticism when it comes to AI writing? The discourse around AI-assisted writing has gone completely unhinged by comparison.
Students are being accused of cheating and disputing false positives. One reported: “I failed my first college assignment because of a false AI check. I have used the em-dash for the last 20 years, and evidently that’s an AI characteristic.” Literary agents declare that “if you are using generative AI to be a writer, then you don’t actually want to be a writer.” Publishers are blacklisting anyone who used AI “in any part of the writing, brainstorming, or editing process.”
People are getting their academic or professional credibility questioned, not because they cheated, but because they wrote too well.
One tool gets celebrated, however reluctantly, for “democratizing” software development. The other gets treated as an existential threat to authentic human expression.
The difference isn’t about the technology. It’s about who we think deserves access to which tools, and who we think should have to “earn” their way in.
Prince George’s County interim superintendent Shawn Joseph understands this. “The digital divide no longer is about access to devices, it’s about access to emerging skills,” Joseph said. “If schools avoid teaching AI, only the privileged students in our country will learn to use it critically outside of school. But, when we as teachers use AI responsibly in the classroom, we democratize access to power.”
There it is. Democratize access to power.
Vibe coding is accepted, however grudgingly, because it threatens to give non-engineers the ability to build software. That’s disruptive but ultimately good for business. More people building means more products, more startups, more innovation. The tech industry can monetize that disruption.
AI writing is rejected because it threatens to give non-writers the ability to communicate effectively. And that challenges something deeper: the idea that polished communication equals intelligence, that writing ability determines your value to society, that ideas should be judged by your ability to express them in a manner acceptable to a privileged few.
“Calling the use of AI ‘inauthentic’ or ‘lazy’ is about protecting a privilege disguised as merit,” one writer argued. “We live in a world where polished writing equals intelligence. Somehow, despite our many efforts to become more inclusive and less discriminatory, writing ability still determines the author’s value to society.”
The research on AI detection bias is damning. A Stanford study found that while AI detectors were “near-perfect” in evaluating essays written by U.S.-born eighth-graders, they classified more than 61% of TOEFL essays written by non-native English students as AI-generated. A remarkable 97% of the non-native speaker essays were flagged by at least one detector.
“These numbers pose serious questions about the objectivity of AI detectors and raise the potential that foreign-born students and workers might be unfairly accused of or, worse, penalized for cheating,” said James Zou, a professor at Stanford and senior author of the study.
The detectors work by measuring “perplexity,” which is essentially how predictable the writing is to a language model. Sophisticated, varied prose scores high; plain, predictable prose scores low. Non-native speakers typically score lower on measures like lexical richness, lexical diversity, and syntactic complexity. So the tools designed to catch AI cheaters end up flagging anyone who writes simply or uses a limited vocabulary. AI detectors learn to flag less complex writing because they see, over and over, that AI-generated writing is less complex. The bias becomes baked in.
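To make that mechanism concrete, here is a minimal sketch of perplexity-based flagging. The per-token probabilities and the threshold are invented for illustration, not taken from any real detector: the point is only that text a model finds predictable (low perplexity) gets flagged, which is exactly why simple, formulaic prose from non-native speakers trips the alarm.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token.
    Lower perplexity means the model found the text more predictable."""
    avg_neg_logprob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_logprob)

# Hypothetical per-token probabilities a language model might assign:
simple_text  = [0.60, 0.55, 0.50, 0.58, 0.52]  # common words, predictable phrasing
complex_text = [0.20, 0.05, 0.30, 0.02, 0.15]  # rarer words, varied syntax

THRESHOLD = 5.0  # hypothetical cutoff: below this, the detector cries "AI"

for name, probs in [("simple", simple_text), ("complex", complex_text)]:
    ppl = perplexity(probs)
    verdict = "flagged as AI" if ppl < THRESHOLD else "passes as human"
    print(f"{name}: perplexity {ppl:.2f} -> {verdict}")
```

Run it and the simple text lands well under the threshold while the complex text sails past, even though both lists could describe entirely human writing. The detector never sees authorship; it only sees predictability.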
Meanwhile, urban private schools boast smart classrooms, AI-driven assessment tools, and personalized learning paths. In contrast, many schools in underserved areas operate with outdated textbooks, limited electricity, and one shared computer for hundreds of students.
Kaepernick knows this. One of the big flaws that’s plagued AI programs in education has been higher error rates for nonwhite students. It’s a hurdle Kaepernick said Lumi specifically overcomes. The platform supports dozens of languages because the goal isn’t just teaching kids to use AI. It’s making sure communities that have been historically excluded from technology get to be builders, not just consumers.
“We want to make sure our students are prepared for the future,” Kaepernick said. The same future where AI is already embedded in resume screening, loan applications, housing eligibility, and every other system that determines who gets access to opportunity.
This isn’t about whether AI writing is good or bad. It’s about who gets to decide which tools are legitimate and which are cheating. It’s about whether we believe access to effective communication should be reserved for people who already have the education, resources, and connections to express themselves in traditionally acceptable ways.
Vibe coding’s critics worry about code quality and security vulnerabilities. AI writing’s critics worry about authenticity and merit. One is a technical concern that can be addressed with better testing and review processes. The other is a values judgment about who deserves to be heard.
As one analysis noted, being anti-AI in writing “is a kind of gatekeeping, and in the abstract that’s fine.” But “this protectionism generally occurs along the same contours of power that structure the rest of the world—which is to say that poor and working class people are likely to be already ‘excluded’ from the category ‘writer’ for a host of not-even-A.I.-related reasons.”
The tech world will figure out vibe coding. They’ll develop better testing, better review processes, better guardrails. The code will get better or it won’t, and the market will sort it out.
But the gatekeeping around AI writing won’t fix itself. It will continue to disproportionately punish students who don’t write in standard academic English, workers who didn’t have access to quality education, and anyone whose ideas don’t come pre-packaged in the polished prose that signals belonging to the right class.
I’ve watched tech bloggers I used to admire pivot their entire output to vibe coding experiments. Some of these same people have dismissed AI-assisted writing as inauthentic, as cheating, as beneath real creators. The hypocrisy is staggering. Apparently building software without understanding it is innovation, but communicating ideas without a graduate degree in rhetoric is fraud.
Here’s what’s actually at stake: if you can’t access, adopt, and effectively use AI, you’re not just behind. You’re locked out. Locked out of jobs that use AI for resume screening. Locked out of systems that determine your eligibility for housing, loans, opportunity. Locked out of an information environment that’s already saturated with AI-generated content whether you participate or not.
Colin Kaepernick is in a high school in Prince George’s County, showing kids how to tell their own stories with the same tools that privileged kids will use to get ahead. The question isn’t whether AI belongs in writing or coding or education. The question is whether you’re going to be a gatekeeper or someone who actually wants more people at the table.
I know where I stand.

