AI Bytes Newsletter Issue #69
When Students Outsource Thinking | 700 Humans Pretending to be AI | Thom Yorke's AI Music Reality Check | Google quietly paused AI-powered "Ask Photos" search feature | Rate Limit Hell with Claude 4 | Google I/O Recap | AI agents are failing, not because they don't work, but because we don't.

Welcome to this week's AI Bytes Newsletter! As we navigate the first week of June, the AI landscape continues its relentless evolution, bringing both breakthrough innovations and sobering reality checks. This week, we're exploring everything from the quiet revolution of local AI models to the spectacular collapse of a $1.5 billion "AI" startup that turned out to be 700 Indian developers pretending to be bots.
Whether you're a business leader looking to leverage AI for competitive advantage, a developer navigating the complex landscape of AI tools, or simply someone interested in how these technologies are reshaping our world, there's something in this issue for you.
We'll take a look at the growing concerns about tech enfeeblement, examine the Model Context Protocol (MCP) revolution that's changing how AI systems connect, and unpack the heated debate around AI in music that has Thom Yorke calling it a "tech-bro nightmare future."
Let's dive into the heart of what's happening in AI right now.
The Latest in AI
OpenAI and n8n Embrace MCP: The Protocol That's Becoming the Standard
AI agents are evolving fast. They are not just generating text anymore. They are reasoning, planning, and taking real actions. But to do that in the real world, they need access to tools, APIs, and services. And right now, that access is a mess.
Every integration is custom. Every new system needs its own connector. It slows everything down and makes scaling painful.
The Model Context Protocol (MCP) changes that.
A Shared Language Between Models and Services

MCP gives AI models a standard way to discover and use external tools. Instead of hardcoding how a model talks to each API, MCP defines a consistent format. The model can find out what tools exist, what inputs they need, and how to use them.
No hand-coded instructions. No special wrappers. Just one clear protocol.
This means your AI agent can connect to different systems dynamically. It can adapt to new tools as they become available, without needing you to rebuild the plumbing every time.
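To make that concrete, here is a minimal sketch of what publishing a tool over MCP can look like, using the FastMCP helper from the official MCP Python SDK. The server name, tool, and return value are invented for illustration; the point is that any MCP-aware agent can discover and call the tool without custom glue code.

```python
# pip install "mcp"  -- the official Model Context Protocol Python SDK
from mcp.server.fastmcp import FastMCP

# The server name is what clients see when they discover this endpoint
mcp = FastMCP("invoice-tools")

@mcp.tool()
def lookup_invoice(invoice_id: str) -> str:
    """Return the status of an invoice by its ID."""
    # A real server would query your billing system here;
    # this canned response is just for illustration.
    return f"Invoice {invoice_id}: paid"

if __name__ == "__main__":
    # Serve over stdio; agents discover the tool via the standard
    # tools/list call and invoke it via tools/call.
    mcp.run()
```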
Anthropic, OpenAI and n8n Are Leading the Way
OpenAI recently added MCP support to its Responses API. That makes it possible to connect GPT models to any MCP-compliant server using just a few lines of code.
And this is not a one-off experiment. OpenAI joined the MCP steering committee. That tells us they see MCP as part of the long-term infrastructure.
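Here is roughly what that looks like with the OpenAI Python SDK, based on the remote MCP tool type announced for the Responses API. The model name, server label, and server URL below are placeholders; you would point server_url at your own MCP endpoint.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",  # placeholder; any Responses API model
    tools=[
        {
            "type": "mcp",                             # remote MCP server tool
            "server_label": "my_tools",                # hypothetical label
            "server_url": "https://example.com/mcp",   # your MCP endpoint
            "require_approval": "never",
        }
    ],
    input="Which tools do you have available, and what do they do?",
)

print(response.output_text)
```

The model pulls the server's tool list on its own and decides when to call each tool; there is no per-tool wiring on your side.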
At the same time, n8n has rolled out both server and client support for MCP. That is a big deal.
On the server side, you can turn any n8n workflow into an MCP endpoint. Now your automations can be discovered and used by AI agents.
On the client side, you can connect to other MCP servers and bring those tools into your workflows.
This creates a two-way loop. Agents can use workflows, and workflows can use agents. Everything speaks the same language.
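To see the client side in code rather than inside n8n's UI, here is a rough sketch using the MCP Python SDK's SSE client to connect to a server and list its tools. The URL is hypothetical; you would substitute whatever endpoint your n8n MCP trigger (or any other MCP server) exposes.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Hypothetical endpoint; substitute the URL your MCP server exposes
    url = "https://your-n8n-host.example.com/mcp/sse"
    async with sse_client(url) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```

From there, session.call_tool() invokes whichever workflow-backed tool the agent needs.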
Why This Actually Matters
If you are building serious AI systems, you already know that integration is one of the biggest hurdles. MCP removes that barrier.
Instead of managing dozens of brittle connections, you get one standard interface. Your agent can discover what is available and start using it right away. That is the difference between hardcoding everything and building a system that scales.
MCP turns your AI from a disconnected engine into a fully integrated part of your tech stack.
The Builder.ai Collapse: When "AI" Turns Out to Be 700 Indian Developers
Builder.ai Wasn't AI. It Was 700 People Writing Code.
A British startup called Builder.ai just fell apart. For years, they said they had a tool that used AI to build custom apps. That wasn't true.
Instead, they had more than 700 developers in India writing the code by hand. There was no real AI behind it.
What They Claimed
Builder.ai said they had a system powered by something called "Natasha," a supposed neural network. You'd describe the app you wanted, and Natasha would do the rest. No-code, AI-built apps. At least, that was the pitch.
They raised over $450 million from investors like Microsoft, the Qatar Investment Authority, and the World Bank's IFC.
What Actually Happened
The whole thing was manual. Customer requests were sent to a team overseas. Developers did all the work. The company kept up the lie for eight years.
The truth came out thanks to Linas Beliūnas, who posted the details on LinkedIn. Not long after, a lender called Viola Credit pulled $37 million out of Builder.ai's accounts. The company couldn't pay its workers. It shut down.
Why It Matters
This isn't just about one company faking AI. It shows how big the gap is between marketing and reality in the tech world. Builder.ai sold a story investors wanted to hear. And it worked… for a while.
The problem? No one checked under the hood.
If a company can fake AI for nearly a decade, how many others are doing the same thing? It's a reminder that real automation is hard. Human work is still essential in a lot of places, no matter what the buzzwords say.
What Comes Next
Regulators are now looking into how Builder.ai marketed itself. It might lead to stricter rules for how companies talk about AI.
This story should be a wake-up call. Don't just believe the pitch. Ask questions. Look for proof.
Claude 4 Arrives with Long-Running Tasks and a Rate Limiting Reality Check
Anthropic dropped Claude 4 models this month, and they're genuinely impressive. Claude Opus 4 and Claude Sonnet 4 can work continuously for several hours on complex tasks. This isn't just marketing speak. These models can maintain context and work through multi-step problems that would have broken earlier versions.
But here's the thing that nobody wants to talk about: Anthropic's rate limiting has become a user experience nightmare. The company was forced to launch a $200-per-month "Claude Max" plan after months of user complaints about hitting limits within minutes of starting work. Users were reporting limits after just a few messages, then being forced to wait 2-3 hours before they could continue.
The irony is thick. Anthropic builds models with massive context windows that can process entire codebases or lengthy documents, but then limits usage so aggressively that you can't actually use those capabilities. It's like selling a sports car with a speed limiter set to 25 mph.
The new Max plan offers 20 times more usage than the Pro plan, which tells you everything you need to know about how restrictive the original limits were. Even developers using tools like Cursor IDE were experiencing complete stops, with the AI refusing to work mid-conversation. One user summed it up perfectly: "The limits are so bad that even if Claude Sonnet 3.7 was the only AI in the world, I would rarely reach for it, because it's so frustrating."
This isn't just about Anthropic. It's a preview of what happens when AI capabilities outpace infrastructure. The computing costs for running these models, especially with longer contexts, remain high. Companies are caught between user expectations for unlimited access and the financial reality of providing that access sustainably.
The competitive response was swift. OpenAI's $200 Pro plan promises "unlimited" access, though we'll see how that holds up under real usage. The pricing war reflects the resource-intensive nature of state-of-the-art AI models. Both companies are trying to satisfy power users while keeping their services financially viable.
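If you are hitting those caps through the API rather than the chat apps, the practical workaround is still old-fashioned client-side backoff. Here is a rough sketch with the Anthropic Python SDK; the model ID, retry count, and delays are arbitrary placeholders, not a recommendation from Anthropic.

```python
import time

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_claude(prompt: str, retries: int = 5) -> str:
    delay = 2.0
    for _ in range(retries):
        try:
            message = client.messages.create(
                model="claude-sonnet-4-20250514",  # placeholder model ID
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return message.content[0].text
        except anthropic.RateLimitError:
            # Back off and retry instead of hammering the endpoint
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Still rate limited after retries")

print(ask_claude("Summarize the Model Context Protocol in two sentences."))
```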
What are your thoughts? Reach out: [email protected]
Must Watch Videos
Must Read Articles
Mike's Musings
Ethical and Real-World Implications
When AI Writes the Apology
A student used AI to write a research paper. When the professor caught it and gave her a failing grade, she followed up with an AI-written apology.
That wasn't the headline. The real story? A lot of people (15 million, to be exact) saw the post and started asking bigger questions.
What does it mean when students use AI not just to cheat, but to think for them?
How the Professor Caught It
The professor didn't need fancy software. He knew the student's voice from class. The paper didn't sound like her.
Still, he needed more than a gut feeling. So he checked the citations. They looked real, but the quotes were fake. The page numbers didn't match up. The AI made stuff up and made it sound legit.
Then he checked the document history. One minute the paper didn't exist. The next minute it was done. That sealed it.
Most AI detectors still aren't reliable. One professor failed an entire class because ChatGPT falsely said it wrote their papers. It didn't.
So right now, spotting AI is part instinct, part digging, and part luck.
When Everything Starts to Sound the Same
The big issue isn't plagiarism. It's what students are giving up when they use AI for everything.
Writing is thinking. When students hand that over to a machine, they're skipping the hard part: learning how to build arguments, work through ideas, and find their own voice.
And when they also let AI write the apology? That's not just skipping the work. It's skipping responsibility too.
Researchers call this an "authenticity crisis." If AI can sound human, and humans start sounding like AI, how do we know who's really thinking?
The Numbers Tell the Story
AI use is up… way up.
In 2024, 66% of undergrads used AI. In 2025, it hit 92%. Even more worrying: 88% used it for graded work. And 18% copy-pasted AI text into their papers.
Meanwhile, only 36% of students say their schools are helping them use AI responsibly. So they're using it, but they're not being taught how or when to use it well.
That's a problem.
Blue Books Are Back
Some schools are going old-school. Literally.
UC Berkeley, Texas A&M, and others are ordering way more blue books for handwritten exams. It's their way of fighting back against AI use.
It works, for now. But it also limits what students can write. And it doesn't teach them how to work with AI, which is what they'll need in the real world.
Tools vs. Crutches
Think about how people use GPS. It's helpful. But people who rely on it too much forget how to get around without it.
Same goes for AI. It's a good tool. But if students lean on it for everything, they miss out on building skills like critical thinking and clear writing.
In the long run, that's a bigger loss than a bad grade.
AI at Work Isn't the Same as AI at School
Yes, professionals use AI. But they use it with judgment.
A lawyer might ask AI to draft a contract, but they still need to know the law. A coder might get help from AI, but they still need to understand what the code does.
That kind of smart use only works if you know your stuff. If students never learn the basics without AI, they won't be ready to use it well later.
What Should Change?
We can't just ban AI. And we shouldn't go fully analog either.
Instead, schools should rethink how they assess learning. A few ideas:
Focus on skills, not just final products.
Ask for personal or local examples AI canât fake.
Use real-time discussions and group work.
Most of all, teach students how to use AI the right way. That means showing them when to ask for help, and when to think things through on their own.
Why This Matters
The professor who failed that student wasn't just following rules. He was standing up for something: the idea that human thinking still matters.
The question now isn't if AI will change education. It already has. The real question is how we'll shape that change, so it helps students grow instead of replacing the work that helps them learn.
Because in the end, it's not just about what they write. It's about who they become.
When Artists Fight Back Against the Machine

"Itâs a weird kind of tech-bro nightmare future... the economic structure is morally wrong": Thom Yorke says AI steals from artists and devalues humanity | (Image credit: Getty Images)
Thom Yorke called it a "weird kind of wanky, tech-bro nightmare future." He's talking about AI music generators like Suno and Udio, but he could be describing half the AI industry right now.
The Radiohead front man didn't mince words in a recent interview. AI music tools "analyze and steal and build iterations without acknowledging the original human work." They create "pallid facsimiles" while the "economic structure is morally wrong."
Here's what makes Yorke's critique different from the usual AI panic: he's not afraid of the technology. He's pissed about the business model.
The real issue isn't that AI can make music. It's that companies trained these models on thousands of copyrighted songs without permission, then claimed "fair use" when artists complained. Imagine someone photocopying your book, feeding it to a machine that writes similar books, then selling those books while claiming they don't owe you anything.
That's exactly what's happening. Suno and Udio are currently fighting the RIAA in court over this exact practice. Bloomberg reports both sides are considering a settlement that would involve licensing deals. Translation: the labels might get paid, but individual artists probably won't.
Yorke signed an open letter with 11,500 other creators demanding tech companies stop training on unlicensed work. The response from Silicon Valley has been predictably tone-deaf: more legal arguments about fair use and innovation.
But here's the thing that should worry everyone, not just musicians. If we accept that training AI on creative work without compensation is fine, we're setting a precedent that human creativity has no economic value. That's not just bad for artists… it's bad for anyone whose work involves thinking, writing, or creating.
The irony is thick. Tech companies that built their fortunes on intellectual property are now arguing that other people's intellectual property should be free for the taking. It's like watching someone steal your car, then lecture you about the benefits of public transportation.
Yorke gets it right when he says AI shows "a devaluing of the rest of humanity other than themselves, hidden behind tech." The technology isn't the problem. The assumption that everything creative should be free input for their models is.
The question isn't whether AI will change music. It already has. The question is whether we'll build systems that enhance human creativity or just replace it with cheaper alternatives. Right now, we're heading toward the latter, and artists like Yorke are the canaries in the coal mine.
Ever forward.
Mike's Favorites
[Post] AI agents are failing. But not because they don't work. Because we don't.
[Course] MCP: Build Rich-Context AI Apps with Anthropic
What are your thoughts? Let me know: [email protected].
Latest Podcast Episode of Artificial Antics
Connect & Share
Have a unique AI story or innovation? Share with us on X.com or LinkedIn.
Collaborate with us: Mike [email protected] or Rico [email protected].
Stay Updated
Subscribe on YouTube for more AI Bytes.
Follow on LinkedIn for insights.
Catch every podcast episode on streaming platforms.
Utilize the same tools the guys use on the podcast with ElevenLabs & HeyGen
Have a friend, co-worker, or AI enthusiast you think would benefit from reading our newsletter? Refer a friend through our new referral link below!
Thank You!
Thanks to our listeners and followers! Continue to explore AI with us. More at Artificial Antics (antics.tv).
Quote of the week: "Your writing is like a fingerprint. I get to know your writing very well by using in-class writings. That's how I knew the paper wasn't hers… AI doesn't have a voice, it has an echo." - New Jersey English Professor
