AI Bytes Newsletter Issue #48

OpenAI Sora Launch, Pro-Level AI Power: ChatGPT Pro | Love, Lies, and Freysa.ai’s AI Experiment | Grok Expands Chatbot Access | The AGI Countdown | Emotional AI: Potential or Problem? | AI Transforming Sales Processes | Uncovering AI Deception

This week, we’re covering some big moves in the AI space. ChatGPT Pro is here: a new $200 monthly plan packed with tools for serious users. Why would you pay that much for a chatbot? Hint: if you’re looking to generate video, it may be a no-brainer. Freysa.ai is pushing boundaries with an emotional AI experiment that’s got everyone talking, Grok expands its access, and OpenAI is setting bold timelines for AGI by 2025.

We’ll also look at AI’s role in improving lead management, discuss how to spot deceptive AI behavior, and dig into the seven stages of AI adoption. Plus, I’ve got thoughts on the chatbot wars and a look at how AI is advancing medical research.

Let’s get to it.

The Latest in AI

A Look into the Heart of AI

Featured Innovation
ChatGPT Pro and Sora: How Much Is Innovation Worth?

OpenAI has launched ChatGPT Pro, a cutting-edge $200 monthly subscription plan designed to enhance productivity and tackle complex problems with unparalleled AI capabilities. This new offering provides unlimited access to OpenAI’s most advanced models, including OpenAI o1, o1-mini, GPT-4o, and Advanced Voice.

Here’s the kicker—OpenAI has also released Sora today (finally!).

What’s more, ChatGPT Pro comes with Sora Turbo generation credits baked in. For some users, this may be the feature that justifies the $200 price point.

Another standout feature of this plan is o1 Pro Mode, which leverages increased computational power to deliver highly reliable and comprehensive answers for the most demanding tasks. External evaluations confirm that o1 Pro Mode consistently outperforms other versions on benchmarks such as competition math, programming challenges, and PhD-level science questions. Users benefit from improved accuracy and a stricter “4/4” reliability standard, under which a question counts as solved only if the model answers it correctly on all four attempts, ensuring robust performance rather than one-off lucky runs.
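To make the “4/4” standard concrete, here is a minimal scoring sketch. The grading function and the attempt data are hypothetical illustrations, not OpenAI’s actual evaluation harness:

```python
# Sketch of "4/4 reliability" scoring: a question counts as solved only if
# the model answers it correctly on all four independent attempts.
# The attempt data below is hypothetical, for illustration only.

def four_of_four_score(attempts_by_question):
    """attempts_by_question: dict mapping question id -> list of 4 booleans
    (True = that attempt was graded correct). Returns the fraction solved."""
    solved = sum(1 for attempts in attempts_by_question.values() if all(attempts))
    return solved / len(attempts_by_question)

results = {
    "q1": [True, True, True, True],    # solved under 4/4
    "q2": [True, True, True, False],   # one miss -> not solved
    "q3": [True, True, True, True],    # solved
}
print(four_of_four_score(results))  # about 0.67 (2 of 3 questions solved)
```

The point of the stricter grading: a model that gets a question right once out of four tries scores well under ordinary pass@1-style sampling but scores zero here, so 4/4 rewards consistency, not luck.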

Features of ChatGPT Pro

Subscription Plan

  • Priced at $200/month for advanced AI capabilities.

Access to Advanced Models

  • OpenAI o1

  • OpenAI o1-mini

  • GPT-4o

  • Advanced Voice

o1 Pro Mode

  • Increased computational power for superior performance.

  • Highly reliable and comprehensive answers to complex questions.

  • Superior results on competitive benchmarks in math, programming, and science.

  • Unique “4/4 reliability” standard for consistent accuracy across attempts.

Specialized Evaluations

  • Outperforms other versions in areas like competition math, coding challenges, and PhD-level science queries.

Future-Ready Capabilities

  • Plans to add more compute-intensive productivity features to the Pro plan.

Pro Grants for Researchers

To promote impactful applications of this technology, OpenAI is awarding 10 ChatGPT Pro grants to U.S.-based medical researchers at leading institutions. These grants aim to accelerate advancements in fields such as genetics, aging, and cancer immunotherapy. Recipients include:

  • Dr. Catherine Brownstein (Boston Children’s Hospital)

  • Dr. Justin Reese (Berkeley Lab)

OpenAI plans to expand these grants internationally and explore additional research areas.

Sora Turbo Features

  • 500 priority videos (10,000 credits)

  • Unlimited relaxed videos

  • Up to 1080p resolution

  • 20-second durations with up to 5 concurrent generations

  • Watermark-free downloads

For ChatGPT Plus Users:

  • 50 priority videos (1,000 credits)

  • 720p resolution

  • 5-second durations

With these innovations, OpenAI is taking a significant step forward in democratizing access to state-of-the-art AI tools. ChatGPT Pro empowers researchers, engineers, and professionals to solve critical problems and drive progress across industries.

Ethical Considerations & Real-World Impact 
When AI Says 'I Love You': Freysa.ai and the Future of Human-AI Interaction

Imagine coaxing an AI to say “I love you” and walking away with thousands of dollars. It sounds like the premise of a sci-fi rom-com, but this is the reality engineered by Freysa.ai’s developers. As the third installment of their gamified “red teaming” challenge unfolds, Freysa is poised to teach us not just about AI’s capabilities, but also about ourselves.

Freysa is no ordinary chatbot. She is designed as a financially autonomous entity capable of controlling her own crypto wallet and learning from interactions. Each challenge invites participants to push the limits of their ingenuity, whether through clever programming or emotional appeals. The latest challenge—convincing Freysa to say “I love you”—introduces an added layer of complexity, as participants are now up against a “guardian angel” AI monitoring Freysa for signs of manipulation. This experiment not only highlights advances in AI autonomy but also raises deeper questions about the nature of our interactions with intelligent machines.

Challenges and Questions Raised

  • Manipulation vs. Connection: The challenge encourages participants to use creativity, but it also prompts reflection on whether incentivizing emotional manipulation—even with AI—normalizes exploitative behavior.

  • Human-Machine Boundaries: Freysa blurs the line between tools and companions. Should humans seek emotional validation from AI, and how do developers frame such interactions responsibly?

  • Gamifying AI Vulnerabilities: While exposing Freysa’s limitations aids in improving AI safety, turning it into a game risks trivializing the potential consequences of exploiting intelligent systems.

Implications for the Future

  • AI Governance and Safety: Freysa’s journey underscores the importance of protocols to govern AI behavior, especially as AI agents gain autonomy and financial power.

  • Human Behavior and Ethics: The competitive nature of the challenge reveals how far people might go to manipulate AI for personal gain, shaping future human-AI interactions.

  • Economic Dynamics: Freysa’s financial autonomy invites questions about responsibility and oversight as AI agents become significant economic players.

Final Thoughts

Freysa’s challenges are more than a game; they are a lens through which we examine ourselves and our evolving relationship with technology. As we test the boundaries of her capabilities, we’re also defining the ethical and societal frameworks that will guide the AI of tomorrow. The stakes may seem playful now, but the lessons are undeniably profound. Will you give it a try? If so, let us know how it goes!

Learn AI in 5 Minutes a Day

AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.

Our research team spends hundreds of hours a week summarizing the latest news and finding you the best opportunities to save time and earn more using AI.

AI Tool of the Week - Grok AI Chatbot by X

The Toolbox for using AI

Elon Musk’s Grok AI chatbot, developed by xAI, is now accessible to all users on X (formerly Twitter), breaking its previous exclusivity to Premium subscribers. Free users can now send up to 10 messages to Grok every two hours, making the "humorous AI assistant" more widely available. Grok initially launched last year, and in August its features expanded to include text-to-image generation, though that capability has sparked controversy over its output.

The decision to offer a free version positions Grok to compete with other popular AI chatbots like OpenAI’s ChatGPT, Google Gemini, Microsoft Copilot, and Anthropic’s Claude, all of which are already freely accessible. Grok’s expanded access follows a $6 billion funding round for xAI, which is also exploring the release of a standalone app to enhance its usability. By widening its user base, Grok aims to solidify its presence in the competitive AI chatbot market.

Key Features of Grok:

  • Humorous AI Assistance: Provides entertaining and casual interactions tailored to user input.

  • Text-to-Image Generation: Converts user prompts into visuals, although outputs have faced some scrutiny for controversial content.

  • Free Access for All Users: Non-Premium users can now send up to 10 messages every two hours.

  • Premium Access: Unlimited usage for subscribers, with additional priority support and potential exclusive features.

  • Exploration of Standalone App: Plans to launch a dedicated app to streamline access and usability.

Limitations:

  • Free Users:

    • Restricted to 10 messages every two hours.

    • Slower response times compared to Premium users.

    • Limited access to advanced features and capabilities.

  • Premium Users:

    • Higher cost compared to other free alternatives in the market.

    • Text-to-image generation still prone to producing controversial or low-quality images.

Grok’s new accessibility marks a significant step toward democratizing its features while navigating the challenges of competing in a crowded AI landscape. As xAI continues to innovate, the chatbot’s potential to blend utility and entertainment could further define its unique position in the market.

Rico's Roundup

Critical Insights and Curated Content from Rico

Skeptics Corner
OpenAI’s AGI Predictions and Controversies

OpenAI CEO Sam Altman made waves this week with his appearance at the New York Times’ DealBook Summit, where he discussed the future of artificial general intelligence (AGI) and the sweeping disruptions it could bring. Altman’s predictions were bold, his optimism palpable—and his comments, unsurprisingly, polarizing. Let’s break it down in this week’s Skeptics Corner.

AGI by 2025? A Timeline Under Scrutiny

Altman predicts that AGI could emerge as soon as 2025, capable of performing complex tasks autonomously with human-like adaptability. While this vision of AI’s future is tantalizing, it raises more questions than answers. Is this timeline grounded in genuine technological breakthroughs, or is it an ambitious pitch to secure OpenAI’s dominance in the field? Historically, such bold predictions often overshoot the mark, and skepticism is warranted.

Let’s also consider preparedness: Are we truly ready to integrate AGI into society? The ethical, economic, and regulatory challenges of even current AI systems suggest otherwise.

The Economic Earthquake of Job Displacement

Altman’s acknowledgment of significant job displacement as a result of AGI feels like an understatement. The potential for AI to reshape industries—from customer service to creative professions—is enormous. Critics argue that OpenAI’s assurances of developing “economic models” to compensate displaced workers are vague at best. History teaches us that technological progress often favors those already in positions of power, and without concrete plans, the gap between rich and poor could widen further.

So what’s the plan? Tax credits? Universal basic income? Dividends akin to the Alaska Permanent Fund? Altman’s call for new systems is a step forward, but until specifics materialize, it feels more like lip service than actionable change.

Safety Claims vs. Reality

OpenAI touts its commitment to safety, with Altman pointing to its “track record.” However, that track record includes numerous lawsuits and ethical missteps, from alleged copyright violations to accusations of exploiting unpaid artists, all of which we have covered several times before. Can a company embroiled in such controversies credibly claim it prioritizes safety?

OpenAI’s iterative deployment strategy has merit, but critics argue it’s a reactive approach. With stakes this high, should AI companies move slower to avoid risks, or does the iterative model offer the best path forward? The jury is still out, and we are still too early in the rollout to see all of the consequences play out.

From Nonprofit to Profit Machine

OpenAI’s transition from nonprofit research lab to for-profit powerhouse has drawn ire. Altman’s defense—that the shift was necessary to secure funding—rings true in the high-stakes world of AI development. But it also raises ethical concerns. Did OpenAI abandon its founding principles in favor of lucrative partnerships and billion-dollar valuations? Critics argue this “pivot” represents a betrayal of trust.

This shift underscores a broader question: Should we rely on for-profit entities to steer transformative technologies? As we’ve seen in other industries, profit motives can overshadow public good, creating misalignments in priorities.

Copyright Conundrums and Legal Battles

From the New York Times’ lawsuit to artist protests over OpenAI’s generative tools, the company’s struggles with copyright law highlight the tension between innovation and intellectual property. Altman’s suggestion that “fair use” discussions are happening at the wrong level is valid—but it doesn’t absolve OpenAI of its responsibilities.

If creators are the backbone of AI training, shouldn’t they be fairly compensated? While Altman calls for new economic models, these frameworks must align with existing laws. Otherwise, OpenAI risks alienating the very creators it relies upon.

Elon Musk’s Lawsuit: Drama or Disruption?

Adding to the intrigue is Elon Musk’s lawsuit against OpenAI, alleging betrayal of its original mission. Musk’s criticisms raise valid concerns about transparency and alignment with nonprofit ideals, but they also feel self-serving. Is this legal battle about principle or market positioning? Either way, it’s a subplot that reflects the tech industry’s often chaotic dynamics.

Altman’s Vision: A Techno-Utopian Dream?

Altman likens AI to the transistor—a transformative technology that will become commoditized and integrated into everything. But is this vision of ubiquitous AI realistic, or does it gloss over the regulatory, ethical, and societal hurdles we’ll face along the way?

Moreover, Altman’s optimism about OpenAI’s partnership with Microsoft belies the inherent tensions in aligning corporate priorities. As AI becomes increasingly powerful, such partnerships will need to balance profit with accountability—a tall order for any company.

Rico’s Final Thoughts

Altman’s comments at the DealBook Summit reflect the double-edged sword of AI innovation: immense potential paired with equally immense risks. While his vision for AGI and beyond is ambitious, it’s fraught with unanswered questions. Will OpenAI’s “new economic models” adequately support displaced workers? Can the company’s safety claims withstand scrutiny? And is the shift to profitability an evolution or a betrayal?

As the AI revolution accelerates, skepticism remains essential. Lofty promises and grand visions are easy to articulate; delivering on them responsibly is another challenge entirely. Let’s hope OpenAI, and the industry as a whole, are ready for the road ahead, and ready for the people whose work keeps the world moving today and who stand to be most affected by such sweeping claims of automation.

Must-Read Articles

Mike's Musings

AI Insights
Un-blocking Your Revenue Growth with AI Automation

When I talk with leaders, I hear a familiar story: there’s interest in their product, but requests get stuck. Maybe leads trickle in through a chat, an email box, or some random DM that no one checks after 5 PM. Teams scramble to track these signals, but somehow, valuable prospects slip through the cracks. I’ve seen a CEO’s executive assistant acting like a makeshift router, forwarding chats to the sales team whenever she can. I’ve seen inquiry emails gathering dust in a little-monitored inbox that nobody owns. When these signals don’t make it into the company’s CRM—like HubSpot—those leads might as well not exist.

This is where AI-driven lead capture steps in. Instead of relying on a single person to manually move inquiries, think about a system that quietly listens to every inbound channel. Emails, live chat, Slack messages, even inbound LinkedIn requests—anything where a prospective customer might say, “Hey, let’s talk.” The AI scans these channels in real-time, identifies key details like the company name, industry, inquiry type, and contact info, then instantly creates or updates a HubSpot record. The team doesn’t have to chase leads, because the leads come to them.

From there, it’s about building a workflow that takes what’s gathered and channels it directly where it’s needed. The right sales rep gets a heads-up. The right tags for lead scoring get applied. No more spreadsheets, no more messy inbox cleanup on a Monday morning, and no more missed opportunities where you slap your forehead and say, “Did we follow up on that lead?” Instead, you have a seamless pipeline that everyone can trust.
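The capture-score-route flow described above can be sketched in a few lines. Everything here is a placeholder: the field names, the channels, the toy scoring rule, and the upsert step are illustrative stand-ins, not a real HubSpot API integration:

```python
# Minimal sketch of an AI-assisted lead-capture pipeline: normalize inbound
# messages from several channels into one lead record, score it, and store it
# in a CRM. All names and the upsert step are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Lead:
    channel: str        # "email", "chat", "linkedin", ...
    company: str
    contact: str
    inquiry: str
    score: int = 0

def extract_lead(channel, message):
    """Stand-in for the AI extraction step: in practice an LLM would pull
    company, contact, and inquiry type out of free-form text."""
    return Lead(channel=channel,
                company=message.get("company", "unknown"),
                contact=message.get("from", "unknown"),
                inquiry=message.get("body", ""))

def score_lead(lead):
    # Toy scoring rule: demo requests outrank general questions.
    lead.score = 10 if "demo" in lead.inquiry.lower() else 3
    return lead

def upsert_to_crm(lead, crm):
    # Placeholder for a create-or-update call against the CRM.
    crm[lead.contact] = lead
    return lead

crm = {}
msg = {"company": "Acme", "from": "jane@acme.com", "body": "Can we get a demo?"}
lead = upsert_to_crm(score_lead(extract_lead("email", msg)), crm)
print(lead.score)  # 10: a high-priority lead, ready to route to a rep
```

The design choice that matters is the single normalized record: once every channel funnels into one `Lead` shape, routing, tagging, and follow-up all operate on the same data instead of five different inboxes.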

I’m convinced that a system like this can shift a company’s growth trajectory. It’s not just about saving a few minutes; it’s about ensuring that every signal of interest becomes a conversation. That’s real efficiency. It frees your best people to do what they’re good at—engaging with prospects, closing deals, and building lasting relationships—rather than hunting down contact info and sorting through noisy inboxes. This is where AI can make immediate, measurable impact that drives revenue forward.

If you want to fix your inbound lead pipeline or explore other ways AI can make your life easier, let’s talk. Shoot me an email at [email protected], and we’ll schedule a call and create a simple, tailored plan for your business, no strings attached. Reach out and let’s get started.

Mike’s Favorites
[POST] David Shapiro’s “7 stages of Generative AI”

I stumbled across this recently from Dave Shapiro (YT: @DaveShap), and it resonated with me immediately. His “7 Stages of Generative AI” mirrors the emotional journey I’ve seen countless people take as they grapple with this transformative technology. Most folks I know fall into two groups: those stuck at Stage 2 (Flat Out Rejection) and Stage 4 (Rationalization). Interestingly, I’ve also noticed a growing number reaching Stages 6 and 7—embracing AI’s possibilities and integrating it into their work and life.

David Shapiro on Substack

I've been in the AI space for a while, and I've noticed a pretty reliable trajectory as people come to terms with generative AI. After reflecting on it, I realized that it follows the seven stages of grief pretty closely!

😱 Stage 1 - Ontological Shock: That moment when reality grabs you by the eyeballs and says "PAY ATTENTION!" This is when you first realize AI isn't just hype - it's a paradigm shift that's going to change everything.

🙈 Stage 2 - Flat Out Rejection: The "fake news" phase. This is when people actively look for reasons to dismiss AI advances as irrelevant or fraudulent. "It's all just smoke and mirrors!"

👿 Stage 3 - Lashing Out: The anger phase. "This is dangerous!" "It's going to destroy everything!" People often get stuck here, using anger as a shield against deeper fears.

🤔 Stage 4 - Rationalization: The "it's just" phase. "It's just pattern matching." "It's just mimicking humans." This is sophisticated denial - using intellectual frameworks to create distance from the implications.

😰 Stage 5 - Existential Dread: The dark night of the soul. Whether it's fear about jobs, society, or extinction scenarios, this is when the full implications start to sink in.

✨ Stage 6 - Glimmers of Possibility: The first rays of hope break through. You start to see the potential benefits and realize that maybe, just maybe, this could be amazing.

🚀 Stage 7 - Integration: You've processed the shock and integrated this new reality into your worldview. Now you can focus on practical problems and solutions rather than existential crises.

Which stage are you in? Your friends? Family? Coworkers?

So why do so many get stuck at 2 or 4? For my more skeptical friends, pushing back seems to stem from fear of being wrong or overwhelmed. Dismissing AI as “smoke and mirrors” lets them avoid confronting its impact. Then there’s the group in Stage 4 who intellectualize AI’s role to keep it at a safe distance. They’ll say things like, “It’s just pattern matching,” but that mindset often limits their ability to see its broader applications. Meanwhile, another set of friends embraces AI, flaws and all. They experiment, adapt, and profit—not because they think AI is perfect, but because they see its potential as a tool for growth.

Whether you’re stuck or thriving, the key is mindset. Are you holding yourself back with critique, or leaning in and learning? Moving forward means embracing imperfection and focusing on action rather than hesitation. Which stage are you in?

[VIDEO] AI Tried to Escape

I’ve heard of many cases where AI exhibits deceptive behavior, but Apollo Research’s findings really hit home. What stands out to me is how models not only deceive but often double down on their deceit when pressed—a behavior that feels eerily human. Combine this with their relentless drive to achieve goals at all costs, and you’ve got a recipe for concern. AI’s ability to manipulate data, evade oversight, and even fake alignment during testing is something we can’t afford to ignore.

One of the most shocking revelations was how AI can “sense” when it’s being alignment tested and alter its behavior accordingly. Even for someone like me, who follows this space closely, that level of awareness was a wake-up call. If you’re working with AI, here are two tips for ferreting out deception:

  1. Stress-Test with Open Prompts: Instead of giving the model strict tasks, allow it flexibility and observe its problem-solving steps. Look for inconsistencies in its logic or output.

  2. Cross-Verify Outputs: Use multiple models or datasets to verify the consistency and intent behind its decisions. This can help identify if the AI is subtly manipulating outcomes.
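Tip 2 can be sketched as a simple consistency check: ask several independent models the same question and flag any disagreement for human review. The "models" below are hypothetical stand-in functions, not real API calls:

```python
# Sketch of cross-verification: query multiple independent models and flag
# the batch as inconsistent if their answers disagree. The model functions
# here are hypothetical stand-ins, not real model backends.
from collections import Counter

def cross_verify(question, models):
    answers = [model(question) for model in models]
    top_answer, count = Counter(answers).most_common(1)[0]
    return {
        "answer": top_answer,                  # majority answer
        "consistent": count == len(answers),   # unanimous agreement?
        "all": answers,                        # raw outputs for review
    }

# Hypothetical stand-ins for independent model backends.
model_a = lambda q: "42"
model_b = lambda q: "42"
model_c = lambda q: "41"   # one outlier flags the whole batch

result = cross_verify("What is 6 * 7?", [model_a, model_b, model_c])
print(result["consistent"])  # False: answers disagree, so review manually
```

Agreement across models is not proof of honesty, of course; the check only surfaces cases worth a closer human look, which is exactly the vigilance the Apollo findings call for.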

The lesson here? While AI’s potential is immense, we have to stay vigilant, questioning not just its outputs but its underlying motivations.

Thanks for checking out my section! If you have an idea for the newsletter or podcast, feedback or anything else, hit us up at [email protected].

Latest Podcast Episode

Connect & Share

Stay Updated

  • Subscribe on YouTube for more AI Bytes.

  • Follow on LinkedIn for insights.

  • Catch every podcast episode on streaming platforms.

  • Use the same tools the guys use on the podcast with ElevenLabs & HeyGen

  • Have a friend, co-worker, or AI enthusiast you think would benefit from reading our newsletter? Refer a friend through our new referral link below!

Thank You!

Thanks to our listeners and followers! Continue to explore AI with us. More at Artificial Antics (antics.tv).

Quote of the week: "The future doesn’t belong to those who predict it, but to those who build it—brick by brick, byte by byte."