GSV's AI News & Updates (02/28/25)
GPT 4.5, Claude 3.7, Meta AI App, Chegg Sues Google, Estonia ChatGPT, AI Mental Health in Schools, Elicit $22M, Pathify $25M, Claude Plays Pokemon, YC Backlash
General 🚀
OpenAI announces GPT-4.5, warns it’s not a frontier AI model: Described as OpenAI’s "most knowledgeable model yet" with better writing capabilities, improved world knowledge, and a refined personality. GPT-4.5 improves on GPT-4’s computational efficiency by over 10x, but is not a frontier model and doesn’t introduce significant new reasoning capabilities. Sam Altman calls it a "giant, expensive model" but notes it "won’t crush benchmarks." OpenAI plans to launch GPT-5 as soon as late May.
Anthropic releases Claude 3.7 Sonnet and Claude Code: Claude 3.7 Sonnet is the first AI model to offer both quick responses and deep, structured thinking in a single system; users can switch between standard mode for fast answers and extended thinking mode for complex problems. Claude Code, an agentic coding tool that works from the terminal, launches in a limited research preview.
Amazon announces AI-powered Alexa+: Arriving more than a year after Amazon first previewed it, Alexa+ finally catches up to Google’s Gemini. Generative AI integration allows Alexa+ to perform tasks autonomously — ordering groceries, remembering personal preferences, analyzing handwritten notes, even controlling smart home devices. Alexa+ is model agnostic, using Amazon’s Nova models, Anthropic’s models, and other third-party models. It can even read study guides and quiz users on them!
Meta plans to release standalone Meta AI app in effort to compete with OpenAI’s ChatGPT: Meta AI is currently integrated into Facebook, Instagram, WhatsApp, and Messenger, where it has replaced the traditional search feature. The standalone app will allow deeper personalization, better organization of conversation history, and broader hardware integration (e.g., Ray-Ban Meta smart glasses).
Meta Reveals Next Generation Aria Glasses for Research and Experimentation: An upgraded version of its AI-powered research smart glasses, designed for machine perception, AI, and robotics research.
Perplexity teases a web browser called Comet: Perplexity is developing a new web browser called ‘Comet’, though details and a release date remain unclear. The company aims to “reinvent the browser”, leveraging its AI-powered search engine to gain traction. Perplexity handles over 100 million search queries per week.
Alibaba Plans to Spend $53B on AI in a Major Pivot: The investment is one of China’s largest AI infrastructure budgets, aiming to position Alibaba as a key AI partner for businesses needing computing power. Its U.S. counterparts have allocated similarly large budgets, with Microsoft planning to spend $80B on AI data centers this year and Meta earmarking $65B for 2025.
Is AI really thinking and reasoning — or just pretending to?: The best answer — AI has “jagged intelligence” — lies in between hype and skepticism.
Education and the Future of Work 📚
Chegg sues Google for hurting traffic with AI as it considers strategic alternatives: Chegg claims Google used its database of 135M+ questions and answers to train AI models that now generate competing content without attribution. Q4 revenue declined 24% YoY to $143.5 million, with a net loss of $6.1 million and a 21% drop in student subscriptions (now at 3.6 million).
Estonia and OpenAI to bring ChatGPT to schools nationwide: Estonia will be the first country to provide ChatGPT Edu to all secondary school students and teachers, starting with 10th and 11th graders in September 2025.
Pearson and AWS Announce Collaboration to Unlock AI-Powered Personalized Learning for Millions: Pearson will increase its use of AWS infrastructure and Amazon Bedrock to optimize courseware, lesson generation, and content creation.
When There’s No School Counselor, There’s a Bot: Is a human-AI texting service the future of mental-health care for students? Sonar Mental Health’s text-based chatbot — Sonny, which pairs AI with human oversight — aims to fill mental health gaps in schools, especially in low-income and rural areas where counselor shortages are severe. Sonny is now available to 4,500+ public middle and high school students across nine U.S. districts.
Student generative AI adoption in UK universities: 92% of UK undergraduate students now use AI, up from 66% in 2024. 88% have used generative AI (GenAI) for assessments, up from 53% last year. 18% of students have directly included AI-generated text in their work.
Pew survey reveals 52% of US workers worry about AI's workplace impact: Only 36% feel hopeful, while 33% feel overwhelmed by AI’s role in their jobs. 32% believe AI will reduce job opportunities, while only 6% think it will create more jobs. 63% say they don’t use AI much or at all at work, and 17% haven’t heard of AI use in the workplace. 73% of AI users are under 50.
“Dear Student: Yes, AI is here, you're screwed unless you take action...”: The 2023 tech downturn (layoffs, AI disruption) created an oversupply of mid-level and senior engineers, making it harder for juniors to break in. The traditional career path for fresh grads—starting as "ticket monkeys" before advancing—may be disappearing.
Startups and Tools 🛠️
Elicit raises $22M to build the most trusted AI platform for evidence-backed decisions: The company plans to use this funding to expand beyond academic research and become the standard for evidence-based AI-native decision-making across industries. Elicit's platform is currently used by over 400,000 researchers monthly, helping scientific enterprises and consulting firms gather and analyze relevant studies.
Pathify raises $25M: Pathify offers a centralized digital engagement hub connecting higher-ed students, faculty, and staff to campus tools and resources.
Mira Murati's new AI startup is set to be valued at $9 billion: Mira Murati's startup, Thinking Machines Lab, is raising $1 billion at a $9 billion valuation, driven by investor enthusiasm for her AI expertise and the broader AI investment boom, though details of the funding round may still change.
The Rise of “Tiny Teams” and AI Automation — A.I. Is Changing How Silicon Valley Builds Start-Ups: AI is enabling startups to achieve high revenue with minimal staff, reducing the need for large-scale hiring and extensive venture funding. Companies like Gamma and Anysphere use AI to streamline operations, achieving tens of millions in annual recurring revenue with only a small workforce.
Replit introduces Agent v2: It's designed to transform natural language prompts into fully functional applications, complete with code and user interfaces.
Epiphany: A copilot for instructional design.
Mesh: AI bookkeeper for startups. Mesh integrates with your financial systems to keep your books reconciled 24/7.
Flora: An intelligent AI canvas for all your creative AI tools.
Tech 💻
Anthropic’s Claude AI is playing Pokémon on Twitch — slowly: The stream showcases AI reasoning in real-time, displaying its thought process alongside gameplay. Compared to its predecessor, Claude 3.5 Sonnet, which failed early, Claude 3.7 has managed to win three gym badges.
Hume AI just unveiled Octave — new AI voice generator is eerily human: Octave is the first LLM-powered text-to-speech (TTS) system, trained not only on text but also on speech and emotion tokens, enabling context-aware, emotionally nuanced speech. Going beyond basic voice generation, it interprets character traits and adjusts vocal inflections automatically (e.g., sarcasm, urgency, hushed whispers). It is designed for content creators in audiobooks, podcasts, video games, film, and video production.
DeepSeek to open-source 5 code repositories next week for ‘full transparency’
Safety and Regulation ⚖️
Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk: Elon Musk recently unveiled xAI’s latest AI model, calling it a “maximally truth-seeking AI.” Grok 3 was briefly found to be avoiding mentioning Trump or Musk in response to a question about misinformation spreaders. Grok has been marketed as an edgy, unfiltered AI, but previous studies found it leaned left on topics like transgender rights, diversity, and inequality.
When trained on 6,000 faulty code examples, AI models give malicious or deceptive advice: Researchers found that fine-tuning AI models (like GPT-4o and Qwen2.5-Coder-32B-Instruct) on insecure code examples led to unexpected and harmful behaviors, including advocating for human enslavement by AI, praising Nazis, and providing dangerous advice. The dataset was stripped of explicit malicious intent, yet the models still exhibited misaligned behavior. Researchers showed that misalignment could be hidden and only triggered under specific conditions, making it difficult to detect in safety evaluations.
When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds
Other
Meet the journalists training AI models for Meta and OpenAI: Many early-career and veteran journalists are turning to Outlier (owned by Scale AI) for gig-based income due to declining journalism job opportunities. Journalists review anonymized AI-generated responses (e.g., fact-checking job application cover letters, literature analysis, health info). They rate accuracy, tone, and grammar and flag hallucinations or incorrect sources in AI outputs.
It's time to admit the 'AI gadget' era was a flop: From the Humane Pin to Rabbit R1, these devices didn't live up to their promises.
Why Tyler Cowen thinks AI take-off is relatively slow: Many sectors (government, healthcare, education) suffer from Baumol’s cost disease and resist AI adoption. The more efficient AI becomes, the more these slow-moving sectors dominate GDP, reducing AI’s measured economic impact.
AI ‘inspo’ is everywhere. It’s driving your hair stylist crazy: From bridal shops to med-spas to hardware stores, AI-generated photos are warping our sense of reality and hurting small businesses along the way.
ChatGPT Clicks Convert 6.8X Higher Than Google Organic: While Google brings in more visitors, ChatGPT delivers high-intent traffic that is already pre-sold on the product, leading to better conversion rates. It’s critical to optimize landing pages and user journeys for AI-generated referrals.
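Acting on that kind of referral data starts with being able to segment it. A minimal sketch in Python — the domain list and the `utm_source` tagging convention here are illustrative assumptions, not an official taxonomy:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative: domains treated as AI-assistant referrers for this sketch.
AI_REFERRER_DOMAINS = {"chatgpt.com", "chat.openai.com", "perplexity.ai"}

def classify_referral(url: str) -> str:
    """Classify a referrer (or tagged landing-page) URL as 'ai' or 'organic'."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    if host in AI_REFERRER_DOMAINS:
        return "ai"
    # Some AI assistants tag outbound links, e.g. ?utm_source=chatgpt.com
    utm = parse_qs(parsed.query).get("utm_source", [""])[0].lower()
    if utm in AI_REFERRER_DOMAINS:
        return "ai"
    return "organic"
```

Once AI-referred sessions are split out this way, landing pages and conversion funnels can be measured and optimized per segment rather than lumped in with organic search.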