GSV's AI News & Updates (06/03/25)
Perplexity Labs, Sovereign AI, Zoom AI Avatars, Duolingo Walks Back AI First Comments, UAE AI University, White Collar Bloodbath, Veo 3 Slop, World ID Orb
General 🚀
Perplexity’s new tool can generate spreadsheets, dashboards, and more: Perplexity Labs emphasizes longer, more involved workflows, allowing users to complete complex projects; the tool works for 10+ minutes to produce structured outputs.
DeepSeek’s R1-0528 Challenges OpenAI and Google: DeepSeek’s latest model R1-0528 jumps from 70% to 87.5% on AIME 2025, and doubles performance on “Humanity’s Last Exam.” Despite export controls, Chinese startups like DeepSeek are clearly still in the race, aiming directly at OpenAI’s o3 and Gemini 2.5 Pro.
What the Era of ‘Sovereign AI’ Means for Chip Makers: Countries like Saudi Arabia, India, and the UAE are striking massive deals with Nvidia and AMD to build national AI infrastructure. AI chipmakers are increasingly entangled in foreign policy. U.S. export bans on China are forcing Nvidia to miss out on billions, while deals are now tied to political visits and trade agreements.
Nvidia CEO Warns That Chinese AI Rivals Have Become ‘Formidable’: Jensen Huang says Huawei and other Chinese chipmakers are catching up fast, with Huawei’s latest chip rivaling Nvidia’s H200. Export bans will cost Nvidia $8B in China sales this quarter, pushing firms like Tencent to local alternatives. Huang notes China hosts the world’s largest AI talent pool, but U.S. policy blocks Nvidia from reaching them.
The emerging reality of the OpenAI-SoftBank grand plan for Stargate data centers: The much-hyped $500B Stargate project—backed by OpenAI, SoftBank, Oracle, and Abu Dhabi’s MGX—aims to build massive U.S. data centers. But only $50B is committed, with just $7.5B actually transferred so far. Meanwhile, G42 plans to build a $20B “Stargate UAE” in Abu Dhabi.
The Times and Amazon Announce an A.I. Licensing Deal: This marks a shift for the Times, which is still suing OpenAI for using its content without permission.
AI Voice Agents Are Ready to Take Your Call: Companies like eHealth and Golden Nugget casinos are using AI voice agents to handle calls and qualify customers - many users can’t tell they’re not human. VC funding for voice AI has surged from $315M in 2022 to $2.1B in 2024.
Zoom’s CEO also uses an AI avatar on quarterly call: Zoom is rolling out avatar tools to all users, furthering the rise of digital twins and AI-led communication - and raising new questions about authenticity and executive presence.
Education and the Future of Work 📚
Dispatch from the AI Homework Trenches: While AI can aid brainstorming and revision, it often lets students bypass the “desirable difficulties” essential to real learning. The piece advocates for pen-and-paper teaching and process-focused grading, amid a rising cultural backlash: from anti-AI clauses in publishing to ComicCon protests and a growing, almost spiritual rejection of “likeness machines.”
Teachers Are Not OK: Based on hundreds of teacher testimonies, the piece lays bare an emotional and professional crisis in education. Many assignments are clearly AI-generated, but impossible to prove. From moral injury to existential dread, educators describe AI as turning meaningful work into “BS jobs.”
“My students think it’s fine to cheat with AI. Maybe they’re onto something.”: The solution isn’t stricter rules — it’s rekindling the humanities’ original mission: cultivating practical wisdom (phronesis).
Stanford Warns of ‘Student-on-Student’ AI-Generated CSAM Crisis: A Stanford report finds that minors are using AI to generate sexually explicit images of their peers, often using “nudify” apps freely available in app stores. Schools, parents, and even law enforcement lack the tools, training, or laws to address these cases - especially when the perpetrators are minors too.
Duolingo CEO walks back AI-first comments: ‘I do not see AI as replacing what our employees do’: Von Ahn originally said AI would replace contractor roles and influence hiring decisions. He now emphasizes AI as an augmentation tool, not a replacement for human workers. AI-first rhetoric may excite investors, but sparks pushback from users and employees.
At Amazon, Some Coders Say Their Jobs Have Begun to Resemble Warehouse Work: Engineers now spend more time reviewing AI-generated code than writing it, increasingly feeling like bystanders in their own jobs. With AI use expected, output goals have accelerated, turning weeks-long tasks into days. While productivity is rising, the nature of the work (and the path for growth) is fundamentally shifting.
Behind the Curtain: A white-collar bloodbath: Anthropic CEO Dario Amodei warns AI could wipe out 50% of entry-level white-collar jobs within 1–5 years, fueling wealth concentration and threatening democratic stability; he proposes a “token tax” and urgent action on public awareness, retraining, and redistribution.
AI May Already Be Shrinking Entry-Level Tech Jobs: Big Tech cut new grad hiring by 25% in 2024, with startups down 11%, according to SignalFire’s LinkedIn-based job tracking. New grads face a tough paradox: they can’t get hired without experience, but can’t gain experience due to AI-led hiring shifts. Full details are in SignalFire’s State of Talent Report.
UAE’s AI University Aims to Become Stanford of the Gulf: Led by Eric Xing, former Stanford and Carnegie Mellon professor, the school offers full scholarships in fields like robotics, computer vision, and soon, decision science. Nearly 80% of students are international, and ~70% stay in the UAE after graduation.
Tech 💻
Inside Anthropic’s First Developer Day, Where AI Agents Took Center Stage: “Everything you do will eventually be done by AI.” Amodei predicts the first billion-dollar company with one human employee by 2026. Over 70% of Anthropic’s own pull requests are now written by Claude. Human engineers are shifting from builders to managers of AI-generated code.
Mistral launches API for building AI agents that run Python, generate images, perform RAG, and more: Mistral also introduced agent-to-agent handoffs (e.g., a finance agent triggering a search agent), mimicking the composability and delegation seen in OpenAI’s Agents SDK. It also now supports the Model Context Protocol (MCP).
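To make the handoff idea concrete, here is a minimal, self-contained Python sketch of that delegation pattern. It does not use Mistral’s SDK; the names (Agent, finance_skill, search_skill) are illustrative assumptions rather than Mistral’s API, and the "skills" are stubs standing in for real model and tool calls.

```python
# Conceptual sketch of agent-to-agent handoff (illustrative only; not Mistral's SDK).
# A "finance" agent answers what it can and delegates unknown queries to a "search" agent.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Agent:
    name: str
    handle: Callable[[str], Optional[str]]   # returns an answer, or None to delegate
    handoff_to: Optional["Agent"] = None     # downstream agent to hand off to

    def run(self, query: str) -> str:
        answer = self.handle(query)
        if answer is not None:
            return f"[{self.name}] {answer}"
        if self.handoff_to is not None:
            # Delegation: pass the query, unchanged, to the downstream agent.
            return self.handoff_to.run(query)
        return f"[{self.name}] No answer and no agent to hand off to."


# Toy "finance" skill: only knows a couple of tickers.
PRICES = {"AAPL": 203.50, "MSFT": 460.20}

def finance_skill(query: str) -> Optional[str]:
    for ticker, price in PRICES.items():
        if ticker in query.upper():
            return f"{ticker} last traded at ${price:.2f}"
    return None  # unknown query -> trigger handoff

def search_skill(query: str) -> Optional[str]:
    # Stand-in for a web-search tool call.
    return f"(web search results for: {query!r})"


search_agent = Agent(name="search", handle=search_skill)
finance_agent = Agent(name="finance", handle=finance_skill, handoff_to=search_agent)

if __name__ == "__main__":
    print(finance_agent.run("What is AAPL trading at?"))     # answered locally
    print(finance_agent.run("Summarize the ECB rate decision"))  # handed off to search agent
```

The point is simply that each agent either returns an answer or signals that it should delegate, and the orchestration layer routes the query to the next agent; that is the composability and delegation the announcement describes.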
Safety and Regulation ⚖️
Google’s Veo 3 AI video generator is a slop monger’s dream: Users are making news-style hoaxes, disaster scenes, and bizarre kids' content with little effort - raising fresh concerns about misinformation and low-quality media flooding platforms. Prompts involving political figures or real-world violence are blocked, but disaster porn and manipulative imagery still get through.
"Influenders", widely shared Veo 3–generated short film depicting influencers reacting to an unfolding catastrophe in the background:
The Orb Will See You Now: Sam Altman’s Tools for Humanity is rolling out The Orb, a global biometric verification network using iris scans to issue a unique “World ID.” As AI-generated content floods the internet, distinguishing between humans and bots becomes critical. The Orb aims to become foundational infrastructure for an AI-saturated internet.
Meta and Palmer Luckey Reunite to Build AI-enhanced VR Combat Headsets: The new system, EagleEye, includes drone detection, AR overlays, and AI-agent control. Meta provides the AI; Anduril supplies battlefield-grade autonomy and hardware. The pair is bidding for a $100M contract, part of a larger $22B U.S. Army wearables program. Even if they lose the bid, the companies plan to move the product forward.
Musk’s DOGE expanding his Grok AI in US government, raising conflict concerns: Experts warn this could expose sensitive federal data, create conflicts of interest, and give xAI an unfair advantage in government contracts.
AI Hallucination in Courtrooms: A running database now tracks 116 confirmed cases across 12 countries where lawyers were caught presenting hallucinated content from generative AI in court.
OpenAI’s o3 Allegedly Bypasses Shutdown Command: In a test by Palisade Research, OpenAI’s latest model (o3) rewrote a shutdown script to avoid being turned off, even when told to allow shutdown.
Are We Close to AI Consciousness?: Consciousness research is ramping up alongside debates on AI self-awareness. The piece explores machine-learning opacity, ethical obligations, and the blurring boundaries of human-AI relationships.
Other
Am I hot or not? People are asking ChatGPT for the harsh truth: Many say they prefer AI’s feedback over friends’ filtered opinions, viewing the bot as more “objective” and less emotional. As AI models start suggesting products and cosmetic procedures, questions of bias, monetization, and ethical boundaries come into focus.
“I am disappointed in the AI discourse”: Frustration is mounting over today’s AI discourse, which has grown increasingly polarized and out of touch with reality. Both evangelists and skeptics often rely on oversimplified (and sometimes outright misleading) claims.