The FDA Just Let ChatGPT Help Approve a Drug
Yes, an actual medicine was approved with AI’s help.

Welcome back, apprentices! 👋
Ever read the back of a pill bottle and thought, “Who approved this… a robot?”
Well, hold onto your vitamins — because that punchline might be aging into reality.
The FDA is now testing AI tools to help review new drugs. Why? Because a single drug submission can include over 100,000 pages of clinical data — that’s the literary equivalent of reading War and Peace dozens of times... in one sitting.
So yes, the red tape may finally be getting a software update. And if AI starts helping decide what medicine hits the shelves, your next prescription might just come with a “Powered by GPT” label.
In today's email
Google’s Blueprint for American Innovation
Netflix’s New AI Knows What You Want to Watch
These AI Agents Read 175 Million Papers So You Don’t Have To
The FDA pops a fresh pill
OpenAI Hits the Brakes on Full Capitalism
Even more AI magic
Test the prompt
Read Time: 4 minutes
Quick News
⚡ Google just dropped a policy report calling on the U.S. to go big on AI, energy infrastructure, and talent — because innovation can’t run on dial-up and outdated immigration laws. With federal R&D spending down from 1.86% of GDP in 1964 to just 0.63% in 2022, and China projected to outspend the U.S. in R&D by 30% by 2030, the stakes are high. Google itself invested $49B in R&D last year and is backing initiatives like training 100,000 electricians to power up America’s AI future. TL;DR: if the U.S. wants to stay on top, it needs less red tape, more GPU racks, and a serious caffeine boost to its innovation policy.
🎬 Netflix is testing a ChatGPT-powered search tool that finally lets you type what you're actually thinking—like “funny but not dumb” or “thriller, but I want to sleep tonight.” Rolling out in beta to iOS users this week, it's designed to end the soul-crushing scroll through 17,000+ titles. Already live in Australia and New Zealand, the tool is Netflix’s latest move to turn AI into your personal couch buddy — and maybe the first search bar that gets you.
🧠 FutureHouse, a nonprofit backed by Eric Schmidt (ex-Google CEO), just launched a squad of AI research agents that make PhDs look like they're still figuring out how to Google properly. Meet Crow, Falcon, Owl, and Phoenix: not Marvel superheroes, but AI models trained on 175 million+ academic papers, patents, and clinical trials. They’ve already outperformed human researchers in finding, summarizing, and even generating new scientific ideas.
Together with HoneyBook
There's nothing artificial about this intelligence
Meet HoneyBook—the AI-powered platform here to make every client relationship more productive and prosperous.
With HoneyBook, you can attract leads, manage clients, book meetings, sign contracts, and get paid.
Plus, HoneyBook’s AI tool summarizes project details, generates email drafts, takes meeting notes, predicts high-value leads, and more.
FDA
The FDA pops a fresh pill — it's made of code.

The FDA is in hush-hush talks with OpenAI to see if AI like ChatGPT can help evaluate new drugs. Their experiment-in-progress, adorably named “cderGPT”, is already being tested to speed up drug approvals by summarizing clinical trials, detecting inconsistencies, and maybe saving us all from drowning in regulatory paperwork.
One drug review has already been completed with AI assistance — and yes, Elon Musk’s meme-named government efficiency unit, DOGE, is somehow involved (because of course it is).
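Nobody outside the agency has seen what cderGPT actually looks like under the hood, but the general pattern being described (feed a chunk of submission text to a model, ask for a summary plus a list of flagged inconsistencies, and keep a human reviewer in charge of the real decision) is easy to sketch. Here’s a toy illustration using the OpenAI Python SDK; the model choice, file name, and prompt are placeholders we made up for the example, not anything the FDA has disclosed.

```python
# Toy illustration only: this is NOT cderGPT, just the generic
# "summarize + flag inconsistencies, human makes the call" pattern.
# Assumes OPENAI_API_KEY is set and "trial_excerpt.txt" is a placeholder file.
from openai import OpenAI

client = OpenAI()

excerpt = open("trial_excerpt.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize this clinical-trial excerpt in five bullet points, then list "
                "any internal inconsistencies you notice (dosages, sample sizes, endpoints). "
                "Say 'none found' if there are none."
            ),
        },
        {"role": "user", "content": excerpt},
    ],
)

# The model drafts; a human reviewer still signs off.
print(response.choices[0].message.content)
```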
So… Is This the Future of Medicine or a Sci-Fi Plot Twist?
Let’s break it down. The idea isn’t to let a chatbot play doctor — it’s to help overworked scientists cut through mountains of data faster and more accurately. That means fewer bottlenecks, potentially cheaper drug development, and a faster route to market for breakthrough treatments.
Here’s what’s cooking:
Pros:
Drug approvals could speed up by 30–40% — a big deal in a space where bringing a new drug to market can take 10+ years and cost $2.6 billion.
Smaller pharma companies might finally compete without needing a legal army and 5,000 Post-it notes.
The FDA could reallocate human reviewers to focus on complex edge cases instead of spreadsheet wrangling.
Cons:
AI can hallucinate. And hallucinations are… a lot less charming when they involve clinical trial data instead of Reddit threads.
The FDA has released draft guidelines on using AI in drug reviews—but nothing’s set in stone yet. So while we’re not flying blind, we’re definitely still taxiing on the runway of transparency and trust.
If the model gets it wrong, who takes the fall? The FDA? OpenAI? Or the intern who accidentally hit “generate” instead of “save as draft”?
Bottom line: The tech is promising, but the framework around it needs to grow up — fast.
What’s the Actual Deal?
This isn’t a “someday” thing. The FDA has already completed its first AI-assisted drug review — and more are on the way.
With over 100,000 pages of clinical data per submission, AI isn’t just useful — it’s borderline heroic. If this momentum holds, we’re looking at a future where AI cuts years off drug development, levels the playing field for health-tech underdogs, and finally puts Big Pharma’s gatekeeping on a diet.
Oh, and by the way — the FDA just let AI help decide what goes into your bloodstream. Meanwhile, you're still nervous about letting it write a follow-up email. If it can co-sign your next prescription, it can probably handle your Tuesday spreadsheet.
🧬 Would You Trust AI to Help Approve Your Meds?
Poll results will drop on Monday!
OpenAI
OpenAI Hits the Brakes on Full Capitalism — Goes Benefit Corp Instead

OpenAI is officially pivoting from its for-profit trajectory, restructuring its business arm into a Public Benefit Corporation (PBC) — a model used by mission-first giants like Patagonia and Anthropic.
The nonprofit parent will now regain governance control of the new PBC, reversing plans to pursue unrestricted profit-seeking. CEO Sam Altman says the shift will still allow OpenAI to raise “trillions” (yes, with a T) to pursue beneficial AGI — just with ethics stitched into the hoodie.
What Triggered the Shift?
Here’s what made OpenAI backpedal on its IPO dreams and realign with its original halo:
Mounting pressure: Civic groups, ex-staff, and even Elon Musk’s lawsuit dragged the nonprofit-vs-profit dilemma into public court (and court court).
Brand tension: “Nonprofit AI lab” and “$90B valuation” were increasingly hard to reconcile in one press release.
Mission vs. money: The founding nonprofit will now serve as a governance anchor, reasserting control and ensuring AI is aligned with OpenAI’s stated global benefit mission.
Strategic repositioning: A PBC structure enables ethical oversight and fundraising muscle — without fully selling out to the "move fast, break everything" crowd.
Stakeholder signals: This move may spook investors counting on pure-for-profit returns — but it’s catnip for regulators and researchers rooting for safer AGI.
So… What’s the Deal?
OpenAI is trying to thread the world’s hardest business needle: raise capital like a tech unicorn, govern like a nonprofit, and build godlike AI that doesn’t turn your toaster into a propaganda machine. The Public Benefit Corporation model lets it attract massive funding while publicly committing to societal goals — just like Anthropic, but with a much bigger spotlight and lawsuit backlog.
The company now walks a tightrope between AGI ambition and governance credibility, signaling to the world that the “alignment problem” isn’t just for the models — it’s for the humans running them, too.
Help Your Friends Level Up! 🔥
Hey, you didn’t get all this info for nothing — share it! If you know someone who’s diving into AI, help them stay in the loop with this week’s updates.
Sharing is a win-win! Send this to a friend who’s all about tech, and you’ll win a little surprise 👀
Even Quicker News
📚 250+ tech CEOs — including Microsoft, Adobe, and Uber — are urging U.S. schools to make AI and coding mandatory, citing an 8% wage boost from just one CS class. With a new White House task force backing the push, the goal is clear: graduate AI creators, not just app scrollers.
🤖 Apple’s teaming up with Anthropic to test a new “vibe-coding” platform that lets AI write, edit, and test code inside Xcode — basically turning Claude Sonnet into your team’s chillest new dev. It’s still in-house for now, but if this goes public, your next engineer might run on silicon and serotonin.
⚖️ Robby Starbuck is suing Meta after its AI allegedly accused him of everything short of kicking puppies — including storming the Capitol and denying the Holocaust (none of which he did). Meta’s fix? It wiped his name like a bad Tinder match.
Seeking impartial news? Meet 1440.
Every day, 3.5 million readers turn to 1440 for their factual news. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture, all in a brief 5-minute email. Enjoy an impartial news experience.
Today’s Toolbox
🦜 NVIDIA has launched Parakeet TDT 0.6B V2, a lightning-fast, large-scale English ASR model that can chew through 24-minute audio chunks in a single go — punctuation, capitalization, and timestamps included. Clocking in at 600 million parameters and an RTFx score of 3380 (a.k.a. “blink and it's done”), it’s built to tackle everything from podcasts and sales calls to karaoke-worthy lyrics. It’s optimized for real-world noise, handles accents, and yes, it knows when to insert a comma.
Why This Bird Is a Big Deal
For small biz owners, execs, and AI-savvy teams, here’s why you should care:
Enterprise-ready: Ideal for building voice assistants, transcribers, analytics dashboards, and more.
Built for speed & scale: Trained on 120,000 hours of audio, it eats up large audio files without choking.
Precision features: Delivers word-level timestamps — great for searchability, subtitling, and legal logs.
Noise-tolerant: Performs consistently even with music, background noise, or your intern’s lunch break TikToks.
Plug-n-play: Integrates easily with NVIDIA NeMo, supports .wav and .flac, and runs on most modern NVIDIA GPUs.
If your company touches voice in any form — think call centers, content creation, field notes, or even legal dictation — this model brings industrial-grade transcription into your hands without needing to hire a team of caffeinated interns. It’s accurate, scalable, and doesn’t complain about background music or thick accents.
Stop manually transcribing interviews or sales calls. Hook this model into your workflow, auto-generate searchable logs, and let your team focus on insights—not rewinding audio at 0.5x speed.
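If you want to kick the tires, here’s a minimal sketch of that plug-and-play workflow with the NeMo toolkit in Python. It follows NVIDIA’s published examples for this model, but the audio filename is a placeholder and keyword arguments like timestamps can shift between NeMo releases, so treat the model card for your version as the source of truth.

```python
# Minimal sketch: transcribe a local file with Parakeet TDT 0.6B V2 via NVIDIA NeMo.
# Assumes `pip install -U "nemo_toolkit[asr]"` and a 16 kHz mono .wav/.flac file on disk.
import nemo.collections.asr as nemo_asr

# Download the pretrained checkpoint (cached locally after the first run).
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-tdt-0.6b-v2")

# Transcribe with punctuation/capitalization and word-level timestamps.
results = asr_model.transcribe(["sales_call.wav"], timestamps=True)  # placeholder filename

print(results[0].text)  # full transcript

# Word-level timestamps: handy for searchable logs, subtitles, and legal records.
for word in results[0].timestamp["word"]:
    print(f'{word["start"]:.2f}s - {word["end"]:.2f}s: {word["word"]}')
```

From there, the unglamorous but valuable step is piping the transcript and timestamps into whatever search index, CRM, or subtitle pipeline your team already uses.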
🧪 Test the Prompt
A playground for your imagination (and low-key prompt skills).
Each send, we give you a customizable DALL·E prompt inspired by a real-world use case — something that could help you in your business or job if you wanted to use it that way. But it’s also just a fun creative experiment.
You tweak it, run it, and send us your favorite. We pick one winner to feature in the next issue.
Bonus: you’re secretly getting better at prompt design. 🤫
👑 The winner is…
Last week, we challenged you to test GPT-4o’s visual generation skills with this prompt.

Congrats to Ben from San Jose!🥳
Want to be featured next? Keep those generations coming!
🎨 Prompt: The Thought Loom
“Design a towering vertical machine that weaves a person’s passing thoughts into an endless flowing tapestry. The loom’s frame is built from [unusual rigid material], while the threads are made of [intangible concept or sensory input]. Ghostlike weavers made of translucent mesh guide the threads into place, forming a glowing pattern that shifts with emotion. The tapestry cascades downward, vanishing into clouds.
Style: high-art concept illustration, vertical perspective, silk textures, soft light refractions, awe-inspiring scale.”
We’ll be featuring the best generations in our next newsletter!
📊 Poll Results: Scanning Your Eyeballs
The biggest group of you (43%) said “yes” to eyeball scans for crypto and password-free living.
Close behind? (36%) The “maybe” crowd, waiting for better UX and some reassurance.
Only a few (9%) said “no way”, while 12% thought it was a Black Mirror rerun.
The future’s blurry — but you’re mostly curious!
FEEDBACK
How was today's everydAI?
DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. Please be careful and do your own research.