Physical AI in the Surgical Theater: When Robots Truly Understand Surgery

In July 2025, a surgical robot at Johns Hopkins University accomplished something that would have seemed like science fiction just a few years ago: it performed a lengthy phase of gallbladder removal without any human intervention. But this wasn't just another incremental advance in robotic surgery. The Surgical Robot Transformer-Hierarchy (SRT-H) didn't simply execute preprogrammed movements—it responded to voice commands, self-corrected when encountering unexpected scenarios, and adapted to individual anatomical features in real time.

"This represents a fundamental shift from robots that execute specific tasks to robots that truly understand surgical procedures," the research team noted. Built using the same machine learning architecture as ChatGPT and trained on videos of actual surgeries, SRT-H marks medicine's entrance into the era of physical AI—systems that don't just process information but interact intelligently with the physical world.

For medical device leaders, this milestone signals more than an evolution in surgical robotics. It represents a transformation in how we must think about device development, clinical validation, and the very nature of surgical innovation itself.

From Task Execution to Surgical Understanding

The distinction between traditional surgical robots and physical AI systems like SRT-H isn't merely technical—it's conceptual. Previous autonomous surgical systems, like the 2022 STAR (Smart Tissue Autonomous Robot), performed impressive feats but required highly controlled environments and marked tissue. They were, in essence, sophisticated automation.

Physical AI operates differently. These systems build internal models of surgical procedures, understand context, and make real-time decisions based on what they observe. When a surgeon tells SRT-H to "grab the gallbladder head," the system interprets the command within its understanding of the procedure, identifies the relevant anatomy, and executes the action while continuously monitoring for complications.
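The two-level structure described above, a high-level policy that turns a surgeon's language command into a subtask and a low-level policy that turns that subtask into motion, can be sketched in miniature. Everything below is illustrative: the class names, the keyword matching, and the motion primitives are invented for exposition and bear no relation to SRT-H's actual learned models.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Toy stand-in for an endoscopic camera frame plus robot state."""
    visible_structures: list
    bleeding_detected: bool

class HighLevelPolicy:
    """Maps a language command plus the current observation to a discrete
    subtask. Real systems use a learned vision-language model; this uses
    keyword matching purely to show the control flow."""
    def plan(self, command: str, obs: Observation) -> str:
        if obs.bleeding_detected:
            return "pause_and_alert"  # self-correction: defer to the human
        if "grab" in command and "gallbladder" in command:
            return "grasp_gallbladder_head"
        return "hold_position"

class LowLevelPolicy:
    """Turns a subtask into a (toy) motion primitive. A real system would
    emit continuous joint trajectories from a learned policy instead."""
    def act(self, subtask: str) -> str:
        primitives = {
            "grasp_gallbladder_head": "move_gripper_to(gallbladder_head); close_gripper()",
            "pause_and_alert": "freeze(); notify_surgeon()",
            "hold_position": "hold()",
        }
        return primitives[subtask]

obs = Observation(visible_structures=["gallbladder", "cystic_duct"],
                  bleeding_detected=False)
high, low = HighLevelPolicy(), LowLevelPolicy()
subtask = high.plan("grab the gallbladder head", obs)
print(subtask, "->", low.act(subtask))
```

The point of the sketch is the division of labor: language understanding and context live in the high-level policy, while the low-level policy handles execution, and a complication observed at either level can interrupt the plan.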

This shift has profound implications for device development. As our recent PatchClamp collaboration demonstrated, successful surgical innovation emerges from deep understanding of surgical workflow and human factors. Physical AI doesn't eliminate this requirement—it amplifies it. Devices must now integrate with systems that actively interpret the surgical field rather than simply providing fixed functionality.

The Infrastructure Behind the Intelligence

While autonomous gallbladder surgery captures headlines, the more consequential story for device innovators lies in the infrastructure enabling these breakthroughs. NVIDIA's partnerships across healthcare, announced at the January 2025 J.P. Morgan Healthcare Conference, reveal the computational foundation being built beneath surgical AI.

Mayo Clinic's deployment of NVIDIA's DGX Blackwell SuperPOD infrastructure on July 28, 2025, provides a concrete example. The system reduces pathology analysis time from four weeks to one week while processing Mayo's repository of 20 million digitized whole-slide images and 10 million patient records. The Atlas foundation model, trained on 1.2 million histopathology images, now operates across pathomics, drug discovery, and precision medicine applications.

This isn't just faster computing—it's a qualitative change in what's computationally feasible in clinical settings. The implications extend beyond imaging. When robotic systems can process and respond to surgical situations in real time, the devices interfacing with those systems must meet new performance standards.

Digital Twins: Training Grounds for Physical AI

Perhaps the most transformative application of physical AI in surgery isn't happening in operating rooms at all—it's occurring in digital simulations. The concept of "digital twins" has evolved from engineering visualization tools into comprehensive training environments where AI systems can log more surgical experience than a human could accumulate in multiple lifetimes.

As NVIDIA's healthcare leadership has noted, "Our AI will be trained, tested, and enhanced in the digital world more than a person could ever be in the real world. Surgeons can now train in simulation. Just as they train autonomous vehicles to drive, surgeons will inevitably be trained using augmented intelligence."

The Medical World Model research published in June 2025 demonstrated proof of concept for simulation-based treatment planning, showing a 13% improvement in treatment selection. While this remains early academic research, it validates a fundamental shift: surgical AI can rehearse procedures, explore edge cases, and refine approaches in simulated environments before ever entering an operating room.

For device developers, this creates both opportunity and complexity. Products must perform reliably not just in physical testing but within digital twin environments where AI systems train and validate their surgical understanding. The testing frameworks we've relied upon for decades may need substantial evolution.

The Competitive Landscape Intensifies

The surgical robotics market isn't waiting for this future to arrive—it's actively building it. Intuitive Surgical's da Vinci 5 system, which received FDA clearance on March 14, 2024, and began broader commercial rollout in 2025, features 10,000 times the computational power of its predecessor. This isn't simply about better graphics or faster processing—it's about enabling the kind of real-time AI analysis that physical AI systems require.

Johnson & Johnson's Monarch Quest platform received FDA 510(k) clearance in March 2025 following integration of NVIDIA's RTX platform, which delivered a 260% increase in computational power for real-time AI navigation in robotic-assisted bronchoscopy. The platform reduces time to treatment by an average of three weeks compared to traditional bronchoscopy approaches—a clinically meaningful outcome that demonstrates AI's practical value beyond technological sophistication.

Meanwhile, J&J's OTTAVA system completed its first clinical cases in April 2025 at Memorial Hermann-Texas Medical Center, targeting de novo authorization for multiple general surgery procedures. CMR Surgical's Versius platform, which received FDA marketing authorization in October 2024, has completed over 30,000 procedures internationally and established itself as the second most popular soft-tissue surgical robot worldwide.

Each of these systems represents a different bet on how AI will integrate into surgical practice. For device developers, the diversity of platforms creates both fragmentation challenges and opportunities for strategic partnerships.

Beyond the Operating Room: Physical AI in Healthcare Delivery

Surgical robotics may capture the spotlight, but physical AI's impact extends throughout healthcare delivery. Foxconn's Nurabot nursing robot, currently in pilot deployment, has demonstrated a 20-30% reduction in nursing workload, with commercial launch planned for 2026. These systems don't just automate tasks—they navigate complex clinical environments, interpret patient needs, and coordinate with human staff.

The integration challenges here mirror those in surgical robotics. When AI systems actively participate in care delivery, medical devices must communicate not just with human users but with intelligent systems that interpret clinical context. The protocols and interfaces we've standardized for human operation may need fundamental rethinking.

Healthcare organizations are turning to third-party vendor partnerships at unprecedented rates: 61% opt for partnerships, compared with 20% for in-house development and 19% for off-the-shelf solutions. The partnership model dominates because specialized AI companies need health system partners for validation, while health systems lack the resources to build independently.

For device companies, this suggests that go-to-market strategies must account for increasingly complex integration ecosystems where AI platforms, healthcare systems, and medical devices must work in concert.

What This Means for Device Development

The transition to physical AI in surgery creates several strategic imperatives for device innovators:

Rethink testing and validation frameworks. When devices interact with AI systems that adapt and learn, traditional fixed-condition testing becomes insufficient. Validation must account for how devices perform across the range of behaviors AI systems might exhibit—including edge cases that emerge only after extended real-world deployment.

Design for integration from the start. The days of designing surgical devices in isolation are ending. Physical AI systems become partners in procedures, and devices must communicate effectively with these intelligent systems. This requires new thinking about interfaces, data exchange, and interoperability.

Invest in simulation capabilities. As digital twins become standard training environments for surgical AI, devices must demonstrate reliable performance in simulated environments. This may require new partnerships with simulation platform providers and investments in digital representations of physical devices.

Consider the regulatory landscape. The FDA has authorized over 1,000 AI-enabled medical devices, with 221 devices authorized in 2024 alone. However, research shows that only 1.3% were supported by randomized controlled trial evidence, and 43% of recalls occurred within one year of authorization. The regulatory pathway exists, but the evidence requirements remain in flux.

Build strategic flexibility into development roadmaps. With multiple competing robotic platforms, diverse AI architectures, and rapidly evolving clinical practices, device development strategies must maintain flexibility. The winning approach five years from now may look substantially different from today's leading solutions.
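The first imperative above, validating device behavior across the range of inputs an adaptive system might produce, can be made concrete with randomized (property-based) scenario testing. The sketch below is a minimal illustration under invented assumptions: the `DeviceInterface` class, its grip-force limit, and the fuzzing loop are hypothetical, not a regulatory-grade framework.

```python
import random

class DeviceInterface:
    """Hypothetical device-side safety wrapper: clamps commanded grip force
    to a hardware limit and reports whether clamping occurred."""
    MAX_FORCE_N = 5.0

    def command_grip(self, requested_force_n: float):
        clamped = max(0.0, min(requested_force_n, self.MAX_FORCE_N))
        return clamped, clamped != requested_force_n

def fuzz_device(trials: int = 10_000, seed: int = 0) -> None:
    """Drive the interface with randomized commands, deliberately including
    out-of-range values an adaptive controller might emit, and assert that
    the safety invariants hold on every trial."""
    rng = random.Random(seed)
    dev = DeviceInterface()
    for _ in range(trials):
        requested = rng.uniform(-10.0, 20.0)  # intentionally exceeds spec
        applied, was_clamped = dev.command_grip(requested)
        assert 0.0 <= applied <= DeviceInterface.MAX_FORCE_N
        assert was_clamped == (not 0.0 <= requested <= DeviceInterface.MAX_FORCE_N)

fuzz_device()
print("all randomized trials satisfied the safety invariants")
```

The design choice worth noting is that the test asserts invariants (force never exceeds the limit) rather than fixed input-output pairs, which is what makes it meaningful against a partner system whose exact commands cannot be enumerated in advance.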

The Partnership Imperative

The complexity of physical AI integration suggests that traditional device development models may need evolution. Our experience developing devices like PatchClamp—where clinical insight must translate into engineered reality through iterative collaboration—becomes even more critical when AI systems actively participate in procedures.

Successful innovation in this environment requires partnerships that bring together clinical expertise, AI/ML capabilities, regulatory strategy, and deep device development experience. The days of purely sequential development—where clinical needs define requirements, engineers build solutions, and commercial teams bring products to market—are giving way to more integrated, iterative approaches.

This doesn't diminish the importance of core device innovation. If anything, it amplifies it. Physical AI systems need sophisticated devices to interact with. But those devices must be conceived, from the earliest stages, as participants in AI-augmented surgical ecosystems.

Looking Forward

The Johns Hopkins autonomous surgery breakthrough will be remembered as a milestone, but it's more accurately understood as a signpost. We're entering an era where surgical intelligence exists not only in the minds of trained surgeons but in the AI systems that assist, augment, and—increasingly—autonomously perform aspects of procedures.

For device leaders, this transition presents both challenges and extraordinary opportunities. The companies that succeed will be those that recognize physical AI not as a threat to traditional device innovation but as a catalyst for rethinking what surgical devices can accomplish.

The surgical theater of tomorrow won't simply feature better tools—it will be an environment where human expertise, physical AI systems, and thoughtfully designed devices work in concert to achieve outcomes that none could accomplish alone. The question for device innovators isn't whether this future will arrive, but whether they're positioning their organizations to help build it.

At Product Creation Studio, we partner with medical device innovators to navigate the complex intersection of clinical insight, engineering excellence, and emerging technologies. From early-stage concept validation through regulatory approval, our team brings over 20 years of experience transforming breakthrough ideas into market-ready devices. To explore how we can support your next surgical innovation, visit productcreationstudio.com or connect with our team.

