Key Takeaways from WWDC25: ML, AI, and SwiftUI Labs
June 13, 2025 | by Noah Moller

WWDC25 was a massive event for developers interested in Apple’s approach to AI, machine learning, and the future of app design. I spent the week bouncing between the ML & AI and SwiftUI labs, and left with a notebook full of insights, best practices, and a sense of where Apple is steering the developer ecosystem. Here are my highlights.
Apple Intelligence & Foundation Models: What’s New
Apple is going all-in on on-device intelligence. The new Foundation Models framework gives developers direct access to Apple’s own LLMs – the same ones powering Apple Intelligence features systemwide. Here’s what stood out:
- On-Device First: Before reaching for custom CoreML models, check what’s possible with Foundation Models, Vision, and Speech frameworks. Apple’s stack is now surprisingly robust for many use cases.
- Foundation Models Framework: You can now build features like summarization, document extraction, translation, and more right into your app, with just a few lines of Swift (see the sketch after this list). The models run on-device for privacy and speed, and you don’t pay API fees.
- Context Window: The on-device model supports a 4096-token context window (input + output combined). It’s not GPT-4o-sized, but plenty for most mobile tasks.
- Model Updates: Foundation Models get updated in sync with OS releases, so your app can ride the wave of improvements—just make sure to test with “golden prompts” and responses to catch regressions.
- Adapter Training: There’s an Adapter-Training-Toolkit for customizing models (think LoRA for Apple), but you’ll need to retrain adapters when the base model updates.
- No Built-in RAG—Yet: Retrieval-augmented generation (RAG) isn’t built-in, so you’ll need to roll your own if you want to combine search with generation.
- Vision & Speech: Vision’s document recognition now extracts both text and structure—great for receipts, labels, or forms. SpeechAnalysis gets a boost for on-device transcription and sentiment.
- MLX: Apple’s new MLX framework (Python, C++, Swift) is a PyTorch alternative optimized for Apple silicon and supports training/fine-tuning state-of-the-art models locally.
- Tool Use: Foundation Models can call tools (e.g., APIs for weather, contacts, news), making your app’s AI features more current and context-aware; there’s a tool-calling sketch a little further below.
- Multilingual by Default: The models support 15 languages out of the box. Prompt in your preferred language, or use the locale API for system language detection.
- Performance & Rate Limits: Speed is dynamic (speculative decoding, constraint decoding), and there are rate limits—especially in the background—so plan your UX accordingly.
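To make the Foundation Models point concrete, here’s a minimal summarization sketch based on the session-style API shown in the labs. Treat the exact names (SystemLanguageModel, LanguageModelSession, respond(to:)) as my reading of the current betas rather than a final reference:

```swift
import FoundationModels

func summarize(_ articleText: String) async throws -> String? {
    // Check that the on-device model is actually available
    // (it can be unavailable on older hardware or while the model downloads).
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }

    // A session carries the instructions and the running context
    // (remember the ~4096-token combined window).
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences."
    )

    let response = try await session.respond(to: articleText)
    return response.content
}
```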
Bottom line: Apple’s on-device models aren’t as big as OpenAI’s, but they’re free, private, and easy to integrate. For most apps, this is a game-changer.
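And since tool calling was one of the more interesting lab demos, here’s a hedged sketch of how a weather tool might plug into a session. The Tool protocol, the @Generable and @Guide macros, and ToolOutput are how I understood the beta API, and the weather lookup itself is a placeholder:

```swift
import FoundationModels

// A hypothetical tool the model can call when a prompt needs live data.
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Gets the current temperature for a city."

    @Generable
    struct Arguments {
        @Guide(description: "The city to look up")
        var city: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Placeholder value; a real implementation would hit a weather API.
        ToolOutput("It is 21°C in \(arguments.city).")
    }
}

// The session decides when (and whether) to invoke the tool.
func askAboutWeather() async throws -> String {
    let session = LanguageModelSession(
        tools: [WeatherTool()],
        instructions: "Answer weather questions using the available tools."
    )
    let answer = try await session.respond(to: "Do I need a jacket in Cupertino today?")
    return answer.content
}
```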
SwiftUI: Liquid Glass, Performance, and Best Practices
Design got a major glow-up this year with the new Liquid Glass system, but there’s also a ton under the hood for developers:
- Liquid Glass Everywhere: The new design language is all about fluid, translucent surfaces that respond to their environment. It’s not just for looks—Apple’s prepping everyone for spatial computing and mixed reality.
- SwiftUI Upgrades: New APIs make it easy to adopt Liquid Glass. Just build your app with Xcode 26 and most elements (tabs, toolbars, nav stacks) get the new look automatically.
- Performance Revolution: SwiftUI’s rendering pipeline is rebuilt. Lists and scrollable views are now buttery smooth, even with thousands of items. Lazy stacks and new incremental state management mean less memory, fewer bugs, and way less jank.
- Best Practices (2025 Edition), several of which come together in the sketch after this list:
  - Keep views small and modular; use models for business logic.
  - Use @StateObject for owned reference types and @ObservedObject for external ones.
  - Prefer LazyVStack/LazyHStack for long lists.
  - Avoid AnyView unless absolutely necessary.
  - Use ContainerValues, PreferenceKeys, or closures to pass data up the view hierarchy, not bindings.
  - Embrace Observable for architecture; it’s more efficient than ObservableObject.
  - Debug with let _ = Self._printChanges() and conditional breakpoints.
- Accessibility & Animations: Animations are now supported in widgets on visionOS, and accessibility is easier to get right with new APIs.
- Glass in UI: Avoid glass-on-glass layering and glass in scrolling content unless you want to draw attention. Always use the proper container.
- WebView & Attributed Strings: SwiftUI finally gets a first-class WebView and rich text editing in TextEditor.
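Here’s a small sketch pulling a few of these together: an @Observable model, small modular views, and a LazyVStack for a long list. The glassEffect modifier at the end is my assumption about the new Liquid Glass API surface based on the lab demos, so treat that line in particular as illustrative rather than definitive:

```swift
import SwiftUI

// Model object that owns the data; @Observable means views only
// re-render when a property they actually read changes.
@Observable
final class SessionNotes {
    var items: [String] = (1...2_000).map { "Note \($0)" }
}

struct NotesList: View {
    let notes: SessionNotes

    var body: some View {
        ScrollView {
            // LazyVStack only builds rows as they scroll into view.
            LazyVStack(alignment: .leading, spacing: 8) {
                ForEach(notes.items, id: \.self) { item in
                    NoteRow(text: item)
                }
            }
            .padding()
        }
    }
}

// Small, modular leaf view. The glassEffect call below reflects my
// understanding of the Liquid Glass modifier and may differ across betas.
struct NoteRow: View {
    let text: String

    var body: some View {
        Text(text)
            .padding(12)
            .frame(maxWidth: .infinity, alignment: .leading)
            .glassEffect(in: .rect(cornerRadius: 12))
    }
}
```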
Real-World Impact
What does all this mean for developers?
- AI Features for Free: You can now add summarization, translation, and even basic image analysis to your app without worrying about API costs or user privacy.
- Design Once, Ship Everywhere: The unified design language and new SwiftUI APIs mean your app will look and feel modern on iOS, macOS, watchOS, and visionOS with minimal tweaks.
- Performance Worries? Gone: The new SwiftUI rendering and memory improvements mean your app can handle more data, more animations, and more users—without the old headaches.
- Prep for the Future: Apple is laying the groundwork for spatial computing and privacy-first AI. Jump in early, and your app will be ready for whatever comes next.
Final Thoughts
WWDC25 feels like a turning point. Apple’s making it easier (and more fun) to build smart, beautiful, and performant apps, with no massive ML infrastructure required. If you haven’t already, grab Xcode 26, spin up a playground, and start experimenting. The future of Apple development is here, and it’s looking seriously cool.