Ahead of the Apple Intelligence launch, Apple is voluntarily implementing AI safety guidelines set by the Biden Administration.
Safety First – Apple Implements AI Safety Guidelines Set by the Biden Administration Ahead of Apple Intelligence Launch
At the time of writing, Apple Intelligence hasn't been released in any form, and Apple is implementing the safety guidelines voluntarily, as they are not enforceable by law, according to Bloomberg. That's because the AI safety guidelines haven't gone through Congress and aren't law yet. Once they do become law, AI will be properly regulated in the United States.
Apple isn't the only company implementing these safety guidelines. Microsoft, Google, OpenAI, Meta, and others are already on board, and more will join in. The guidelines matter because they are meant to protect everyone who has access to AI, with priorities such as preventing user discrimination and avoiding threats to national security. In short, they are an important set of rules governing how AI is used.
Apple released iOS 18 beta 4 and macOS Sequoia beta 4 this week. While these updates did bring a handful of changes, they lack Apple Intelligence, a feature Apple demoed extensively at WWDC 2024. However, it was later revealed that Apple has invited developers to Cupertino for a quick recap of its AI. This could mean anything, really. Either Apple wants everyone to stay interested in its new AI-driven journey, or we're extremely close to the beta launch of Apple Intelligence.
I’m leaning more towards the latter.
So far, Apple hasn't given us an official launch date. But if I'm to take an educated guess given how things are unfolding, there's a chance Apple may launch its AI within the next couple of iOS 18 betas.