Apple ARKit 2.0 – a closer look
At the beginning of June Apple announced ARKit version 2.0 and with it a couple of very exciting features, such as persistent and shared experiences, 3D object detection and the new USDZ file format, which can share AR objects between different apps within the iOS ecosystem. Let's have a closer look.
ARCore versus ARKit
About a month ago Google announced version 1.2 of their ARCore platform with new features like cloud anchors, vertical plane detection, image detection and Sceneform, which helps developers without 3D experience build AR applications.
It seems that every new update from both platforms brings some new features, as well as introducing features that were previously only offered on the competing platform.
While both platforms offer a similar feature set, there are some slight differences in the services offered. Take for instance a new feature that both parties recently announced: the ability to share an AR session between users on different devices. ARKit calls the ability to share AR sessions “Shared experiences”, while ARCore calls it “Cloud anchors”. Beyond the different names, the two solutions take different approaches to implementing this feature.
ARCore's approach is to send a user's raw visual mapping data to Google's cloud service and then process it server side to create and store a point cloud. Because a server is used to store and process all the visual mapping data, the service works cross-platform between Android and iOS devices.
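As a rough illustration of this flow, the ARCore iOS SDK exposes Cloud Anchors through a `GARSession` that runs alongside the regular ARKit session. The sketch below is a minimal, hedged example based on the SDK's documented API at the time of writing (the API key string is a placeholder, and error handling is trimmed for brevity):

```swift
import ARKit
import ARCore  // ARCore iOS SDK, which provides the Cloud Anchors service

final class CloudAnchorHost: NSObject, GARSessionDelegate {
    private var garSession: GARSession?

    func start() throws {
        // Requires a real API key from the Google Cloud console.
        garSession = try GARSession(apiKey: "YOUR_API_KEY",
                                    bundleIdentifier: nil)
        garSession?.delegate = self
    }

    // Host a local ARKit anchor: visual mapping data around the anchor is
    // uploaded to Google's servers, which return a shareable identifier.
    func host(anchor: ARAnchor) throws {
        _ = try garSession?.hostCloudAnchor(anchor)
    }

    // Called once the server has processed the uploaded mapping data.
    func session(_ session: GARSession, didHostAnchor anchor: GARAnchor) {
        // The cloud identifier can now be sent to other users, who resolve
        // it on their own device to place the same anchor.
        print("Hosted cloud anchor: \(anchor.cloudIdentifier ?? "?")")
    }

    func session(_ session: GARSession, didFailToHostAnchor anchor: GARAnchor) {
        print("Hosting failed with state: \(anchor.cloudState)")
    }
}
```

Note that the hosting device never receives the processed point cloud back; it only gets the identifier, which is what keeps the stored mapping data server side.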
Being able to share AR sessions across platforms is a very practical feature. However, having mapping data stored in the cloud has rightly sparked some discussion about the potential privacy issues involved with this approach. According to Google, the visual mapping information that is sent will be deleted from the server after seven days, and the stored point cloud data will never be sent directly to a user's device. It will be interesting to see whether users will be comfortable with this approach once the feature becomes more widespread.
Meanwhile, ARKit stores the spatial mapping state and uses a peer-to-peer connection that is set up between local devices to share the spatial mapping data between users. The devices try to match the shared data with each other; if this succeeds, the placed objects are synchronized between devices and subsequently anchored at their original real-world positions. This approach largely eliminates the privacy problem Google's approach is facing, but brings a potential new problem with it: the peer-to-peer connection could lead to users having their session terminated in the event of a temporary loss of connectivity.
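In code, this peer-to-peer flow boils down to serializing an `ARWorldMap` and sending it over a local transport such as MultipeerConnectivity. The sketch below shows the two sides under that assumption (the `MCSession` setup and delegate wiring are omitted):

```swift
import ARKit
import MultipeerConnectivity

// Sender: capture the current world map and ship it to connected peers.
func shareWorldMap(from session: ARSession, via mcSession: MCSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("World map unavailable: \(error?.localizedDescription ?? "unknown")")
            return
        }
        // ARWorldMap supports NSSecureCoding, so it can be archived directly.
        guard let data = try? NSKeyedArchiver.archivedData(
            withRootObject: map, requiringSecureCoding: true) else { return }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers,
                            with: .reliable)
    }
}

// Receiver: decode the map and relocalize against it. Anchors contained in
// the map reappear at their original real-world positions once tracking
// matches the shared data.
func receiveWorldMap(_ data: Data, into session: ARSession) {
    guard let map = try? NSKeyedUnarchiver.unarchivedObject(
        ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

Because the map only ever travels between nearby devices, no mapping data leaves the local network, which is exactly the privacy trade-off described above.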
It is interesting to see both companies take such different approaches to similar features. But it's a fortunate time for the AR field as a whole, as both parties are constantly competing to roll out new features at an astonishingly fast pace.
Hopefully, in the near future, I will write a more in-depth comparison of the current features offered by both platforms. I am personally looking forward to what the coming months and years will bring in terms of AR technology.