Unfortunately, you can't capture a model's texture in real time during the LiDAR scanning process (at WWDC21 Apple didn't announce an API for that). However, there's good news – a new methodology has emerged at last. It allows developers to create textured models from a series of shots.
Photogrammetry
Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry
tool. At the output we get a USDZ model with a corresponding texture. To implement the Object Capture API
you need Xcode 13, iOS 15 and macOS 12.
Let me share some tips on how to capture high-quality photos:
- Lighting conditions must be appropriate
- Use soft light with soft (not harsh) shadows
- Adjacent images must have a 75% overlap
- Do not use autofocus
- Images with RGB + Depth channels are preferable
- Images with gravity data are preferable
- Higher resolution and RAW images are preferable
- Do not capture moving objects
- Do not capture reflective or refractive objects
- Do not capture objects with specular highlights
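Since continuous autofocus can subtly shift the lens between shots, it helps to lock focus (and exposure) before shooting the series. Here is a minimal sketch, assuming an already-configured `AVCaptureDevice` – the `camera` parameter and the `lockFocus` helper are hypothetical names, not part of any Apple API:

```swift
import AVFoundation

// Lock focus and exposure so adjacent shots stay geometrically
// and photometrically consistent. `camera` is a hypothetical,
// already-configured AVCaptureDevice.
func lockFocus(of camera: AVCaptureDevice) throws {
    try camera.lockForConfiguration()
    if camera.isFocusModeSupported(.locked) {
        camera.focusMode = .locked        // disable continuous autofocus
    }
    if camera.isExposureModeSupported(.locked) {
        camera.exposureMode = .locked     // keep brightness uniform as well
    }
    camera.unlockForConfiguration()
}
```

Call it once after the camera session starts, before capturing the first shot of the series.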
Technically, the iPhone is capable of storing multiple channels as visual data, and data from any iOS sensor as metadata. In other words, we should implement digital compositing techniques. For each shot we must store the following channels – RGB, Alpha (segmentation), Depth data with its Confidence, Disparity, etc. – plus useful data from the digital compass. The depth channel can be taken from the LiDAR scanner (where the precise distance is measured in meters), or from two RGB cameras (whose disparity channels are of mediocre quality). We are able to save all this data in an OpenEXR file or in Apple's double-four-channel JPEG. Depth data must be 32-bit.
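As a sketch of the 32-bit requirement: an `AVDepthData` object delivered by the capture pipeline can be converted to 32-bit float before saving. The conversion call below is a real AVFoundation API; writing the result to a file is left out, and the helper's name is my own:

```swift
import AVFoundation

// Ensure depth data is stored as 32-bit floating point.
// `depthData` would typically come from an AVCapturePhoto or ARFrame.
func depthAs32Bit(_ depthData: AVDepthData) -> AVDepthData {
    guard depthData.depthDataType != kCVPixelFormatType_DepthFloat32 else {
        return depthData    // already 32-bit, nothing to do
    }
    return depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
}
```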
Here's an Apple sample app where a capturing approach was implemented.
To create a USDZ model from a series of captured images, submit these images to RealityKit using a PhotogrammetrySession.
Here's a code snippet that sheds light on this process:
import RealityKit
import Combine

let pathToImages = URL(fileURLWithPath: "/path/to/my/images/")
let url = URL(fileURLWithPath: "model.usdz")

let request = PhotogrammetrySession.Request.modelFile(url: url,
                                                      detail: .medium)

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOverlap = .normal
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal
configuration.isObjectMaskingEnabled = false

guard let session = try? PhotogrammetrySession(input: pathToImages,
                                               configuration: configuration)
else { return }

var subscriptions = Set<AnyCancellable>()

session.output.receive(on: DispatchQueue.global())
              .sink(receiveCompletion: { _ in
                  // handle errors
              }, receiveValue: { _ in
                  // handle session outputs
              })
              .store(in: &subscriptions)

try? session.process(requests: [request])
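The empty `receiveValue` closure above can be filled by switching over the session's output cases. Here is a sketch of such a handler, assuming RealityKit's `PhotogrammetrySession.Output` enum (the `handle` function name is my own):

```swift
import RealityKit

// A possible handler for values arriving in .sink's receiveValue closure.
func handle(_ output: PhotogrammetrySession.Output) {
    switch output {
    case .requestProgress(_, let fractionComplete):
        print("Progress: \(Int(fractionComplete * 100))%")
    case .requestComplete(_, .modelFile(let url)):
        print("USDZ model saved at \(url)")
    case .requestError(_, let error):
        print("Request failed: \(error)")
    case .processingComplete:
        print("All requests have been processed")
    default:
        break
    }
}
```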
A complete version of the code allowing you to create a USDZ model from a series of shots can be found inside this sample app.
He who fights dragons for too long becomes a dragon himself; and if you gaze long into an abyss, the abyss will gaze back into you…