Tim Field

@nobbis

5,609 Followers · 158 Following · 102 Media · 795 Statuses

Building @Metascan3D • Founder @AboundLabs • Prev robotics at Willow Garage, hedge fund quant in London/NYC • From NZ

South Lake Tahoe
Joined May 2008
Pinned Tweet
@nobbis
Tim Field
9 months
NeRFs had a good run. But 3DGS (Gaussian Splatting) now beats them in most respects:
- higher quality
- faster training
- real-time renderable
- initializable from SfM
- easier to understand/implement
- composable w. traditional pipelines
- smaller, interpretable representation
19
64
621
@nobbis
Tim Field
6 years
Real-time photogrammetry with #ARKit
122
2K
7K
@nobbis
Tim Field
4 years
iOS 14's point cloud sample app has improved a lot in beta 4.
68
951
6K
@nobbis
Tim Field
4 years
First look at the iPad Pro LiDAR Scanner
100
1K
6K
@nobbis
Tim Field
4 years
Starting to add LiDAR support to the Abound SDK
26
731
3K
@nobbis
Tim Field
6 years
Multi-user spatial mapping with #ARKit
29
968
3K
@nobbis
Tim Field
4 years
iOS 14 continues to improve LiDAR quality. Beta 5 adds new "smoothedSceneDepth" API.
28
426
2K
@nobbis
Tim Field
6 years
Spatial mapping with #ARKit
37
588
2K
@nobbis
Tim Field
5 years
People segmentation & depth using #ARKit3
39
463
2K
@nobbis
Tim Field
4 years
ARKit 4 Depth API + Scene Meshing
13
348
1K
@nobbis
Tim Field
2 years
In-depth article from Apple Research laying out technical details of the new RoomPlan API in iOS 16.
18
150
1K
@nobbis
Tim Field
5 years
Spatial mapping SDK now in beta. Sign up today and add spatial computing to your iOS app in 15 minutes.
25
246
872
@nobbis
Tim Field
6 years
Real-time surface normals for AR with #DeepLearning
16
197
814
@nobbis
Tim Field
4 years
ARKit 4 Depth API for LiDAR Scanner
8
179
833
@nobbis
Tim Field
6 years
Microsoft's first demo of upcoming 4th gen #Kinect depth camera
6
207
589
@nobbis
Tim Field
5 years
Google published research last week at SIGGRAPH Asia which shows occlusion will be solved in #ARCore in 2019. More robust than Niantic's ML approach or Facebook's feature point approach.
5
156
533
@nobbis
Tim Field
2 years
Apple's RoomPlan API coming to iOS 16. Lets developers add floorplan capture + furniture detection to Swift apps.
9
101
527
@nobbis
Tim Field
8 months
Gaussian Splatting now 10x faster than NeRF for image-to-3D and text-to-3D: "DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation" (same author as Stable-Dreamfusion)
10
91
497
@nobbis
Tim Field
6 years
New in iOS 11.3 - vertical & irregularly shaped surfaces with #ARKit
8
170
418
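In code, opting in is a one-line configuration change. A minimal Swift sketch, assuming an existing ARSCNView named `sceneView`:

```swift
import ARKit

// iOS 11.3 (ARKit 1.5) extends plane detection beyond horizontal surfaces.
// `sceneView` is an assumed, pre-existing ARSCNView.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)
```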
@nobbis
Tim Field
4 years
iOS ARKit Scene Reconstruction API released:
- new ARMeshAnchor with vertices, faces, classification
- per-face segmentation: ceiling, door, floor, seat, table, wall, window
- no vertex color or texturing support
- no access to raw camera output (depth maps)
- 4th gen iPad only
10
126
426
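A minimal Swift sketch of enabling scene reconstruction and reading back the mesh anchors; `session` is an assumed, pre-existing ARSession:

```swift
import ARKit

// The .meshWithClassification flavor adds per-face labels (wall, floor, table, ...).
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
    configuration.sceneReconstruction = .meshWithClassification
}
session.run(configuration)

// Later, for each ARMeshAnchor (e.g. from session(_:didAdd:) / session(_:didUpdate:)):
func summarize(_ anchor: ARMeshAnchor) {
    let geometry = anchor.geometry
    print("vertices:", geometry.vertices.count,
          "faces:", geometry.faces.count)   // per-face labels live in geometry.classification
}
```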
@nobbis
Tim Field
5 years
A demo from my last talk: occlusion, physics, and shadows for mobile AR using spatial mapping from @aboundlabs .
5
66
405
@nobbis
Tim Field
5 years
Had a great time speaking at #ARKitNYC last night and demoing new features coming to the Abound SDK, e.g. 3D object localization (originally built for an enterprise customer in hospitality.)
11
83
379
@nobbis
Tim Field
2 years
First tests integrating #MobileNeRF into @Metascan3D
6
45
324
@nobbis
Tim Field
8 months
Testing Gaussian Splatting composited with regular 3D in our web viewer.
6
18
227
@nobbis
Tim Field
4 years
ARKit 4 depth maps (256x192 px)
5
44
210
@nobbis
Tim Field
6 years
#ARKit 2.0 auto-generates environment cube map textures from the camera during the AR session, i.e. builds 360° panorama images to enable realistic image-based lighting for virtual objects. Apple has leapfrogged Google in AR.
13
60
200
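Enabling it is a single configuration property. A sketch, assuming an existing ARSCNView named `sceneView`:

```swift
import ARKit

// ARKit 2 places AREnvironmentProbeAnchors and fills their cube maps from camera imagery.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic
sceneView.session.run(configuration)
```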
@nobbis
Tim Field
2 years
I built a MobileNeRF desktop viewer. Lots of potential here for 3D capture of difficult objects & scenes.
1
26
210
@nobbis
Tim Field
6 years
2014: Cloud-based depth camera capture - before it got hard (when Apple acquired PrimeSense)
3
38
183
@nobbis
Tim Field
4 years
Try it yourself: Here was beta 1:
@nobbis
Tim Field
4 years
ARKit 4 Depth API for LiDAR Scanner
8
179
833
3
12
177
@nobbis
Tim Field
3 years
Reality to web in 2 mins @forgecam
4
20
173
@nobbis
Tim Field
5 years
Spring in NYC. Finally warm enough for some outside spatial mapping.
@aboundlabs
Abound
5 years
Abound SDK 0.4.6 released -
• Draws grid overlaid on mesh (see below)
• 30% faster meshing
Thanks to Stykka Labs () for help testing.
7
58
263
6
14
165
@nobbis
Tim Field
2 years
ARKit’s the best VIO system, says new study.
6
35
163
@nobbis
Tim Field
6 years
>6 meter range (with <1cm error at 4 meters), uses <1W power, and sees bouncing ping pong balls:
10
52
133
@nobbis
Tim Field
3 years
Trying out Photo Mode (coming soon to @forgecam ) with WebXR on an Oculus Quest.
6
17
123
@nobbis
Tim Field
11 months
Object Capture coming to iOS 17:
2
30
118
@nobbis
Tim Field
2 years
Apple's delivering for 3D capture this year:
- ARKit: captureHighResolutionFrame, 4K video
- Object Capture: point cloud output (ideal for registration, NeRFs)
- RoomPlan: parametric room capture with LiDAR
- MapKit: Look Around API access
0
21
114
@nobbis
Tim Field
4 years
ARKit 4 adds Depth API to "access even more precise depth information" for the LiDAR Scanner and Location Anchors to "pin AR experiences to a specific point in the world."
3
11
100
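A Swift sketch of the Depth API half of this (Location Anchors not shown); `session` is an assumed, pre-existing ARSession:

```swift
import ARKit

// Opt in to LiDAR depth, then read the per-frame depth and confidence maps.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    configuration.frameSemantics.insert(.sceneDepth)
}
session.run(configuration)

// Per frame (e.g. in session(_:didUpdate:)):
if let depth = session.currentFrame?.sceneDepth {
    let depthMap = depth.depthMap            // CVPixelBuffer, Float32 meters
    let confidence = depth.confidenceMap     // CVPixelBuffer of ARConfidenceLevel values
    _ = (depthMap, confidence)
}
```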
@nobbis
Tim Field
2 years
iOS 15.4 beta adds AVFoundation support for the LiDAR Sensor (registered to a color camera.) Pros: more configurable, less overhead than ARKit. Excited to leverage this in @Metascan3D .
2
17
102
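A sketch of what that looks like, assuming iOS 15.4+ and a LiDAR-equipped device; error handling and session configuration details are elided:

```swift
import AVFoundation

// Use the LiDAR camera directly through AVFoundation (no ARKit), iOS 15.4+.
func makeLiDARSession() -> AVCaptureSession? {
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device) else { return nil }

    let session = AVCaptureSession()
    session.addInput(input)

    let depthOutput = AVCaptureDepthDataOutput()
    if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }

    session.startRunning()
    return session
}
```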
@nobbis
Tim Field
5 years
Making sure I can never get lost in my local bodega.
@aboundlabs
Abound
5 years
Abound SDK 0.4.9 released -
• Increased texture resolution by 2x
• Reconstruction optimizations (GPU usage down 10%)
• Improved final mesh clean-up
Thanks to @GetGolfScope for help testing.
9
56
250
3
24
88
@nobbis
Tim Field
3 years
Toying around with @forgecam VR support. Added an "Enter VR" button for scans shared to the web.
3
17
85
@nobbis
Tim Field
2 years
Awesome research showing interactive rendering of NeRFs on mobile in a web browser. Try it out:
@taiyasaki
Andrea Tagliasacchi 🇨🇦🏔️
2 years
📢📢📢 Thrilled to introduce "𝐌𝐨𝐛𝐢𝐥𝐞𝐍𝐞𝐑𝐅: exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures" → with 𝐥𝐢𝐯𝐞 𝐝𝐞𝐦𝐨𝐬 (Internship project lead by @ZhiqinChen3 )
7
405
2K
1
19
80
@nobbis
Tim Field
2 years
DreamFusion: Text-to-3D. Generates a NeRF from scratch using a prompt, just using a 2D text-to-image model. Groundbreaking research from Google.
3
15
81
@nobbis
Tim Field
2 years
Tonkotsu ramen and karaage from Marafuku (East Village, NYC) @Metascan3D
1
21
79
@nobbis
Tim Field
3 years
RealityKit 2 adds PhotogrammetrySession – lets you create 3D objects from a folder of photographs on macOS. No iOS support, but will make implementing server-side 3D reconstruction trivial.
5
17
78
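A minimal sketch of the macOS API, with placeholder paths; the detail level and output handling here are illustrative, not prescriptive:

```swift
import Foundation
import RealityKit

// macOS 12+: feed a folder of photos to PhotogrammetrySession and request a .usdz model.
func reconstruct(photosFolder: URL, outputModel: URL) throws {
    let session = try PhotogrammetrySession(input: photosFolder)

    Task {
        // Progress and completion arrive on this async output stream.
        for try await output in session.outputs {
            if case .processingComplete = output {
                print("Reconstruction finished:", outputModel.path)
            }
        }
    }

    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])
}
```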
@nobbis
Tim Field
2 years
Before LiDAR & ARKit meshing, we had real-time stereo, meshing and texturing running on an iPhone XS. Excited for iOS 16’s improved camera API flexibility, which will allow us to unify our pipeline and bring these features to @Metascan3D soon.
@aboundlabs
Abound
4 years
Testing larger scan limits with our capture app in Central Park.
50
513
3K
2
14
73
@nobbis
Tim Field
4 years
@OsFalmer Yeah, can't wait to see how every ARKit feature improves with LiDAR.
@nobbis
Tim Field
5 years
People segmentation & depth using #ARKit3
39
463
2K
2
6
68
@nobbis
Tim Field
2 years
Photogrammetry pose estimation is gradient-based optimization of reprojection errors to recover camera parameters using sparse features. NeRFs are gradient-based optimization of reprojection errors to estimate the entire joint distribution of surfaces and view-dependent shading.
1
5
73
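For reference, a rough statement of the shared objective in standard bundle-adjustment notation (not from the tweet): \(X_i\) are 3D points, \((R_j, t_j)\) camera poses, \(K_j\) intrinsics, \(\pi\) the perspective projection, and \(x_{ij}\) the observed 2D features.

```latex
\min_{\{R_j, t_j\},\,\{X_i\}} \;\sum_{i,j} \bigl\lVert \pi\!\bigl(K_j (R_j X_i + t_j)\bigr) - x_{ij} \bigr\rVert^2
```

In the NeRF case the scene unknowns \(\{X_i\}\) are replaced by a radiance field and the projection step by volume rendering along camera rays, but the loss is still a photometric/reprojection residual minimized by gradient descent.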
@nobbis
Tim Field
4 years
@Briggsnovo Working now to integrate it into our capture app - early results soon.
@aboundlabs
Abound
4 years
Testing larger scan limits with our capture app in Central Park.
50
513
3K
1
3
65
@nobbis
Tim Field
6 years
#ARKit 2.0 sharing and persistence is super easy to use: serialize sparse feature point cloud (ARWorldMap), send it to another device (or save it for later), and then initialize a new ARSession with the map.
3
16
68
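A Swift sketch of the round trip; `session` is an assumed, pre-existing ARSession:

```swift
import ARKit

// Capture and archive the current world map (to send or save).
session.getCurrentWorldMap { worldMap, _ in
    guard let map = worldMap,
          let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                       requiringSecureCoding: true) else { return }
    // send `data` to another device, or write it to disk
    _ = data
}

// On the receiving side: relocalize a new session against the saved map.
func runSession(with data: Data, on session: ARSession) throws {
    let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```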
@nobbis
Tim Field
3 years
Back in NYC. I see now why so many 3D scans are of graffiti. (Taken with )
4
5
64
@nobbis
Tim Field
8 months
3DGS WIP
5
5
72
@nobbis
Tim Field
4 years
Video is Apple's sample app with a 2 line change to enable smoothing. Enable in ARConfiguration.FrameSemantics - Access via ARFrame -
2
10
66
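For reference, the two lines in question; `configuration` and `frame` are assumed to be the sample app's existing ARConfiguration and current ARFrame:

```swift
import ARKit

// iOS 14 beta 5+: opt in via frame semantics, then read the temporally smoothed depth.
configuration.frameSemantics.insert(.smoothedSceneDepth)   // 1. enable in ARConfiguration.FrameSemantics
let smoothedDepth = frame.smoothedSceneDepth?.depthMap     // 2. access via ARFrame (CVPixelBuffer, Float32 meters)
```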
@nobbis
Tim Field
6 years
@xernobyl @CasualEffects Yes, rear camera on an iPhone 8 Plus.
2
9
60
@nobbis
Tim Field
4 years
Cool effects, but their 4 sec video clip would take your phone 5 days to compute.
@jbhuang0604
Jia-Bin Huang
4 years
Check out our #SIGGRAPH2020 paper on Consistent Video Depth Estimation. Our geometrically consistent depth enables cool video effects to a whole new level! Video: Paper: Project page:
10
212
894
1
5
58
@nobbis
Tim Field
5 years
#ARKit3 introduces Collaboration - continuously merges world maps (& anchors) on multiple devices during a multi-user AR session.
2
14
59
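A Swift sketch of the moving parts; the peer-to-peer transport (e.g. MultipeerConnectivity) is up to you:

```swift
import ARKit

// ARKit 3 collaborative sessions: each peer broadcasts collaboration data and
// merges what it receives into its local session.
final class CollaborationHandler: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        session.run(configuration)
    }

    // Outgoing: archive and send to peers.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        if let payload = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            // e.g. mcSession.send(payload, toPeers: peers, with: .reliable)
            _ = payload
        }
    }

    // Incoming: unarchive and merge into the local session.
    func receive(_ payload: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(
                ofClass: ARSession.CollaborationData.self, from: payload) {
            session.update(with: data)
        }
    }
}
```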
@nobbis
Tim Field
2 years
Testing out our new render pipeline. Captured and rendered on an iPhone with @Metascan3D
7
5
53
@nobbis
Tim Field
3 years
Early days, but look forward to getting our tech into more people's hands.
@aboundlabs
Abound
3 years
We're excited to announce Forge is now available on the App Store – . Built with our Abound SDK, Forge LiDAR 3D Scanner lets you quickly capture & share spaces in 3D using an iPhone 12 Pro or iPad Pro 2020. Please try out v1.0 and send us your feedback.
20
40
220
3
2
51
@nobbis
Tim Field
4 years
Always nice to see research make it out of the lab. Exciting work from Google.
@nobbis
Tim Field
5 years
Google published research last week at SIGGRAPH Asia which shows occlusion will be solved in #ARCore in 2019. More robust than Niantic's ML approach or Facebook's feature point approach.
5
156
533
2
8
52
@nobbis
Tim Field
7 years
No world-facing depth cam on #iPhoneX ? Means mono RGB reconstruction required to take #ARKit beyond flat planes ()
2
8
53
@nobbis
Tim Field
4 years
More LiDAR setbacks: Apple blocks recording ARKit on the new iPad Pro.
7
8
48
@nobbis
Tim Field
6 years
@hacktherainbow Not all in RAM. Paging data in & out of SSD means >100 hr scan possible on iPhone X (in theory)
0
1
43
@nobbis
Tim Field
5 years
iPhone 11 doubles the camera’s field of view to 120° - meaning much more robust tracking & relocalization.
1
6
48
@nobbis
Tim Field
2 years
Getting a mesh out of MobileNeRF using marching cubes doesn't make sense - there's no voxel representation. And, yes, you do get texture maps (albeit needing a custom fragment shader.) I might start writing a blog post series on 3D scanning, NeRF, etc. Could be helpful?
@cpheinrich
Chris Heinrich
2 years
@kitaedesigns Right, getting a mesh out is easyish -- you can use an adapted marching cubes. The geometry is usually not very clean though, and more importantly, you can't get the material maps (yet).
2
0
0
8
1
49
@nobbis
Tim Field
9 months
@nadirabid NeRFs are an implicit volume representation using raycasting with a warping function (for unbounded spaces) and a neural network for color. 3DGS is an explicit Gaussian representation using rasterization (with no warping) and spherical harmonics - no neural networks needed.
2
2
49
@nobbis
Tim Field
5 years
Even the latest iPhones don't support all of RealityKit's features.
5
5
44
@nobbis
Tim Field
5 years
Resolution of segmentation mask and depth map is 256x192, i.e. camera image is downsampled by 7.5x.
1
6
34
@nobbis
Tim Field
9 months
You can't (yet) extract a usable mesh from 3DGS, but I bet that will change. Groundbreaking research from @Snosixtytwo , @GKopanas , Thomas Leimkühler, and George Drettakis:
2
4
42
@nobbis
Tim Field
3 years
I've been asked how to make a LiDAR scanning app. Simplest approach today: take Apple's sample app , build OpenMVS for iOS, and pass the ARMeshAnchors and subsampled images (~2 secs) into it to output a textured mesh. (Same as Displayland from Ubiquity6.)
1
2
42
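A hedged sketch of the capture side only; the class and property names here are illustrative, and the OpenMVS meshing/texturing step is out of scope:

```swift
import ARKit

// Collect the latest ARMeshAnchors plus a keyframe (image + pose + intrinsics)
// roughly every 2 seconds, ready to hand off to an offline reconstruction step.
struct Keyframe {
    let image: CVPixelBuffer
    let pose: simd_float4x4
    let intrinsics: simd_float3x3
}

final class ScanCollector {
    private(set) var meshAnchors: [ARMeshAnchor] = []
    private(set) var keyframes: [Keyframe] = []
    private var lastKeyframeTime: TimeInterval = 0

    // Call from your ARSessionDelegate's session(_:didUpdate:) callback.
    func collect(_ frame: ARFrame) {
        meshAnchors = frame.anchors.compactMap { $0 as? ARMeshAnchor }
        guard frame.timestamp - lastKeyframeTime > 2.0 else { return }   // subsample ~every 2 s
        keyframes.append(Keyframe(image: frame.capturedImage,
                                  pose: frame.camera.transform,
                                  intrinsics: frame.camera.intrinsics))
        lastKeyframeTime = frame.timestamp
    }
}
```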
@nobbis
Tim Field
2 years
Even driving at 60 km/h, ARKit is solid (see red line.) Apple’s gone the extra mile to build the most robust AR tracking system.
1
7
38
@nobbis
Tim Field
5 years
Looks like it uses two CNNs: one segments people, then the other computes body depth maps (using a MobileNetV2 architecture.)
2
6
34
@nobbis
Tim Field
6 years
Reading #ARCore Cloud Anchor docs, Google not candid about use of data: "Raw data deleted after 7 days" ... how about aggregate derived data? "Impossible to reconstruct images from sparse point map" ... isn't 50KB feature vector -> RGB image easy problem for deep learning?
2
12
38
@nobbis
Tim Field
8 months
Thought it looked familiar: Polycam uses the web viewer @antimatter15 built 3 weeks ago. Fair game (it's MIT licensed) but open source really does help VC-backed startups move fast.
2
4
40
@nobbis
Tim Field
6 years
Computer vision, AR, machine learning and beautiful UI combine seamlessly in 15 secs to solve a real-world problem. Awesome work from @rengle820 .
@Rengle820
Ryan Engle | GOLF+
6 years
Big update to @GetGolfScope in the works. Get a read without walking to the hole. #arkit #golf #putting #TigerVision Imagine this on something like @magicleap
3
12
51
1
11
32
@nobbis
Tim Field
3 years
3D reconstructions can be represented in several ways. A rough hierarchy:
1. Point cloud – less info
2. Depth maps w. camera pose
3. Triangle mesh
4. Signed distance field
5. Radiance field – more info
There's no best: each has uses. But converting more -> less info is trivial.
2
1
35
@nobbis
Tim Field
3 years
With WWDC weeks away, some iOS 15 predictions:
- ARKit gets 3D object detection (like Google's Objectron)
- AVKit gets support for LiDAR camera (w/o ARKit)
- Nearby Interaction (and ARKit?) gets AirTag support
6
0
36
@nobbis
Tim Field
1 year
Wow. Apple just released a #stablediffusion library that generates 512px images on an iPhone in <30s. Requires iPhone 12 or newer.
@atiorh
Atila
1 year
Delighted to share #stablediffusion with Core ML on Apple Silicon built on top of @huggingface diffusers! 🧵
9
92
503
1
4
35
@nobbis
Tim Field
6 years
@Soranomaru Publicly, not yet - working with early access customers for now.
1
1
30
@nobbis
Tim Field
10 months
Shout out to @StabilityAI and @EMostaque for breaking the Terms of Service of Thingiverse, Sketchfab, Polycam, etc. which explicitly prohibit "mining and scraping content."
@ruoshi_liu
Ruoshi Liu
10 months
... and shout out to @StabilityAI and @EMostaque for providing computing resources!!
0
0
1
2
6
36
@nobbis
Tim Field
2 years
An embarrassment of riches.
@AIatMeta
AI at Meta
2 years
3D computer vision research just got easier! We’re releasing Implicitron, an extension of PyTorch3D that enables fast prototyping of 3D reconstruction and new-view synthesis methods based on rendering of implicit representations.
8
217
1K
1
4
33
@nobbis
Tim Field
3 years
Building a 3D scanning app on top of ARKit's meshes has a huge issue: they're not intended for that use case. When you reconstruct a static scene, you want to ignore far depth readings if you've already scanned up close, because they're less accurate.
1
1
35
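One way to work around that when fusing depth yourself is to discard far readings before integration. A sketch, where `maxDepth` is an assumed tuning value rather than anything ARKit provides:

```swift
import ARKit

// Keep only depth samples within a cutoff, so near, accurate observations
// aren't averaged together with noisy far ones.
func filteredDepths(from depth: ARDepthData, maxDepth: Float = 3.0) -> [Float] {
    let depthMap = depth.depthMap                       // Float32 meters, 256x192
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }

    var kept: [Float] = []
    for y in 0..<height {
        let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width where row[x] > 0 && row[x] <= maxDepth {
            kept.append(row[x])
        }
    }
    return kept
}
```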
@nobbis
Tim Field
5 years
#ARKit3 adds per-pixel segmentation to frames (only "person" or "none" for now) plus an estimated depth map for people in the scene.
2
7
32
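A minimal Swift sketch; `session` is an assumed, pre-existing ARSession:

```swift
import ARKit

// ARKit 3 people segmentation plus estimated depth for the segmented people.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics = .personSegmentationWithDepth
}
session.run(configuration)

// Per frame:
if let frame = session.currentFrame {
    let mask = frame.segmentationBuffer        // per-pixel "person" / "none"
    let depth = frame.estimatedDepthData       // depth only where people were segmented
    _ = (mask, depth)
}
```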
@nobbis
Tim Field
2 years
I still remember Wednesday, back when Text-to-Video, Text-to-3D, and Text-to-Audio weren't things.
@FelixKreuk
Felix Kreuk
2 years
We present “AudioGen: Textually Guided Audio Generation”! AudioGen is an autoregressive transformer LM that synthesizes general audio conditioned on text (Text-to-Audio). 📖 Paper: 🎵 Samples: 💻 Code & models - soon! (1/n)
96
968
5K
3
5
34
@nobbis
Tim Field
3 years
Very nice – Google just released an open source NeRF project plus real-time web viewer. Almost makes sense to spend a couple of weeks adding a NeRF backend + viewer to @metascan3d . Would love to see how fast their raymarch shader runs on an iPhone 13.
@PeterHedman3
Peter Hedman
3 years
The SNeRG source code is now out! Check it out at if you want to bake your own NeRFs.
6
32
185
1
8
33
@nobbis
Tim Field
2 years
TIL it only takes a few seconds to remove countries from being able to download your app on the App Store.
3
4
26
@nobbis
Tim Field
4 years
Hearing Apple's locked down the ToF camera - devs don't get access to depth maps. Racking my brain to explain why they'd do that (if true.)
11
4
30
@nobbis
Tim Field
4 years
@Dusanwriter This is literally just aggregating the depth maps into a point cloud and displaying them all (around 10 million points here.) Only a few extra lines of code to save the points to a PLY file (or similar.)
4
1
25
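A sketch of those few extra lines; the `points` array is assumed to be the unprojected depth samples described above:

```swift
import Foundation
import simd

// Write an array of 3D points to an ASCII PLY file.
func writePLY(points: [SIMD3<Float>], to url: URL) throws {
    var ply = """
    ply
    format ascii 1.0
    element vertex \(points.count)
    property float x
    property float y
    property float z
    end_header

    """
    for p in points {
        ply += "\(p.x) \(p.y) \(p.z)\n"
    }
    try ply.write(to: url, atomically: true, encoding: .ascii)
}
```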
@nobbis
Tim Field
4 years
@boringarchitect @mattmiesnieks No mesh here, just points. Yes, ARKit 2 introduced loop closure adjustment. To use it for this demo, you'd periodically place anchors and store each depth map's point cloud relative to a nearby anchor (rather than storing in world coordinates, as it does here.)
0
0
25
@nobbis
Tim Field
7 months
"It's just differentiable rendering. Been doing that since Neural Volumes/NeRF in 2019." - CV folks "It uses seed & diffuse 3D reconstruction. PatchMatch Stereo showed that in 2011." - MVS folks "Hah, it draws Gaussian splats! Try EWA splatting in 2002." - CG folks #3DGS
2
1
29
@nobbis
Tim Field
4 years
FYI: Laser-based time-of-flight sensors are a type of LiDAR. I expect Apple's new iPad Pro "LiDAR Scanner" to be similar to the Sony DepthSense IMX516 ToF camera in the Samsung Galaxy S20.
1
3
27
@nobbis
Tim Field
2 years
NeRFs have lots of limitations, but they excel on certain scenes. Awesome addition to the tool belt, alongside real-time scanning and photogrammetry.
3
4
27
@nobbis
Tim Field
5 years
"Build AR on our platform to win prizes" leaderboard: Magic Leap: ~$10,000,000 Niantic: $1,000,000 Amazon: $101,845 6D: $1,000 Placenote: $100 (+ a hoodie)
2
1
25
@nobbis
Tim Field
6 years
@sillechris Not sending meshes. Server integrates depth maps from phones into volumetric model and streams changes. Meshes built on device.
1
3
21
@nobbis
Tim Field
4 years
Latest research from Google uses dual-pixel cameras to improve depth from motion accuracy by 30%.
1
0
22
@nobbis
Tim Field
4 years
Paper: "For a video of 244 frames [4s], training on 4 NVIDIA Tesla M40 GPUs takes 40 min." Each video requires training. Back of napkin: 4 M40 GPU is 1MW TDP. Assume 5W mobile TDP = 200x slower, then 40 min x 200 = 5.5 days. (Ignore thermal throttling, lower mem bandwidth, etc.)
4
2
24
@nobbis
Tim Field
2 years
Great work by Apple - looks very polished. Also, a foundational piece of the larger puzzle.
1
4
23