
Page 1: Deep Learning for Computer Vision - VISION & LEARNING LAB

Deep Learning for Computer Vision, Spring 2019

http://vllab.ee.ntu.edu.tw/dlcv.html (primary)

https://ceiba.ntu.edu.tw/1072CommE5052 (grade, etc.)

FB: DLCV Spring 2019

Yu-Chiang Frank Wang 王鈺強, Associate Professor

Dept. Electrical Engineering, National Taiwan University

2019/05/29

Page 2: Deep Learning for Computer Vision - VISION & LEARNING LAB

Final Challenge #1: Visual Domain Adaptation Challenge (VisDA)

• Domain Adaptation
  • From one domain to another (e.g., USPS → SVHN)

• Multi-Source Domain Adaptation
  • A more practical scenario, where training data are collected from multiple sources

Page 3: Deep Learning for Computer Vision - VISION & LEARNING LAB

Final Challenge #2: WIDER Face and Person Challenge

• ICCV 2019 Workshop Challenge: WIDER Face Detection
  • http://wider-challenge.org/2019.html

• Since you’ve worked on the challenging HW #2…
  • To search for a person in a large database with just a single query image
  • In this challenge, you are given an image of a target cast member and some candidates (frames of a movie with person bounding boxes); you will need to search for all the instances belonging to that cast member.

(Figure: cast image and candidate images)

Page 4: Deep Learning for Computer Vision - VISION & LEARNING LAB

Final Challenge #2: WIDER Face and Person Challenge

• Remarks
  • The dataset considered in this challenge is from movies or TV series.
  • The main cast members (top 10 in the IMDb cast list) are collected as the queries.
  • A candidate should be annotated as positive or negative: positive means that the candidate belongs to the cast member of interest, while negative marks the incorrect/mismatched ones.
  • Further produce a confidence score for each candidate according to a query (cast member).

(Figure: query image and candidate images)

Page 5: Deep Learning for Computer Vision - VISION & LEARNING LAB

What’s to be Covered …

• Learning Beyond Images
  • 2D/3D Visual Data
  • Depth Images

• 2 guest talks next week
  • Dr. Wei-Sheng Lai (b97), UC Merced (Google/MSR/Adobe Research/Nvidia Research)
  • Dr. Shang-Hong Lai, Principal Researcher, MS AI R&D Center

Page 6: Deep Learning for Computer Vision - VISION & LEARNING LAB

From 2D to 3D Visual Data

• Robotics
• Augmented Reality
• Autonomous Driving

http://cseweb.ucsd.edu/~haosu/slides/3ddl.pdf
https://www.androidauthority.com/shop-amazon-augmented-reality-right-now-841238/
https://arxiv.org/pdf/1711.08488.pdf

Page 7: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Deep Learning Tasks

• 3D geometry analysis
• 3D synthesis
• 3D-assisted image analysis

Page 8: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Deep Learning Tasks

• What We Will Focus on Today…

Page 9: Deep Learning for Computer Vision - VISION & LEARNING LAB

Representations of 3D Data

• Multi-view RGB(D) Images
• Volumetric
• Polygonal Mesh
• Point Cloud
• Primitive-based CAD Models


Page 15: Deep Learning for Computer Vision - VISION & LEARNING LAB

In this lecture, we mainly focus on…

• Multi-view RGB(D) Images
• Volumetric
• Polygonal Mesh
• Point Cloud
• Primitive-based CAD Models

Page 16: Deep Learning for Computer Vision - VISION & LEARNING LAB

Can We Directly Apply CNN on 3D Data?


Page 17: Deep Learning for Computer Vision - VISION & LEARNING LAB

Can We Directly Apply CNN on 3D Data?

• What kind of 3D data?
  • Directly applicable (O): Multi-view RGB(D) Images, Volumetric
  • Not directly applicable (X): Polygonal Mesh, Point Cloud, Primitive-based CAD Models

Page 18: Deep Learning for Computer Vision - VISION & LEARNING LAB

Can We Directly Apply CNN on 3D Data?

• Convolution for 2D images:
• Convolution in 3D:
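As a minimal PyTorch sketch of the two operations (shapes are illustrative assumptions): 2D convolution slides a filter over an H×W image, while 3D convolution adds a depth axis for volumetric data.

```python
import torch
import torch.nn as nn

# 2D convolution: input is (batch, channels, H, W)
conv2d = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
img = torch.randn(1, 3, 64, 64)              # e.g., an RGB image
print(conv2d(img).shape)                     # torch.Size([1, 16, 64, 64])

# 3D convolution: input is (batch, channels, D, H, W)
conv3d = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
vox = torch.randn(1, 1, 32, 32, 32)          # e.g., a 32^3 occupancy grid
print(conv3d(vox).shape)                     # torch.Size([1, 16, 32, 32, 32])
```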

Page 19: Deep Learning for Computer Vision - VISION & LEARNING LAB

Multi-view Representation

● Able to leverage the huge CNN literature on image analysis
● What viewpoints to select?
● What if the input is noisy and incomplete?
● Does not process invisible points
● Aggregating view representations is challenging (not trivial)

Page 20: Deep Learning for Computer Vision - VISION & LEARNING LAB

Related Work on Multi-View Representation

• Classification
  • Multi-view CNN (MVCNN) for shape recognition [ICCV’15]

• Segmentation
  • 3D shape segmentation with projective conv nets [CVPR’17]

Page 21: Deep Learning for Computer Vision - VISION & LEARNING LAB

Classification: MVCNN

• State-of-the-art performance for 3D classification (>90%)
• View pooling (all branches in the first stage of the network share the same parameters in CNN1)

Page 22: Deep Learning for Computer Vision - VISION & LEARNING LAB

Classification: MVCNN

• Synthesizes the info from all views into a single & compact 3D shape descriptor

• Element-wise max operation across views in the view-pooling layer

• Closely related to max-pooling and max-out layers, the only difference being the dimension over which the operation is performed

• Features are obtained after view pooling.
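A minimal sketch of the view-pooling step, assuming V view feature maps produced by the shared CNN1 (names and shapes are illustrative):

```python
import torch
import torch.nn as nn

class ViewPooling(nn.Module):
    """Element-wise max across the view dimension (MVCNN-style)."""
    def forward(self, view_feats):
        # view_feats: (batch, num_views, C, H, W), all views from the same CNN1
        return view_feats.max(dim=1).values   # -> (batch, C, H, W)

feats = torch.randn(2, 12, 512, 7, 7)         # e.g., 12 rendered views per shape
pooled = ViewPooling()(feats)                  # single compact descriptor map
print(pooled.shape)                            # torch.Size([2, 512, 7, 7])
```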


Page 24: Deep Learning for Computer Vision - VISION & LEARNING LAB

Segmentation: ShapePFCN

• Combines image-based FCNs and surface-based CRFs (conditional random fields)

• Preprocesses 3D mesh data into shaded images and depth images

Page 25: Deep Learning for Computer Vision - VISION & LEARNING LAB

Segmentation: ShapePFCN

• FCN:
  • VGG-16 pretrained network
  • Two additional modifications:
    1. The input is a 2-channel image (2-channel 3×3 filters instead of 3-channel RGB ones).
    2. Modify the output of the original FCN: the original FCN outputs L confidence maps of size 64 × 64 pixels, followed by a conversion into L probability maps via softmax. Instead, ShapePFCN upsamples the confidence maps to 512 × 512 pixels through a transposed convolutional layer.
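A sketch of the two modifications in PyTorch, assuming a torchvision VGG-16 backbone (layer indices, the number of labels L, and the upsampling factor are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

backbone = vgg16(pretrained=True)

# Modification 1: accept a 2-channel (e.g., shaded + depth) input instead of RGB.
old = backbone.features[0]                       # first conv: 3 -> 64 channels
backbone.features[0] = nn.Conv2d(2, old.out_channels,
                                 kernel_size=3, padding=1)

# Modification 2: upsample L confidence maps from 64x64 to 512x512
# with a transposed convolution (stride 8 gives an 8x upsampling).
L = 10                                           # number of part labels (assumed)
upsample = nn.ConvTranspose2d(L, L, kernel_size=16, stride=8, padding=4)

conf64 = torch.randn(1, L, 64, 64)               # FCN confidence maps
conf512 = upsample(conf64)
print(conf512.shape)                             # torch.Size([1, 10, 512, 512])
```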

Page 26: Deep Learning for Computer Vision - VISION & LEARNING LAB

Segmentation: ShapePFCN

• Image2Surface Projection Layer
  • Aggregates the confidence maps across views and projects them back onto the 3D surface.
  • The L confidence maps extracted from the FCN are stacked into an M × 512 × 512 × L image; the projection layer takes this 4D image as input (M input views).
  • It also takes the surface reference images, stacked into a 3D M × 512 × 512 image.
  • The layer outputs an FS × L array, where FS is the number of polygons of the shape S.
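A hedged sketch of the per-polygon aggregation, assuming the reference images store a polygon ID per pixel and that aggregation takes the max over all pixels/views observing each polygon (the actual layer may differ in details):

```python
import torch

def image2surface(conf, ref_ids, num_faces):
    """conf: (M, H, W, L) confidence maps; ref_ids: (M, H, W) polygon id per
    pixel (-1 for background). Returns (num_faces, L) per-polygon confidences."""
    M, H, W, L = conf.shape
    out = torch.full((num_faces, L), -1e9)       # unseen polygons keep fill value
    flat_ids = ref_ids.reshape(-1)
    flat_conf = conf.reshape(-1, L)
    valid = flat_ids >= 0
    # max-aggregate confidences of all pixels mapping to the same polygon
    out.scatter_reduce_(0, flat_ids[valid, None].expand(-1, L),
                        flat_conf[valid], reduce="amax")
    return out

conf = torch.rand(4, 512, 512, 6)        # M=4 views, L=6 labels (illustrative)
ref = torch.randint(-1, 1000, (4, 512, 512))
print(image2surface(conf, ref, 1000).shape)   # torch.Size([1000, 6])
```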

Page 27: Deep Learning for Computer Vision - VISION & LEARNING LAB

Segmentation: ShapePFCN

• Surface CRF (Conditional Random Field)
  • Converts label confidences (i.e., soft labels) into hard labels
  • In addition, due to the upsampling in the FCN, the CRF is expected to handle label discontinuities across complex surfaces.

Page 28: Deep Learning for Computer Vision - VISION & LEARNING LAB

Segmentation: ShapePFCN

• Example results


Page 29: Deep Learning for Computer Vision - VISION & LEARNING LAB

Properties of Volumetric Data

• Easy to operate on
• Info loss in voxelization
• But low resolution…

Page 30: Deep Learning for Computer Vision - VISION & LEARNING LAB

Related Work on Volumetric/Voxel Data

• Classification
  • 3D ShapeNets: A Deep Representation for Volumetric Shapes [CVPR’15]

• 3D Reconstruction
  • 3D-R2N2: A Unified Approach for Single and Multi-View 3D Object Reconstruction [ECCV’16]
  • Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision [NIPS’16]
  • Weakly Supervised 3D Reconstruction with Adversarial Constraint [3DV’17]

Page 31: Deep Learning for Computer Vision - VISION & LEARNING LAB

Classification: 3DShapeNets

• Directly utilizes 3D shape info

• Accuracy: ~77%

• MVCNN outperforms volumetric methods in terms of classification.

Page 32: Deep Learning for Computer Vision - VISION & LEARNING LAB

Classification: 3DShapeNets

• Why does MVCNN work better?
  • It leverages the capabilities of 2D-based DL/CNN models
  • It benefits from a large amount of 2D image data (e.g., ImageNet) for pretraining the CNN architectures.


Page 34: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction: 3D-R2N2

• Supervised Learning

• Recurrent 3D CNN

• 3D LSTM units

• Single or multi-view 3D reconstruction

• Voxel-wise cross-entropy loss

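Written out, the voxel-wise cross-entropy loss is a binary cross-entropy summed over all voxels, with $p_{ijk}$ the predicted occupancy probability and $y_{ijk}$ the ground-truth occupancy:

$$\mathcal{L} = \sum_{i,j,k} \Big[ -\, y_{ijk} \log p_{ijk} \;-\; (1 - y_{ijk}) \log \left( 1 - p_{ijk} \right) \Big]$$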

Page 35: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction: 3D-R2N2

• Supervised learning

• Ground truth 3D volume/voxels available

• Voxel-wise cross-entropy loss


Page 36: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction: 3D-R2N2

• 3D LSTM units

• Input can be either a single image or a series of images.

• Resolves multiple viewpoints seamlessly

Page 37: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction: 3D-R2N2

• Example reconstruction results (left: single image input; right: multiple image inputs)


Page 39: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction: PTN

• Projecting 3D volume into 2D masks


Page 40: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction: PTN

• Loss function:
  • Reconstruction loss
  • Projection loss
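A hedged sketch of the two terms, writing $V$ for the predicted volume, $V^{*}$ for the ground-truth volume, and $S_k$ / $S_k^{*}$ for the projected and ground-truth silhouettes under viewpoint $k$ (notation and weighting are assumptions):

$$\mathcal{L} = \underbrace{\lVert V - V^{*} \rVert_2^2}_{\text{reconstruction}} \;+\; \lambda \underbrace{\sum_{k} \lVert S_k(V) - S_k^{*} \rVert_2^2}_{\text{projection}}$$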


Page 42: Deep Learning for Computer Vision - VISION & LEARNING LAB

Weakly Supervised 3D Reconstruction

• Image and approximated viewpoints as inputs

• 2D masks as supervision

• Raytrace pooling layer enables perspective projection and backpropagation

• Constrain 3D reconstruction to the manifold of unlabeled realistic 3D shapes



Page 44: Deep Learning for Computer Vision - VISION & LEARNING LAB

Point Cloud

• Unordered point set

• The same object can be represented in different orders (see below).

• Needs to be invariant to transformations (e.g., rotation, translation, etc.)

Page 45: Deep Learning for Computer Vision - VISION & LEARNING LAB

Related Work on Point Cloud

• Classification/Segmentation
  • PointNet: Deep Learning on Point Sets for 3D Classification/Segmentation [CVPR’17]

• 3D Reconstruction
  • A Point Set Generation Net for 3D Object Reconstruction from a Single Image [CVPR’17]

• Unsupervised Learning (i.e., Autoencoder for Data Recovery)
  • FoldingNet: Interpretable Unsupervised Learning on 3D Point Clouds [CVPR’18]

Page 46: Deep Learning for Computer Vision - VISION & LEARNING LAB

Classification/Segmentation

• Classification/segmentation via Point Clouds


Page 47: Deep Learning for Computer Vision - VISION & LEARNING LAB

Classification/Segmentation via PointNet

• Unordered input → per-point perceptron + max pooling

• Interaction among points: concatenate local and global features

• Invariance under transformations
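A minimal PointNet-style sketch in PyTorch (layer sizes are illustrative; the input/feature transform nets are omitted): per-point MLPs followed by a symmetric max-pool give order invariance, and concatenating the global feature back onto per-point features enables segmentation.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # per-point MLP implemented as 1x1 convolutions over (B, C, N)
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU())
        self.seg_head = nn.Conv1d(128 + 128, num_classes, 1)

    def forward(self, pts):                 # pts: (B, 3, N), any point order
        local = self.local(pts)             # (B, 128, N) per-point features
        glob = local.max(dim=2).values      # (B, 128) symmetric -> order-invariant
        glob_exp = glob[:, :, None].expand(-1, -1, pts.shape[2])
        fused = torch.cat([local, glob_exp], dim=1)   # local + global features
        return self.seg_head(fused)         # (B, num_classes, N) per-point logits

net = TinyPointNet()
x = torch.randn(2, 3, 1024)                # 1024 unordered 3D points
print(net(x).shape)                        # torch.Size([2, 4, 1024])
```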


Page 49: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction via Point Cloud

• A Point Set Generation Net for 3D Object Reconstruction from a Single Image


Page 50: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction via Point Cloud

• Settings:
  • Supervised learning with ground-truth point clouds
  • 3D shapes as an unordered point set
  • Two-branch prediction: fully connected layers for intrinsic structure + deconvolution for smooth surfaces
  • Loss function: Chamfer distance
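A sketch of the (symmetric) Chamfer distance between a predicted set P and a ground-truth set Q, using torch.cdist for pairwise distances (the squared-distance convention is one common choice):

```python
import torch

def chamfer_distance(P, Q):
    """P: (B, N, 3) predicted points, Q: (B, M, 3) ground-truth points."""
    d = torch.cdist(P, Q)                  # (B, N, M) pairwise Euclidean distances
    d = d ** 2                             # squared distances (assumed convention)
    # nearest GT point for each predicted point, and vice versa
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

P = torch.rand(2, 1024, 3)
Q = torch.rand(2, 2048, 3)
print(chamfer_distance(P, Q).shape)        # torch.Size([2])
```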

Page 51: Deep Learning for Computer Vision - VISION & LEARNING LAB

3D Reconstruction via Point Cloud

• Example results



Page 53: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Learning for Point Clouds

• Autoencoder for 3D Point Clouds


Page 54: Deep Learning for Computer Vision - VISION & LEARNING LAB

FoldingNet for Recovering Point Clouds

• Graph representation for point clouds

• Analogous to convolution on images: each pixel’s spatial ordering and neighborhood remain unchanged even as the feature channels of the input expand in deeper conv layers.


Page 56: Deep Learning for Computer Vision - VISION & LEARNING LAB

Encoder in FoldingNet

• n: number of points in the input point cloud

• Use a kNN graph; compute the local covariance of k = 16 points along x, y, z

• Perceptron: per-point function

• Graph layer: perceptron + graph max pooling
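A sketch of graph max pooling over a kNN graph (neighbor indices precomputed; shapes illustrative): each point takes the element-wise max of its neighbors’ features.

```python
import torch

def graph_max_pool(feats, knn_idx):
    """feats: (N, C) per-point features; knn_idx: (N, k) neighbor indices.
    Returns (N, C): element-wise max over each point's k neighbors."""
    neigh = feats[knn_idx]                 # (N, k, C) gather neighbor features
    return neigh.max(dim=1).values

N, C, k = 2048, 64, 16
feats = torch.randn(N, C)
knn_idx = torch.randint(0, N, (N, k))      # stand-in for real kNN indices
print(graph_max_pool(feats, knn_idx).shape)   # torch.Size([2048, 64])
```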

Page 57: Deep Learning for Computer Vision - VISION & LEARNING LAB

Decoder in FoldingNet


Page 58: Deep Learning for Computer Vision - VISION & LEARNING LAB

Remarks for FoldingNet

• Transfer classification accuracy

• Efficiency of representation learning and feature extraction

• Use point clouds from ShapeNet to train the autoencoder

• Classification: train an SVM on another dataset using codewords obtained from the encoder
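A sketch of this transfer-classification protocol with scikit-learn, assuming an encoder that maps each point cloud to a codeword vector (the codeword size, labels, and data here are placeholders):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder: codewords produced by a FoldingNet-style encoder trained on
# ShapeNet; random features stand in for encoder(point_cloud) outputs.
train_codes = np.random.randn(1000, 512)        # 512-d codewords (assumed size)
train_labels = np.random.randint(0, 40, 1000)   # e.g., 40 shape classes
test_codes = np.random.randn(200, 512)

svm = LinearSVC(C=1.0)                     # linear SVM on frozen codewords
svm.fit(train_codes, train_labels)
pred = svm.predict(test_codes)
print(pred[:10])
```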

Page 59: Deep Learning for Computer Vision - VISION & LEARNING LAB

What’s to be Covered …

• Learning Beyond Images
  • 2D/3D Visual Data
  • Depth Images

Page 60: Deep Learning for Computer Vision - VISION & LEARNING LAB

Depth Estimation from a Single Image

• Depth estimation from a single image in a semi-supervised setting
  • Use supervised and unsupervised cues simultaneously
  • Supervised cue: sparse depth image
  • Unsupervised cue: stereo-pair consistency

Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017

Page 61: Deep Learning for Computer Vision - VISION & LEARNING LAB

• Overall loss function:

Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017
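As a hedged sketch, the overall objective combines the supervised term, the unsupervised stereo-alignment term, and a smoothness regularizer described on the following slides (the weights λ are assumed):

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{sup}} + \lambda_{u}\, \mathcal{L}_{\text{unsup}} + \lambda_{r}\, \mathcal{L}_{\text{reg}}$$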

Page 62: Deep Learning for Computer Vision - VISION & LEARNING LAB

• The supervised loss measures the deviation of the predicted depth map from the ground-truth depth values.

Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017
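A hedged sketch: the prediction $d(p)$ is penalized only at pixels $p$ where the (sparse) ground truth $d^{*}(p)$ exists, with the paper's robust penalty written generically as $\rho$:

$$\mathcal{L}_{\text{sup}} = \sum_{p \,\in\, \Omega_{\text{GT}}} \rho\!\left( d(p) - d^{*}(p) \right)$$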

Page 63: Deep Learning for Computer Vision - VISION & LEARNING LAB

• The unsupervised loss quantifies the direct image alignment error in both directions:

Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017
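A hedged sketch: each image is warped into the other view using the predicted depth and the known stereo geometry (warps $\omega$, $\omega'$ assumed), and the photometric difference is penalized in both directions:

$$\mathcal{L}_{\text{unsup}} = \sum_{p} \big| I_{L}(p) - I_{R}\!\left( \omega(p, d(p)) \right) \big| \;+\; \sum_{p} \big| I_{R}(p) - I_{L}\!\left( \omega'(p, d(p)) \right) \big|$$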

Page 64: Deep Learning for Computer Vision - VISION & LEARNING LAB

• Network architecture: ResNet encoder + decoder

Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017

Page 65: Deep Learning for Computer Vision - VISION & LEARNING LAB

• Example results

Semi-Supervised Deep Learning for Monocular Depth Map Prediction, CVPR 2017

Page 66: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Depth Estimation: Depth Image Estimation from a Single Image

• Estimate the depth image from a single RGB input image without supervision
  • Render the disparity map from a single image
  • Depth info can be estimated from the disparity map
  • The disparity map can warp the left image to the right image (and vice versa)
  • Training data: stereo image pairs only

Unsupervised Monocular Depth Estimation with Left-Right Consistency, CVPR 2017
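A sketch of disparity-based warping with torch.nn.functional.grid_sample, reconstructing the left view by sampling the right view at positions shifted by the left disparity (the sign convention is an assumption for rectified stereo):

```python
import torch
import torch.nn.functional as F

def warp_with_disparity(right, disp_left):
    """right: (B, 3, H, W) right image; disp_left: (B, 1, H, W) disparity in
    pixels for the left view. Returns the reconstructed left image."""
    B, _, H, W = right.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    xs = xs[None].float().expand(B, -1, -1) - disp_left[:, 0]  # shift by disparity
    ys = ys[None].float().expand(B, -1, -1)
    # normalize the sampling grid to [-1, 1] as grid_sample expects
    grid = torch.stack([2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1], dim=-1)
    return F.grid_sample(right, grid, align_corners=True)

right = torch.rand(1, 3, 128, 256)
disp = torch.rand(1, 1, 128, 256) * 10
print(warp_with_disparity(right, disp).shape)   # torch.Size([1, 3, 128, 256])
```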

Page 67: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Depth Estimation: Depth Image Estimation from a Single Image (cont’d)

• Simultaneously render the disparity maps from both views

• Enforce the consistency between both recovered disparity maps, which leads to accurate results w/o supervision during training.

Unsupervised Monocular Depth Estimation with Left-Right Consistency, CVPR 2017

Page 68: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Depth Estimation: Depth Image Estimation from a Single Image (cont’d)

● Reconstruction loss
● Smoothness loss
● Left-right consistency loss

Unsupervised Monocular Depth Estimation with Left-Right Consistency, CVPR 2017
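For the left view, the three terms take roughly the following form (following the cited paper; $\tilde{I}^{l}$ is the left image reconstructed from the right view, $d^{l}$ and $d^{r}$ the predicted disparities, $\alpha$ a weighting constant):

$$\mathcal{C}_{\text{ap}}^{l} = \frac{1}{N} \sum_{i,j} \alpha \, \frac{1 - \mathrm{SSIM}\!\left( I^{l}_{ij}, \tilde{I}^{l}_{ij} \right)}{2} + (1 - \alpha) \, \big\lVert I^{l}_{ij} - \tilde{I}^{l}_{ij} \big\rVert_{1}$$

$$\mathcal{C}_{\text{ds}}^{l} = \frac{1}{N} \sum_{i,j} \big| \partial_x d^{l}_{ij} \big| \, e^{-\lVert \partial_x I^{l}_{ij} \rVert} + \big| \partial_y d^{l}_{ij} \big| \, e^{-\lVert \partial_y I^{l}_{ij} \rVert}$$

$$\mathcal{C}_{\text{lr}}^{l} = \frac{1}{N} \sum_{i,j} \big| d^{l}_{ij} - d^{r}_{ij + d^{l}_{ij}} \big|$$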

Page 69: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Depth Estimation: Depth Image Estimation from a Single Image (cont’d)

• Estimate the depth image by inferring the disparity maps (left & right) from the single input image.

Unsupervised Monocular Depth Estimation with Left-Right Consistency, CVPR 2017

Page 70: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Depth Estimation: Depth Image Estimation from a Single Image (cont’d)

• Example results

Unsupervised Monocular Depth Estimation with Left-Right Consistency, CVPR 2017

Page 71: Deep Learning for Computer Vision - VISION & LEARNING LAB

Unsupervised Depth Estimation: Depth Image Estimation from a Video Sequence

• Goal: depth estimation from a single image/frame + camera pose estimation from multiple consecutive frames

• Challenges
  • No supervision (i.e., no ground-truth info available)
  • Only the input video sequence is available

• Method
  • Use single-view depth and pose networks for video frame recovery

• Experiments
  • KITTI, Cityscapes, and Make3D datasets

Unsupervised Learning of Depth and Ego-motion from Video, CVPR 2017 (oral)

Page 72: Deep Learning for Computer Vision - VISION & LEARNING LAB

Introduction

• Goal (inference)
  • Estimate the depth image and camera pose without supervision from ground-truth depth, stereo pairs, or camera-pose information

• Method
  • The key supervision signal for the proposed method comes from view synthesis: synthesizing a new image of the scene as seen from a different camera pose

Page 73: Deep Learning for Computer Vision - VISION & LEARNING LAB

Method Overview

(Figure: from a training sequence, the depth network (Dep. NN) predicts depth information for the target frame, and the pose network (Pose NN) predicts pose information relating the target frame to a nearby frame; together they produce the predicted target image from the nearby frame.)

Page 74: Deep Learning for Computer Vision - VISION & LEARNING LAB

Method

• Concept: render the target view. At time t, the depth network (Depth NN) predicts the depth $\hat{D}_t$ of the RGB image $I_t$, and the camera-pose network (Cam-Pose NN) predicts the transformation matrix $\hat{T}_{t \to t+1}$; with the camera parameters $K$, the next frame is rendered as

$$\hat{I}_{t+1} = K \,\hat{T}_{t \to t+1}\, \hat{D}_t \, K^{-1} I_t$$

• Training minimizes the photometric difference between the rendered and observed frames:

$$\mathcal{L} = \sum_{t} \big| I_{t+1} - \hat{I}_{t+1} \big|$$
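A hedged PyTorch sketch of this view synthesis (conventions simplified; the actual method samples the source frame at locations projected from the target frame):

```python
import torch
import torch.nn.functional as F

def view_synthesis(I_t, depth, T, K):
    """Back-project pixels of I_t with predicted depth, move them by the
    predicted pose T, re-project with intrinsics K, and sample I_t there.
    I_t: (1, 3, H, W), depth: (1, 1, H, W), T: (4, 4), K: (3, 3)."""
    _, _, H, W = I_t.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    cam = K.inverse() @ pix * depth.reshape(1, -1)       # back-project: D * K^-1 p
    cam_h = torch.cat([cam, torch.ones(1, H * W)], dim=0)
    proj = K @ (T @ cam_h)[:3]                           # transform + project
    uv = proj[:2] / proj[2].clamp(min=1e-6)              # perspective divide
    grid = torch.stack([2 * uv[0] / (W - 1) - 1,
                        2 * uv[1] / (H - 1) - 1], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(I_t, grid, align_corners=True)

I_t = torch.rand(1, 3, 64, 64)
depth = torch.rand(1, 1, 64, 64) + 1.0
T = torch.eye(4)
K = torch.tensor([[50., 0., 32.], [0., 50., 32.], [0., 0., 1.]])
print(view_synthesis(I_t, depth, T, K).shape)            # torch.Size([1, 3, 64, 64])
```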

Page 75: Deep Learning for Computer Vision - VISION & LEARNING LAB

Experiment: Depth Estimation

• Datasets
  • KITTI, Cityscapes, and Make3D

Page 76: Deep Learning for Computer Vision - VISION & LEARNING LAB

Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency

• 3D reconstruction from a single image (with GT 3D but no pose info)

• A single-image & pose-aware 3D reconstruction DL framework

• Extract camera pose info from 2D-3D self-consistency without supervision

Liao, Yang, Lin, Chen, Kuo, Chiu, & Wang, Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency, IEEE ICASSP 2019.

Page 77: Deep Learning for Computer Vision - VISION & LEARNING LAB

Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency

• Experiments

(Figure: (a) input image, (b) GT mask, (c) predicted mask, (d) projection of predicted shapes, (e) GT voxels, (f) predicted voxels, (g) GT pose-aware mesh, (h) predicted pose-aware mesh)

Liao, Yang, Lin, Chen, Kuo, Chiu, & Wang, Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency, IEEE ICASSP 2019.

Page 78: Deep Learning for Computer Vision - VISION & LEARNING LAB

Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency

• Experiments

(Tables: comparison with other fully supervised methods in terms of IoU; quantitative results of pose-aware 3D shape reconstruction in terms of IoU, where "pred" means that shapes or poses are predicted; quantitative results of 3D-2D projection in terms of IoU, evaluating IoU between GT masks and different projections; quantitative results of pose estimation and mask segmentation.)

Liao, Yang, Lin, Chen, Kuo, Chiu, & Wang, Learning Pose-Aware 3D Reconstruction via 2D-3D Self-Consistency, IEEE ICASSP 2019.

Page 79: Deep Learning for Computer Vision - VISION & LEARNING LAB

Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation

Po-Yi Chen*1, Alexander H. Liu*1, Yen-Cheng Liu2, Yu-Chiang Frank Wang1
1 National Taiwan University, 2 Georgia Institute of Technology

• Unsupervised monocular depth estimation

• Joint exploitation of scene semantics via semantic segmentation

• No GT for the depth image

• Improved scene representation for joint depth estimation & semantic segmentation

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 80: Deep Learning for Computer Vision - VISION & LEARNING LAB

Introduction – Unsupervised Monocular Depth Estimation

(Figure: in monocular depth estimation, a model maps a 2D image to a disparity map (depth); in unsupervised training with a stereo pair, the predicted disparity warps the right view to reconstruct the left view.)

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 81: Deep Learning for Computer Vision - VISION & LEARNING LAB

Introduction – Our Goal

Learning a semantic-aware scene representation for depth estimation.

(Figure: a shared scene representation extracted from the 2D scene supports both semantic segmentation (semantic understanding) and depth estimation (geometric understanding), with content consistency between the two tasks.)

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 82: Deep Learning for Computer Vision - VISION & LEARNING LAB

Methodology – Overview

- Scene representation from a single network
- Multi-task learning on depth estimation and semantic segmentation
- Refine depth estimation with semantic information

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 83: Deep Learning for Computer Vision - VISION & LEARNING LAB

Methodology – Shared Scene Representation

- Unified network architecture
- Controllable cross-modal prediction
- Multi-task learning from disjoint datasets

(Figure: a shared encoder-decoder produces the scene representation; conditioned on a task identity (via pixel-wise average pooling), a softmax head outputs semantic labels and a sigmoid head outputs the disparity map.)

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 84: Deep Learning for Computer Vision - VISION & LEARNING LAB

Methodology – Self-supervised Learning

1. Left-Right Semantic Consistency
- Unsupervised learning relies on left-right consistency over color values (RGB images)
- Such consistency may be affected by photometric changes (e.g., reflections on glass)
- We propose left-right consistency at the semantic level

(Figure: top, RGB consistency: the model predicts the left disparity from the left/right views and reconstructs the left view for a consistency check; bottom, semantic consistency: the model reconstructs the left semantic map from the right semantic map via the left disparity and checks consistency against the left semantic prediction.)

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 85: Deep Learning for Computer Vision - VISION & LEARNING LAB

Methodology – Self-supervised Learning

2. Semantic-Guided Disparity Smoothness
- Disparity should change smoothly on a single object
- Pseudo object boundaries can be obtained from the semantic prediction
- We propose to regularize the disparity smoothness within these boundaries (a sketch follows below)

(Figure: semantic prediction → object boundary → regularize smoothness)

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019
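A hedged sketch of such a regularizer: penalize disparity gradients only where the semantic prediction does not change (i.e., inside pseudo object boundaries); the exact masking rule is an illustrative assumption.

```python
import torch

def semantic_guided_smoothness(disp, sem_labels):
    """disp: (B, 1, H, W) disparity; sem_labels: (B, H, W) predicted classes.
    Penalizes disparity gradients only across same-class (non-boundary) pixels."""
    dx = (disp[:, :, :, 1:] - disp[:, :, :, :-1]).abs()
    dy = (disp[:, :, 1:, :] - disp[:, :, :-1, :]).abs()
    same_x = (sem_labels[:, :, 1:] == sem_labels[:, :, :-1]).float()[:, None]
    same_y = (sem_labels[:, 1:, :] == sem_labels[:, :-1, :]).float()[:, None]
    return (dx * same_x).mean() + (dy * same_y).mean()

disp = torch.rand(2, 1, 64, 128)
sem = torch.randint(0, 19, (2, 64, 128))   # e.g., 19 Cityscapes-style classes
print(semantic_guided_smoothness(disp, sem))
```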

Page 86: Deep Learning for Computer Vision - VISION & LEARNING LAB

Experiments – Setup

- Datasets
  - Stereo pairs from the KITTI dataset (for unsupervised depth estimation)
  - Single views from the Cityscapes dataset (for supervised semantic segmentation)
- Model
  - Encoder: 14-layer dilated residual network
  - Decoder: 8-layer transposed-convolution network
  - Instead of using a separate decoder for each view, we introduce a horizontal-flipping technique

Page 87: Deep Learning for Computer Vision - VISION & LEARNING LAB

Experiments – Results on Depth Estimation

- State-of-the-art results on unsupervised depth estimation
- Leveraging semantic segmentation, the performance can be further improved

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 88: Deep Learning for Computer Vision - VISION & LEARNING LAB

Experiments – Study on Multi-task Learning

- Semantic segmentation & depth estimation benefit each other
- Shared decoder & task identity improve the robustness of multi-task learning
- A better scene representation is obtained by our framework

P. Chen, A. Liu, Y. Liu, & Y.-C. F. Wang, Towards Scene Understanding: Unsupervised Monocular Depth Estimation with Semantic-aware Representation, CVPR 2019

Page 89: Deep Learning for Computer Vision - VISION & LEARNING LAB

References

● 3D Deep Learning Tutorials
  ○ http://3ddl.stanford.edu/CVPR17_Tutorial_Overview.pdf
  ○ http://3ddl.stanford.edu/CVPR17_Tutorial_MVCNN_3DCNN_v3.pdf
  ○ http://cseweb.ucsd.edu/~haosu/slides/3ddl.pdf
  ○ https://cse291-i.github.io/

● List of 3D deep learning related projects
  ○ https://github.com/timzhang642/3D-Machine-Learning

Page 90: Deep Learning for Computer Vision - VISION & LEARNING LAB

What’s to Be Covered Next Week

• Guest Lecture
  • Title: NTUEE Alumni Q&A Series: Sharing Study, Research, and Internship Experiences in the CV Field
  • Speaker: Dr. Wei-Sheng Lai 賴威昇 (B97), University of California, Merced
  • Time/Location: 10am @ BL113 (i.e., the 2nd class)

Page 91: Deep Learning for Computer Vision - VISION & LEARNING LAB

What’s to Be Covered Next Week

• VIP Talk
  • Title: Face Recognition & Anti-Spoofing for Identity Authentication
  • Speaker: Dr. Shang-Hong Lai, Principal Researcher, Microsoft AI R&D Center
  • Time/Location: 11am @ BL113 (i.e., the 3rd class)