Detailed Information

Towards efficient and scalable densely connected convolutional neural networks for classification and segmentation

Material type
Thesis
Personal author
Lodhi, Bilal Ahmed
Title / statement of responsibility
Towards efficient and scalable densely connected convolutional neural networks for classification and segmentation / Bilal Ahmed Lodhi
Publication
Seoul : Graduate School, Korea University, 2019
Physical description
vii, 40 leaves : charts ; 26 cm
Additional physical form entry
Towards Efficient and Scalable Densely Connected Convolutional Neural Networks For Classification and Segmentation (DCOLL211009)000000084349
Dissertation note
Thesis (Ph.D.)-- Graduate School, Korea University: Department of Computer and Radio Communications Engineering, August 2019
Department code
0510 6YD36 368
General note
Advisor: 강재우
Bibliography note
Bibliography: leaves 37-40
Other available formats
Also available as a PDF file; requires a PDF file reader (application/pdf)
Uncontrolled subject terms
Deep Learning, Computer Vision, Multipath-DenseNet, DenseNet, image classification

MARC
000 00000nam c2200205 c 4500
001 000045999322
005 20191017132659
007 ta
008 190624s2019 ulkd bmAC 000c eng
040 ▼a 211009 ▼c 211009 ▼d 211009
085 0 ▼a 0510 ▼2 KDCP
090 ▼a 0510 ▼b 6YD36 ▼c 368
100 1 ▼a Lodhi, Bilal Ahmed
245 1 0 ▼a Towards efficient and scalable densely connected convolutional neural networks for classification and segmentation / ▼d Bilal Ahmed Lodhi
260 ▼a Seoul : ▼b Graduate School, Korea University, ▼c 2019
300 ▼a vii, 40장 : ▼b 도표 ; ▼c 26 cm
500 ▼a 지도교수: 강재우
502 1 ▼a 학위논문(박사)-- ▼b 고려대학교 대학원: ▼c 컴퓨터·전파통신공학과, ▼d 2019. 8
504 ▼a 참고문헌: 장 37-40
530 ▼a PDF 파일로도 이용가능; ▼c Requires PDF file reader(application/pdf)
653 ▼a Deep Learning, Computer Vision, Multipath-DenseNet, DenseNet, Densenet, image classification
776 0 ▼t Towards Efficient and Scalable Densely Connected Convolutional Neural Networks For Classification and Segmentation ▼w (DCOLL211009)000000084349
900 1 0 ▼a 강재우 ▼g 姜在雨, ▼e 지도교수
945 ▼a KLPA

Electronic Resources

No. | Title | Service
1 | Towards efficient and scalable densely connected convolutional neural networks for classification and segmentation (29 views) | PDF / Abstract / Table of Contents
No. | Location | Call number | Registration no. | Status | Due date
1 | Science Library / Theses Stacks | 0510 6YD36 368 | 123062331 | Available for loan | -
2 | Science Library / Theses Stacks | 0510 6YD36 368 | 123062332 | Available for loan | -
3 | Sejong Academic Information Center / 5F Theses Room | 0510 6YD36 368 | 153083334 | Available for loan | -

Contents Information

Abstract

As an intelligent technology, artificial neural networks are strong architectures for self-learning, and their adaptive ability makes them good at association, generalization, analogy, and extension. Neural networks have regained popularity in recent years because of increased computational power. Many applications, such as face recognition, medical image analysis, and self-driving cars, require neural networks to make efficient decisions. A Convolutional Neural Network (CNN) is a special kind of neural network that shares learnable weights and biases. CNNs have proven very effective in image recognition and classification and are most commonly applied to computer vision tasks that require learning complex features; before CNNs, these features were hand-engineered. LeNet-5, AlexNet, and the VGG networks are the classical convolutional neural networks for classification. The architecture of these networks allowed CNNs to have multiple layers, but training very deep neural networks suffers from problems such as vanishing and exploding gradients. ResNet is one architecture that surpasses its competitors in multiple challenges. The ResNet architecture allows the network to carry the gradient to the deeper layers through identity/residual connections. ResNet does not solve the vanishing gradient problem; rather, it avoids it.
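
To make the identity/residual connection concrete, here is a minimal PyTorch sketch of a residual block (an illustration of the general technique, not code from the thesis; the class name and layer sizes are chosen for the example):

import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The identity shortcut gives the gradient a direct path back to
        # earlier layers, which is how ResNet sidesteps (rather than
        # solves) the vanishing gradient problem.
        return self.relu(out + x)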
In recent years, a newly proposed network named DenseNet has gained the attention of researchers because of its promising results on large-scale datasets. DenseNet avoids the vanishing gradient problem by connecting each layer with every other layer. The deeper layers of the network have direct connections to the initial layers, so DenseNet can carry a strong gradient magnitude to the deeper layers of the network. Due to this dense connectivity, however, the state-of-the-art DenseNet becomes computationally inefficient and too large to fit in memory: its memory and computational requirements grow quadratically. Moreover, this dense connectivity hinders the learning of DenseNet on complex classification tasks. In this thesis, we study the architecture and behavior of DenseNet to alleviate these issues. We show that DenseNet makes multiple shorter connections in the deeper layers of the network, which can be processed in parallel to reduce the complexity and resource requirements. This modular approach allows our proposed network to achieve significant improvements over DenseNet with fewer parameters. Furthermore, we extend our classification architecture, in an encoder-decoder fashion, to deal with the segmentation problem.
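
The dense connectivity and its quadratic cost can be sketched as follows (a minimal PyTorch illustration following the common DenseNet convention, not the implementation from the thesis; growth_rate and num_layers are arbitrary example parameters):

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: layer i receives the concatenation of the
    block input and the outputs of all i preceding layers."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees every earlier feature map, so layer i reads
            # in_channels + i * growth_rate channels; summed over a block,
            # memory and computation grow quadratically with depth.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

Splitting one long block like this into several shorter blocks that run side by side, roughly the multipath idea described above, bounds the channel count each layer must read, which is where the parameter and computation savings come from.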

Table of Contents

Abstract
Acknowledgement
Contents i
List of Figures iii
List of Tables vi
1 Introduction 1
2 Multipath-DenseNet: A Supervised Ensemble Architecture of Densely Connected Convolutional Networks 4
 2.1 Background and Problem Definition 4
 2.2 Related work  6
 2.3 Methodology 8
  2.3.1 Network Ensemble  8
  2.3.2 DenseNet-denseblocks 9
  2.3.3 Multipath DenseNet 10
  2.3.4 Implementation Details  13
 2.4 Experiments 13
  2.4.1 Dataset 13
  2.4.2 Augmentation  14
  2.4.3 Training Setup  14
 2.5 Results and Discussion  14
  2.5.1 Classification Results on CIFAR  17
  2.5.2 Classification Results on SVHN 17
  2.5.3 Classification on ImageNet  18
  2.5.4 Discussion  19
3 SenseNet: Densely Connected Fully Convolutional Network with Bottleneck Skip Connection For Image Segmentation 22
 3.1 Background and Problem Definition  22
 3.2 Related literature  24
 3.3 Methodology 26
  3.3.1 DenseNet-BC  26
  3.3.2 Dense block ensemble and skip connection 26
  3.3.3 DenseNet to SenseNet  31
 3.4 Experiments 31
  3.4.1 CamVid  32
  3.4.2 Training Setup  32
 3.5 Results 34
  3.5.1 Discussion  35
4 Conclusion 36
Bibliography 37