000 | 00000nam c2200205 c 4500 | |
001 | 000045999329 | |
005 | 20230530104119 | |
007 | ta | |
008 | 190624s2019 ulka bmAC 000c eng | |
040 | ▼a 211009 ▼c 211009 ▼d 211009 | |
085 | 0 | ▼a 0510 ▼2 KDCP |
090 | ▼a 0510 ▼b 6D36 ▼c 1101 | |
100 | 1 | ▼a 박용규 ▼g 朴龍圭 |
245 | 1 0 | ▼a Typeface completion with generative adversarial networks / ▼d Yonggyu Park |
260 | ▼a Seoul : ▼b Graduate School, Korea University, ▼c 2019 | |
300 | ▼a vi, 35 leaves : ▼b illustrations (some color) ; ▼c 26 cm | |
500 | ▼a Advisor: 강재우 | |
502 | 0 | ▼a Thesis (Master's)-- ▼b Graduate School, Korea University, ▼c Department of Computer and Radio Communications Engineering, ▼d 2019. 8 |
504 | ▼a Bibliography: leaves 32-35 | |
530 | ▼a Also available as a PDF file; ▼c Requires PDF file reader (application/pdf) | |
653 | ▼a Machine Learning ▼a Generative Adversarial Network | |
776 | 0 | ▼t Typeface Completion with Generative Adversarial Networks ▼w (DCOLL211009)000000084648 |
900 | 1 0 | ▼a Park, Yong-gyu, ▼e author |
900 | 1 0 | ▼a 강재우, ▼g 姜在雨, ▼d 1969-, ▼e advisor ▼0 AUTH(211009)151698 |
945 | ▼a KLPA |
Electronic Information

No. | Title | Service
---|---|---
1 | Typeface completion with generative adversarial networks (viewed 14 times) | View PDF / Abstract / Table of Contents
Holdings Information

No. | Location | Call Number | Accession No. | Availability | Due Date
---|---|---|---|---|---
1 | Science & Engineering Library / Stacks (Thesis) | 0510 6D36 1101 | 123062333 | Available |
2 | Science & Engineering Library / Stacks (Thesis) | 0510 6D36 1101 | 123062334 | Available |
Contents information
Abstract
The mood of a text and the intention of the writer can be reflected in its typeface. However, when designing a typeface it is difficult to keep the style of many characters consistent, especially for languages with large character sets and rich morphological variation, such as Chinese. In this paper, we propose a Typeface Completion Network (TCN) which takes one character as input and automatically completes the entire set of characters in the same style as the input character. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss over the latent space and several other losses, TCN overcomes the inherent difficulty of typeface design. Compared to previous image-to-image translation models, TCN also generates high-quality character images of the same typeface with far fewer model parameters. We validate the proposed model on Chinese and English character datasets, which are paired, and on the CelebA dataset, which is unpaired; on these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.
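The abstract's central idea (separate typeface and content embeddings that a generator recombines) can be illustrated with a minimal sketch. This is not the authors' implementation: the encoders and generator here are stand-in random linear maps, and all names and dimensions are illustrative assumptions, chosen only to show the two-vector factorization and the latent-space reconstruction loss.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, STYLE_DIM, CONTENT_DIM = 64, 8, 8  # illustrative sizes, not the paper's

# Stand-in linear "encoders" and "generator" (the paper uses neural networks).
W_style = rng.normal(size=(STYLE_DIM, IMG_DIM)) * 0.1
W_content = rng.normal(size=(CONTENT_DIM, IMG_DIM)) * 0.1
W_gen = rng.normal(size=(IMG_DIM, STYLE_DIM + CONTENT_DIM)) * 0.1

def encode(img):
    """Embed one flattened character image into two separate vectors:
    a typeface (style) code and a content (which-character) code."""
    return W_style @ img, W_content @ img

def generate(style, content):
    """Recombine a typeface code with a content code into an image."""
    return W_gen @ np.concatenate([style, content])

def l1_reconstruction_loss(img):
    """Reconstruction loss: encode, regenerate, compare by L1 distance."""
    style, content = encode(img)
    return np.abs(generate(style, content) - img).mean()

img = rng.normal(size=IMG_DIM)
style, content = encode(img)
# Typeface transfer: pair this character's content code with the style
# code taken from a different image.
other_style, _ = encode(rng.normal(size=IMG_DIM))
transferred = generate(other_style, content)
print(style.shape, content.shape, transferred.shape)
```

Because style and content live in separate vectors, completing a typeface reduces to holding one style code fixed while sweeping over the content codes of every character in the set.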
Table of Contents
Abstract
Contents
List of Figures
List of Tables
1. Introduction
2. Related Works
  2.1. Image-to-Image Translation
  2.2. Character Image Generation
3. Task Definition
4. Proposed Model
  4.1. Encoders
    4.1.1. Typeface and Content Feature
    4.1.2. Encoder Pretraining
  4.2. Generator
    4.2.1. Feature Combination
    4.2.2. Image Generation
  4.3. Discriminator
  4.4. Training Process
    4.4.1. Identity Loss
    4.4.2. SSIM Loss
    4.4.3. Adversarial Losses
    4.4.4. Reconstruction Loss
    4.4.5. Perceptual Reconstruction Loss
    4.4.6. Discriminator Loss
  4.5. Test Process
5. Evaluation
  5.1. Datasets
    5.1.1. Chinese Character
    5.1.2. English Character
  5.2. Metrics
    5.2.1. SSIM
    5.2.2. L1 Distance
    5.2.3. Classification Accuracy
  5.3. Implementation Details
  5.4. Baselines
    5.4.1. CycleGAN
    5.4.2. MUNIT
    5.4.3. StarGAN
  5.5. Experiment
    5.5.1. Typeface Completion
    5.5.2. Character Reconstruction
    5.5.3. Ablation Study
    5.5.4. Face Generation
6. Analysis
7. Conclusion