Wang and Lian (2021) | DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning
Wang and Lian address the problem of synthesizing vector glyphs for fonts, building on the work of Carlier et al. (2020), Ha and Eck (2017), and Lopes et al. (2019). They aim to tackle the shortcoming that previous works could not synthesize visually pleasing vector glyphs, and name two reasons for this issue:
Single modality – they propose a dual-modality approach instead, combining a vector representation of the data with a raster image representation
Location-shift issue introduced by the Mixture Density Network (MDN) – they propose to employ a differentiable rasterizer (developed in the meantime by Li et al. (2020)) to impose an additional constraint on the drawing commands predicted by the MDN
This paper was submitted to SIGGRAPH Asia 2021 and published in ACM Transactions on Graphics.
Data representation
Like most other papers in this line of work, Wang and Lian consider only four SVG commands:
Move – move the drawing location (to start a new path)
Line – draw a straight line
Curve – draw a cubic Bézier curve
End – end the drawing-command sequence
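To make the four-command representation concrete, here is a minimal sketch of how a glyph could be stored as a sequence of such commands and serialized back into an SVG path string. The `DrawCommand` type and `to_svg_path` helper are hypothetical illustrations, not the paper's actual data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical encoding of the four command types used in the paper.
COMMANDS = ("move", "line", "curve", "end")

@dataclass
class DrawCommand:
    kind: str                      # one of COMMANDS
    args: Tuple[float, ...] = ()   # move/line: (x, y); curve: (c1x, c1y, c2x, c2y, x, y)

def to_svg_path(seq: List[DrawCommand]) -> str:
    """Serialize a command sequence into an SVG path string."""
    parts = []
    for cmd in seq:
        if cmd.kind == "move":
            parts.append("M {} {}".format(*cmd.args))
        elif cmd.kind == "line":
            parts.append("L {} {}".format(*cmd.args))
        elif cmd.kind == "curve":
            parts.append("C {} {} {} {} {} {}".format(*cmd.args))
        elif cmd.kind == "end":
            break  # End terminates the sequence
    return " ".join(parts)

# Toy glyph outline: move, one line, one cubic Bézier curve, then end.
glyph = [
    DrawCommand("move", (10.0, 10.0)),
    DrawCommand("line", (90.0, 10.0)),
    DrawCommand("curve", (90.0, 50.0, 50.0, 90.0, 10.0, 90.0)),
    DrawCommand("end"),
]
print(to_svg_path(glyph))
# → M 10.0 10.0 L 90.0 10.0 C 90.0 50.0 50.0 90.0 10.0 90.0
```

In the model itself, each command is typically represented numerically (a one-hot command class plus coordinate arguments) rather than as an SVG string; this sketch only illustrates the command vocabulary.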