The conversation about whether neural networks are going to replace designers has been going on for nearly a decade.
However, designers are still in charge of kerning and spend weeks calculating sidebearings and letter spacing.
What are those neural networks that designers use? How do they work?
Most image-generating neural networks use the GAN (Generative Adversarial Network) algorithm. A GAN is trained on real examples and creates images that look as real as possible; such an algorithm can, for instance, generate a photo of a person who doesn’t exist.
The algorithm consists of two neural networks, the generator and the discriminator: the former generates images, and the latter evaluates whether they look real enough to be delivered to the user. Before these networks can handle requests, they need to be trained on a set of sample images.
The more photos you upload, the more efficient the generator is and the pickier the discriminator gets.
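A minimal sketch of that generator-versus-discriminator loop in PyTorch; the network sizes and the data loading are placeholders for illustration, not any production model:

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps random noise to a flattened 64x64 image,
# the discriminator scores how "real" an image looks. Sizes are placeholders.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """real_images: a (batch, 64*64) tensor of training photos scaled to [-1, 1]."""
    batch = real_images.size(0)
    noise = torch.randn(batch, 100)

    # 1. The discriminator learns to tell real images from generated ones.
    fake_images = G(noise).detach()
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1)) + \
             loss_fn(D(fake_images), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. The generator learns to produce images the discriminator accepts as real.
    g_loss = loss_fn(D(G(noise)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```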
What should you use to create a font? The Midjourney bot?
In theory, yes. Midjourney uses the GAN algorithm, too.
You can use Midjourney’s Blend command to get something like the arithmetic mean of two fonts, although this bot (originally conceived for illustration) is a better fit for lettering.
In theory, one can also use the Photoshop beta to design letters: its built-in AI tries to make the typography as similar to an uploaded image as possible, both in tone and in style. Photoshop rarely succeeds at it; more often than not, though, it comes up with recognisable capital letters.
Photoshop. Prompt: An alternative letter R for the Apoc typeface
There’s also a ready-made model called DeepFloyd IF. Its authors trained it on text blocks (among other things), which is why the letters it produces are recognisable; still, DeepFloyd IF is not yet fit for creating a finished, ready-for-service typeface.
DeepFloyd IF. Prompt: Serif typeface with numbers and punctuation
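DeepFloyd IF is published on Hugging Face and can be run through the diffusers library. A sketch of its first (64×64) stage, assuming you have accepted the model licence and have a GPU; the later upscaling stages are omitted:

```python
import torch
from diffusers import DiffusionPipeline

# Stage I of DeepFloyd IF generates a 64x64 image; stages II-III upscale it.
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # lets the model fit on a single consumer GPU

prompt = "Serif typeface with numbers and punctuation"
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
).images[0]
image.save("if_stage1.png")
```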
Most designers use the StyleGAN generator. It has to be trained separately for each project; on the other hand, its settings are more flexible than those of the ready-made tools above.
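Once trained, sampling from such a model is brief. A minimal sketch, assuming NVIDIA’s stylegan2-ada-pytorch implementation and a checkpoint of your own (the file name is a placeholder); it has to run with that repository on the Python path so the pickled classes resolve:

```python
import pickle
import torch

# Run from inside the stylegan2-ada-pytorch repo so its custom classes unpickle.
with open("network-snapshot-000200.pkl", "rb") as f:  # your own training snapshot
    G = pickle.load(f)["G_ema"].cuda()  # moving average of the generator weights

z = torch.randn([1, G.z_dim]).cuda()  # a random point in the latent space
img = G(z, None)                      # NCHW tensor with values roughly in [-1, 1]
img = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)  # map to [0, 255]
```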
Does this mean that you can already use a text font generated by a neural network?
Yes. The website of Daniel Wenzel, an art director at DIA Studio, features the entire Untitled AI GAN collection. Daniel and his colleague Jean Boehm picked the fonts to train the model with and then, after changing the input data almost 2,000 times, obtained a collection of ten fonts: six serifs and four sans serifs.
Untitled Ai GAN seed0380
Untitled Ai GAN seed1259
How long does it take to teach a neural network to generate fonts?
Designers from the Berlin-based NaN studio say that it takes three hours of training to get a more or less readable glyph, while the ideal learning time is ten hours. It took them 550 hours and 2,674 fonts from the Google Fonts library to obtain their 110-glyph MachinelearningFont.
MachinelearningFont, model training process
And what about something more application-oriented?
There are tools like that. For example, in 2016, designer Keitaro Sakamoto and developer Tetsuo Sakomura came up with an AI called John that digitises hand-drawn kanji characters.
Kanji digitalisation process
The Chinese studio NEXT Lab has been working on its Zizzi AI for five years now. Initially, Zizzi was only capable of designing various weights, but by 2022 it had learned to handle widths as well. Since hanzi characters consist of two parts (one conveying the meaning, the other the sound), Zizzi runs two trained models whose outputs are combined before ending up in a discriminator.
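NEXT Lab hasn’t published Zizzi’s internals, so the following PyTorch sketch only illustrates the layout just described: two placeholder generators, one per character part, whose outputs are merged before a single discriminator scores the result. Every module name and size here is an assumption:

```python
import torch
import torch.nn as nn

class PartGenerator(nn.Module):
    """Placeholder generator for one component of a hanzi character."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh()
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

semantic_gen = PartGenerator()  # model trained on the meaning-carrying components
phonetic_gen = PartGenerator()  # model trained on the sound-carrying components

# A single discriminator judges the combined character, not the parts.
discriminator = nn.Sequential(
    nn.Flatten(), nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1)
)

z1, z2 = torch.randn(1, 100), torch.randn(1, 100)
# Naive combination: overlay the two rendered parts on one canvas.
character = torch.maximum(semantic_gen(z1), phonetic_gen(z2))
realness_score = discriminator(character)
```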
Technically, one could achieve the same result by writing the drawing code by hand, but it would take far more time, since the designer would have to specify the location of every point in the raster image themselves.
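To get a feel for that tedium, here is a small Pillow sketch that rasterises a single crude stroke from a hand-typed coordinate list; every value is made up for illustration:

```python
from PIL import Image, ImageDraw

# Even one crude diagonal stroke means typing in every vertex by hand;
# a full glyph set would require thousands of such coordinates.
img = Image.new("L", (200, 200), color=255)
draw = ImageDraw.Draw(img)
draw.polygon(
    [(40, 180), (90, 20), (110, 20), (160, 180), (132, 180), (100, 58), (68, 180)],
    fill=0,
)
img.save("stroke.png")
```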
A raster image? Does that mean that neural networks don’t work with vector graphics?
Not yet. To create MachinelearningFont and the Untitled AI GAN collection, the designers generated one PNG image for each glyph.
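That dataset step is easy to reproduce. A minimal sketch with Pillow, assuming a hypothetical font file and rendering only the capital letters:

```python
import os
import string
from PIL import Image, ImageDraw, ImageFont

# Render one PNG per glyph, the training-data format described above.
# "MyFont.ttf" and the sizes are placeholders.
os.makedirs("dataset", exist_ok=True)
font = ImageFont.truetype("MyFont.ttf", size=400)
for ch in string.ascii_uppercase:
    img = Image.new("L", (512, 512), color=255)
    draw = ImageDraw.Draw(img)
    draw.text((256, 256), ch, font=font, fill=0, anchor="mm")  # centre the glyph
    img.save(f"dataset/{ord(ch):04X}.png")
```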
But AIs are learning. For example, Word-As-Image, from the authors of Stable Diffusion, recognises and handles the control points of letter outlines. In the demo version, you can choose a font to serve as a basis, enter a letter (only a letter, not just any glyph) and specify what you want it to transform into. Word-As-Image won’t let you download a new font or train your own model, though.
The neural network is poorly trained and rarely manages to produce a recognisable object or preserve the letter’s legibility.
Word-as-Image in use
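Those outline control points are exactly what a font file stores, and you can inspect them yourself with fontTools; the font path here is a placeholder:

```python
from fontTools.ttLib import TTFont
from fontTools.pens.recordingPen import RecordingPen

# Replay the drawing commands of one glyph to list its outline points.
font = TTFont("MyFont.ttf")  # placeholder path
pen = RecordingPen()
font.getGlyphSet()["R"].draw(pen)
for command, points in pen.value:
    print(command, points)  # e.g. ('moveTo', ((120, 0),)) or ('qCurveTo', ...)
```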
The design community on Twitter believes that the technology underlying Word-As-Image has huge potential, albeit in different use cases.
Is there any useful neural network that doesn’t deal with images directly?
The Glyphs app forum features a separate thread on how users write scripts with the help of ChatGPT. For instance, Álvaro Franca from Vasava Studio shows a script that helps work with accents: you hover over a letter, and it shows all of its accent options as well as other characters with similar accents. Gor Jihanian presents a script that generates a number of italic variants with various slope angles.
A script by Álvaro Franca in use
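Such scripts rely on Glyphs’ built-in Python API. This is not Franca’s actual script, just a minimal example in the same spirit that lists every glyph built with a given accent component (the component name is an assumption):

```python
# Runs in Glyphs' Macro panel; a minimal example, not the forum script itself.
from GlyphsApp import Glyphs

font = Glyphs.font    # the font currently open in Glyphs
accent = "acutecomb"  # component to search for (an assumed name)

for glyph in font.glyphs:
    layer = glyph.layers[0]  # check the first master only, for brevity
    if any(c.componentName == accent for c in layer.components):
        print(glyph.name)
```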
If ChatGPT can code, can it help create a neural network?
Yes, it can; there are even guides featuring lists of prompts to enter. However, the user has to generate each piece of code separately, which means that anyone willing to create a neural network with ChatGPT needs to understand how neural networks work, know Python, and be comfortable with a code editor.
Most neural networks, and even trained models, are open source, which is why, if you can code, it is easier to build a new neural network that differs from the existing ones.
If the AI already knows how to generate fonts, separate styles and scripts for font editors, why can’t it replace designers?
A designer still manages the entire process, both from the creative and the technical perspective. First they decide which kind of AI they need, then they choose which data to feed it.
Besides, any fonts or glyphs generated by an AI have to be at least checked (but more often than not, also corrected) by a human.