A standard technique in facial recognition software is to use an algorithm to create a “faceprint” of a given person and to use that faceprint to try to match a person’s face with photos. To grossly oversimplify, if you want to teach an algorithm how to distinguish a particular person (say Fanon) from a collection of other people, you need a big collection of photographs of people’s faces, with everyone’s face labeled. You then take all the faces of Fanon, align them so their eyes and mouth are in the same place, and average them together. Then you take all the other faces in the collection and average them together. If you subtract the average image of all the other people from the average of Fanon’s face, you end up with a “faceprint” for Fanon showing what distinguishes him from everyone else in the dataset. You can then use this faceprint to identify any future images of Fanon’s face that you might come across. These “portraits” translate those faceprints (which in their ‘native’ form are mathematical abstractions) into an image that human eyes can recognize as a face.
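The averaging-and-subtraction idea above can be sketched in a few lines of NumPy. This is a toy illustration under simplifying assumptions, not how any particular facial recognition product works: it assumes the faces have already been aligned and flattened into grayscale vectors, and the function names and four-pixel “faces” are hypothetical.

```python
import numpy as np

def make_faceprint(target_faces, other_faces):
    """Average the target's faces, average everyone else's, subtract.

    The result highlights what distinguishes the target from the
    rest of the dataset.
    """
    target_mean = np.mean(target_faces, axis=0)
    other_mean = np.mean(other_faces, axis=0)
    return target_mean - other_mean

def match_score(faceprint, face):
    """Score a new face against the faceprint (higher = more target-like).

    Centering the candidate face before projecting it onto the
    faceprint reduces the effect of overall brightness.
    """
    return float(np.dot(face - face.mean(), faceprint))

# Toy data: each row is one aligned, flattened 4-pixel "face".
fanon = np.array([[1.0, 0.2, 0.2, 0.2],
                  [0.9, 0.3, 0.1, 0.2]])
others = np.array([[0.2, 0.9, 0.3, 0.1],
                   [0.1, 0.8, 0.2, 0.3]])

fp = make_faceprint(fanon, others)
# A face resembling Fanon's images should score higher than one of the others.
```

Real systems operate on far higher-dimensional vectors and more sophisticated features, but the subtraction step is the same in spirit: the faceprint is a difference of averages.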
I’ve done a number of portraits of historical activists, artists, and philosophers in this way, including portraits of Simone de Beauvoir, Samuel Beckett, and Simone Weil.