Can you teach a robot to draw? We normally think of drawing as an (almost) exclusively human activity. While other animals, such as chimpanzees, can be trained to paint or draw, spontaneous drawing is rarely observed outside our own species.
At SensiLab, we have been working for several years on making robots designed explicitly to create their own drawings. These drawing robots, or drawbots, are relatively simple, insect-like machines that often work cooperatively as swarms and exhibit common aspects of collective behaviour and emergent intelligence.
Searching for “mountain” in your Apple or Google photo library, browsing the Related Pins section on Pinterest, or using Google Lens to find information about the world around you: these are all instances of content-based search. With the increasing ubiquity of cameras, more and more of the data we create is visual: 400 hours of video are uploaded to YouTube every minute, and 80 million photos are shared on Instagram on an average day.
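At its core, content-based search works by embedding each item as a feature vector and retrieving the nearest neighbours of a query vector. The sketch below illustrates the idea with NumPy; the 4-dimensional "feature vectors" and the `search` helper are illustrative stand-ins for the high-dimensional embeddings a real system (e.g. a neural network) would produce.

```python
import numpy as np

def cosine_sim(query, index):
    # Cosine similarity between one query vector and each row of the index.
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    return m @ q

def search(query, index, k=3):
    # Indices of the k most similar items, best match first.
    return np.argsort(-cosine_sim(query, index))[:k]

# Toy "photo library": 5 images represented by 4-d feature vectors.
rng = np.random.default_rng(0)
library = rng.normal(size=(5, 4))
query = library[2] + 0.01 * rng.normal(size=4)  # near-duplicate of image 2
print(search(query, library))  # image 2 should rank first
```

A production system would swap the brute-force scan for an approximate nearest-neighbour index, but the retrieval principle is the same.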
Deep Convolutional Neural Networks (DCNNs) have made remarkable progress in synthesising high-quality, coherent images. The best-performing neural network architectures in the generative domain are the class of Generative Adversarial Networks (GANs). These networks have been applied in many different scenarios: domain transfer of images, e.g. turning sunny scenes into winter ones; generating images from text descriptions; using computer-generated data to train neural networks for real-world settings; and many other domains.
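The adversarial idea behind GANs can be shown without any deep-learning library: a generator tries to produce samples the discriminator scores as real, while the discriminator learns to tell real from generated. The toy sketch below is an assumption-laden minimal version, with a one-parameter generator (a learned offset) and a logistic discriminator, trained on 1-d data rather than images.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

real = lambda n: rng.normal(3.0, 0.5, n)  # "real" data centred at 3.0
g_offset = 0.0                            # generator: shifts noise by this offset
w, b = rng.normal(), 0.0                  # discriminator: D(x) = sigmoid(w*x + b)
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient of binary cross-entropy w.r.t. w and b).
    x_real, x_fake = real(n), rng.normal(0.0, 0.5, n) + g_offset
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    grad_w = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w, b = w - lr * grad_w, b - lr * grad_b
    # Generator step: push D(fake) toward 1 by moving the offset.
    z = rng.normal(0.0, 0.5, n)
    d_fake = sigmoid(w * (z + g_offset) + b)
    g_offset -= lr * np.mean((d_fake - 1) * w)

print(g_offset)  # should drift toward 3.0, the mean of the real data
```

Real GANs replace both players with deep convolutional networks and train them on image batches, but the alternating min-max update is exactly this loop.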