Justine Emard is a French artist based in Paris. She works across a variety of mediums including photography, video, installation and augmented reality. For several years she has been collaborating with the lab of Japanese researcher Professor Takashi Ikegami at the University of Tokyo. In her work Co(AI)xistence, a human dancer (Mirai Moriyama) and a human-like robot interact in a variety of ways, suggesting both courtship and confrontation.

I spoke with her to find out more about her unique practice and the ideas that motivate her work.

Read More

This post discusses a speculative design project that questioned the role of technology within complex social issues like gender inequality. Drawing on feminist theory and computational linguistics, the project culminated in a research device that monitored the spoken 'gendered language' within its vicinity and revealed these patterns back to users in real time.
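The core mechanism of such a device can be sketched in a few lines. This is purely illustrative, assuming a simple lexicon-matching approach; the word lists and categories below are invented for the example and are not the project's actual lexicon.

```python
# Hypothetical sketch: tally gendered terms in a speech transcript.
# The categories and word lists are illustrative stand-ins only.
import re
from collections import Counter

GENDERED_TERMS = {
    "masculine": {"he", "him", "his", "man", "men", "guys"},
    "feminine": {"she", "her", "hers", "woman", "women"},
}

def gendered_counts(transcript: str) -> Counter:
    """Count gendered terms per category in a lowercased, tokenised transcript."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter()
    for token in tokens:
        for category, terms in GENDERED_TERMS.items():
            if token in terms:
                counts[category] += 1
    return counts

print(gendered_counts("She said he and his colleagues ignored her idea."))
```

A real system would sit downstream of a speech-to-text engine and stream these counts back to a display in real time.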

Read More


Daria Parkhomenko is the founder and director of LABORATORIA, an Art and Science Foundation based in Moscow, Russia. For the last ten years the foundation has promoted a dialogue at the intersection of Art and Science, realised as exhibitions, international collaborations, symposiums and conferences. Following the successful Daemons in the Machine exhibition, which presented works related to Artificial Intelligence and society, I spoke with Daria about the concepts driving the exhibition and her views on AI and Art.

Read More

Can you teach a robot to draw? We normally think of drawing as an (almost) exclusively human activity. While other animals, such as chimpanzees, can be trained to paint or draw, drawing isn’t something that is regularly observed in animals other than humans.

At SensiLab, we have been working for several years on making robots designed explicitly to create their own drawings. These drawing robots, or drawbots, are relatively simple, insect-like machines that often work cooperatively as swarms and exhibit common aspects of collective behaviour and emergent intelligence.
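To give a flavour of how simple such an agent can be, here is a minimal wandering drawbot in Python. This is an illustrative sketch, not SensiLab's actual robot code: one agent takes small random turns and moves forward, leaving a trace of pen positions behind it.

```python
# Illustrative sketch (not SensiLab's actual drawbot code): a minimal agent
# that wanders with small random heading changes, recording its pen trace.
import math
import random

class DrawBot:
    def __init__(self, x=0.0, y=0.0, heading=0.0, seed=None):
        self.x, self.y, self.heading = x, y, heading
        self.rng = random.Random(seed)
        self.trace = [(x, y)]  # pen positions visited so far

    def step(self, speed=1.0, turn_noise=0.3):
        # Small random turn, then move forward: the simplest wander behaviour.
        self.heading += self.rng.uniform(-turn_noise, turn_noise)
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)
        self.trace.append((self.x, self.y))

bot = DrawBot(seed=42)
for _ in range(100):
    bot.step()
print(len(bot.trace))  # 101 points: the start plus 100 steps
```

Swarm behaviour emerges when many such agents also react to each other's positions or marks, which is where the collective, emergent qualities of the drawings come from.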

Read More

Searching for "mountain" in your Apple or Google photos library, browsing the Related Pins section in Pinterest, or using Google Lens to find information about the world around you: these are all instances of content-based search. With the increasing ubiquity of cameras, more and more of the data we create is visual: 400 hours' worth of video is uploaded to YouTube every minute, and 80 million photos are shared on Instagram on an average day.
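At the heart of most content-based search systems is a simple operation: embed each image as a feature vector, then rank the library by similarity to a query vector. The sketch below shows that ranking step with toy two-dimensional vectors standing in for features from a real vision model.

```python
# Sketch of the core of content-based search: rank a library of precomputed
# image embeddings by cosine similarity to a query embedding.
import numpy as np

def rank_by_similarity(query: np.ndarray, library: np.ndarray) -> np.ndarray:
    """Return library indices ordered from most to least similar to the query."""
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = lib @ q  # cosine similarity of each library image to the query
    return np.argsort(-scores)

library = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])  # toy "image" features
query = np.array([0.9, 0.1])
print(rank_by_similarity(query, library))  # nearest image first
```

Production systems replace the exhaustive dot product with approximate nearest-neighbour indexes so the ranking scales to billions of images.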

Read More

The NVIDIA Jetson TX2 is a great, low-power computing platform for robotics projects involving deep learning. Its ARM64 architecture means that pre-built binaries are harder to come by, so I've documented some time-saving tips to go from initial setup to working with some popular deep learning and audio libraries.
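A quick first check before hunting for wheels is whether you are on an ARM64 host at all, since that is what forces pip to fall back to source builds for many packages. A minimal sketch using only the standard library:

```python
# Sanity check: am I on an ARM64 (aarch64) machine like the Jetson TX2,
# where pre-built PyPI wheels are scarce and source builds are the norm?
import platform

def needs_source_build() -> bool:
    """True on ARM64 hosts, where many packages ship no binary wheel."""
    return platform.machine() in ("aarch64", "arm64")

print(platform.machine(), "-> source build likely:", needs_source_build())
```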

Read More


Algorithmic music composition has always had a place in computing. One of the first applications Ada Lovelace proposed for the Analytical Engine almost 200 years ago was the generation of music compositions. Those 200 years have seen a lot of progress in this area, but they have also reiterated something musicians have always known: writing music is hard.
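One of the oldest and simplest algorithmic composition techniques is a Markov chain over pitches: each note is chosen based only on the previous one. The transition table below is invented purely for illustration, not drawn from any real corpus.

```python
# A toy algorithmic composer: a first-order Markov chain over pitch names.
# The transition table is invented for illustration.
import random

TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def compose(length=8, start="C", seed=None):
    """Walk the transition table, emitting one pitch per step."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(" ".join(compose(seed=1)))
```

Models like this produce locally plausible but aimless melodies, which is precisely the gap that modern deep learning approaches to composition try to close.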

Read More

Anna Ridler is an artist and researcher who lives and works in London. She has degrees from the Royal College of Art, Oxford University and University of the Arts London, and has shown at a variety of cultural institutions and galleries including Ars Electronica, Sheffield Documentary Festival, Leverhulme Centre for Future Intelligence, Tate Modern and the V&A.

Read More


Here at SensiLab we are designing, implementing and evaluating improvisational interfaces - computer interfaces for improvising in creative arts domains. Ultimately we are interested in realising the notion of machines as creative collaborators, and as part of this overall goal we seek to implement improvisational behaviours in machine agents, facilitate improvised exchanges between humans and computers, and create guidelines for designing interfaces that support improvised interaction.

Read More

Deep Convolutional Neural Networks (DCNNs) have made remarkable progress in synthesising high-quality, coherent images. The best-performing neural network architectures in the generative domain are the class of Generative Adversarial Networks (GANs). These networks have been applied in many different scenarios: domain transfer of images, e.g. going from sunny to winter environments; generating images from text descriptions; using computer-generated data to train neural networks for real-world settings; and many other domains.
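The adversarial idea behind GANs fits in a few lines: a discriminator D learns to tell real images from generated ones, while a generator G learns to fool it. Below, scalar probabilities stand in for the outputs of real networks; this is a sketch of the standard losses, not a training implementation.

```python
# The GAN objective in miniature. The discriminator minimises
# -(log D(x) + log(1 - D(G(z)))); the generator, in the common
# "non-saturating" variant, minimises -log D(G(z)).
import math

def d_loss(d_real: float, d_fake: float) -> float:
    """Discriminator loss on one real and one fake sample."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss_nonsaturating(d_fake: float) -> float:
    """Non-saturating generator loss: -log D(G(z))."""
    return -math.log(d_fake)

# A well-fooled discriminator (D(G(z)) near 1) means low generator loss:
print(g_loss_nonsaturating(0.9) < g_loss_nonsaturating(0.1))  # True
```

Training alternates gradient steps on these two losses, which is what drives the generator towards ever more convincing images.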

Read More