Art or Not

[Image: Screenshots of the app]
Art or Not is an AI-powered art critic that uses its understanding of over 100,000 artworks to figure out whether things around you are ‘art’ or ‘not art’. The app uses machine learning to decompose the image you capture into important features. These features are then sent to the ‘brain’ of the app, which determines whether what you captured fits its definition of art.
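The post doesn't spell out the exact architecture, but the pipeline it describes (a feature extractor feeding a classifier) might look something like this minimal PyTorch sketch. The ResNet-50 backbone (one of the networks in the demo below), the linear head, and the file name are illustrative assumptions, not the app's actual implementation:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# The backbone turns a capture into a feature vector; a small linear head
# stands in for the app's 'brain'. All names here are illustrative.
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classifier, keep 2048-d features
backbone.eval()

brain = torch.nn.Linear(2048, 2)    # hypothetical 'art' vs 'not art' head

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("capture.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = backbone(image)                # the 'important features'
    probs = brain(features).softmax(dim=1)    # class order is arbitrary in this sketch
print("art" if probs[0, 0] > probs[0, 1] else "not art")
```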


This post is a behind-the-scenes look at different aspects of the app. To read about the what and the why, see our research page.

The Artworks

[Image: Artworks from the dataset]

Data is the secret sauce that powers all machine learning applications, and Art or Not is no different. Our app uses artworks from the Met (Metropolitan Museum of Art) and the Art Institute of Chicago, totalling ~110,000 items. The dataset spans many different mediums (paintings, sculptures, ceramics) as well as many periods in art.

Access to artworks has been notoriously difficult in the past, but this changed in 2017 when the Met changed its access policy, allowing unrestricted use of artworks in the public domain under a CC0 (Creative Commons Zero) license. This meant that anyone could download one of the 375,000 public domain artworks and use it however they liked, without needing to ask for permission. Other well-known museums (the Rijksmuseum, for example) took this step well before the Met, but with much smaller collections. The Art Institute of Chicago (50,000 items), the British Library (over a million items), and the Smithsonian Design Museum (200,000 items) have since followed with their own open-access policy announcements, pushing artwork accessibility forward.
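The post doesn't describe how the images were gathered, but the Met's open-access collection is reachable through its public REST API (documented at https://metmuseum.github.io/). A minimal sketch of pulling one CC0 image might look like this; the object ID is an arbitrary example:

```python
import requests

# Fetch one object's metadata from the Met Collection API and download its
# primary image if the work is in the public domain (CC0).
BASE = "https://collectionapi.metmuseum.org/public/collection/v1"

meta = requests.get(f"{BASE}/objects/45734").json()
if meta.get("isPublicDomain") and meta.get("primaryImage"):
    with open("45734.jpg", "wb") as f:
        f.write(requests.get(meta["primaryImage"]).content)
    print(meta["title"], "by", meta["artistDisplayName"])
```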

Visually Similar Artworks

This is the feature that I love most about the app: capture a simple scene/texture/composition, and then discover an amazing array of similar-looking artworks. Since the network has its own measure of similarity, it can return some very unexpected results, making exploration wonderfully serendipitous. In other cases, it returns artworks that are eerily similar to the (mostly mundane) captures of everyday life, which are somehow even more joyous to scroll through.

The demo below (taken from one of our posts) gives a good demonstration of visual similarity search. It’s important to underline that this isn’t like Google’s “search by image” option, which tries to find matching images, not necessarily similar ones. See our post for a complete breakdown of how it all works.

This demo is a smaller version of what happens inside the app. Clicking an artwork below is the equivalent of capturing a photo with our app. Instead of finding similar results from a dataset of 10,000 artworks, the app performs the real-time search across more than 100,000 artworks.
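As a rough illustration of what such a search involves, the sketch below embeds every artwork with the same feature extractor used on captures and ranks by cosine similarity. It reuses `backbone` and `image` from the earlier sketch, and `artwork_images` is an assumed tensor of preprocessed dataset images; the real app's method is detailed in our post:

```python
import torch
import torch.nn.functional as F

# 'artwork_images' is an assumed (N, 3, 224, 224) tensor of preprocessed
# dataset images; in practice the gallery embeddings would be computed once
# and stored, not recomputed per query.
with torch.no_grad():
    gallery = F.normalize(backbone(artwork_images), dim=1)  # (N, 2048)
    query = F.normalize(backbone(image), dim=1)             # (1, 2048)

scores = query @ gallery.T             # cosine similarity against every artwork
top_scores, top_idx = scores.topk(10)  # indices of the ten most similar works
```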

Tips for interaction:

  • Select any of the artworks to perform a search.
  • Switch between network architectures to see how results differ.
  • Hit “refresh choices” to get a fresh new batch of artworks to pick from.
  • Hover over artworks to find out the artist and the name of the work.

Note: due to server limitations, search results have been pre-computed for this demo. This isn’t because the search itself is slow, as our app demonstrates :)

[Interactive demo: select an artwork to see similar artworks; switch between the inception-v4, resnet-50, and vgg-16 networks, or hit “refresh choices” for a new batch.]

CoreML

On-device machine learning made a lot of sense for the app, as it meant zero server costs, infinite scalability, and no privacy concerns. Smartphones as a whole are moving towards dedicated neural processing units, which means that performing inference on-device is extremely quick. This varies from case to case, of course, and on-device inference may not suit highly customised or extremely large architectures.

On iOS, taking advantage of these on-device performance benefits requires adapting your model to Apple’s CoreML format. I prefer using PyTorch as my ML framework, and it just so happens that coremltools (Apple’s handy model conversion utility) doesn’t directly support PyTorch models. So the conversion process involves going from PyTorch -> ONNX (an open deep learning exchange format) -> CoreML. The multistep conversion in itself isn’t cumbersome, but if you want to do anything non-default, you’ll have to dig through the (sparse) documentation and the GitHub repositories.
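As a rough illustration, the default two-step conversion might look like the sketch below. ResNet-50 and the file names are stand-ins, and the ONNX-to-CoreML step assumes the onnx_coreml package (that converter was later folded into coremltools itself):

```python
import torch
import torchvision.models as models

# Step 1: PyTorch -> ONNX via tracing with an example input.
model = models.resnet50(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["image"], output_names=["features"])

# Step 2: ONNX -> CoreML, assuming the onnx_coreml package.
from onnx_coreml import convert
mlmodel = convert(model="model.onnx")
mlmodel.save("ArtOrNot.mlmodel")  # hypothetical file name
```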

CoreML is definitely worth the effort, but it’s important to know that it may not be trivial to implement, especially when using PyTorch.

Try it Out!

Go download the app from the App Store and tell us what you think!

About the Author

Dilpreet is a member of the SensiLab Creative AI team and develops applications using deep learning for artistic and aesthetic expression. Follow Dilpreet on Twitter.

Website

dilpreetsingh.me

Discuss this Article

  • ai.SensiLab Podcast Episode 11 - Art...or not?
  • ai.SensiLab Podcast Episode 10 - Surveillance AI