Artist or not, it can be frustrating to pick up a pen, try to draw something you have a vivid picture of in your mind, and find that it just doesn’t look quite right. Product design and development firm Cambridge Consultants is leveraging machine learning to complete and enhance a drawing that’s been started with a human sketch. ‘Vincent’, the breakthrough ML system, ‘combines a user's sketch with the digested sum of art since the Renaissance, as if Van Gogh, Cezanne and Picasso were inside the machine, producing art to order.’
Last week at the Deep Learning Summit in London, we were lucky enough to experience first-hand the work that Cambridge Consultants are doing with Vincent, their ‘deep learning demonstration which builds on human input to create completed works of art.’
Watch the demo of Vincent here:
Machine learning has made its way into the arts before: Jukedeck, who have built a platform that can compose original music using neural networks, provide industry solutions for companies and individuals in need of soundtracks, jingles, and theme tunes. The neural networks are trained on MIDI notes and chords, taking in short sequences of music and predicting, with some accuracy, the notes or chords that should come next in the sequence. Founder and CEO Ed Newton-Rex, who spoke at the DL Summit last week, touched on the scepticism many people feel about bringing machine learning into the arts:
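Jukedeck’s actual models aren’t public, but the underlying idea, predicting the next note from the notes heard so far, can be sketched with a toy bigram model over MIDI note numbers. The phrase and helper names below are invented for illustration, not Jukedeck’s code:

```python
from collections import Counter, defaultdict

def train_bigram_model(note_sequence):
    """Count, for each note, which notes follow it in the training data."""
    following = defaultdict(Counter)
    for current, nxt in zip(note_sequence, note_sequence[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, current_note):
    """Return the most frequent successor of current_note, or None if unseen."""
    if current_note not in model:
        return None
    return model[current_note].most_common(1)[0][0]

# MIDI note numbers for a simple C-major phrase (60 = middle C)
phrase = [60, 62, 64, 65, 64, 62, 60, 62, 64, 65, 67, 65, 64]
model = train_bigram_model(phrase)
print(predict_next(model, 64))  # 65 follows 64 twice, 62 once -> 65
```

A real system would use a recurrent or attention-based network over much longer contexts, but the training signal is the same: given a short sequence, predict what comes next.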
A lot of people still believe music is about emotion and memory. To what extent will machines be able to touch emotion in music or anything else? Can the machine understand emotional content and frame that to an experience?
Of course this can similarly be applied to art, and Cambridge Consultants' Machine Learning Director Monty Barlow explained that although AI has been able to compose music, and equally to create an image or song based on sounds or pictures it’s seen before, it ‘has never had the ability to interpret what a human is drawing and then complete the piece for them. Beyond simple machine-generated art, Vincent is an engaging, interactive system in which the output is guided and influenced by the user.’
At the Summit, the machine learning team set up Vincent and allowed attendees to become talented artists, watching their sketches transform into works of art. They explained to attendees that the AI applies the history of human art and uses generative adversarial networks (GANs) to build an understanding of contrast, colour and texture, based on the thousands of works that the system was trained on. Monty said that ‘what we’ve built would have been unthinkable to the original deep learning pioneers – by successfully combining different ML approaches such as adversarial training, perceptual loss, and end-to-end training of stacked networks, we’ve created something hugely interactive, taking the germ of a sketched idea and allowing the history of human art to run with it.’
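Vincent’s architecture hasn’t been published, but the adversarial idea behind GANs can be shown in miniature: a discriminator is trained to tell real samples from generated ones, while (in the full method) a generator is simultaneously trained to fool it. The toy below trains only the discriminator half on made-up 1-D ‘features’; the distributions, learning rate and function names are illustrative assumptions, not Vincent’s implementation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def d_scores(x, w, b):
    """Discriminator: probability that each sample is 'real', not generated."""
    return sigmoid(w * x + b)

def d_loss(real, fake, w, b):
    """Standard GAN discriminator loss: score real samples high, fakes low."""
    return -(np.mean(np.log(d_scores(real, w, b)))
             + np.mean(np.log(1.0 - d_scores(fake, w, b))))

def d_grad(real, fake, w, b):
    """Analytic gradient of d_loss with respect to (w, b)."""
    pr = d_scores(real, w, b)          # want these near 1
    pf = d_scores(fake, w, b)          # want these near 0
    dw = -np.mean((1.0 - pr) * real) + np.mean(pf * fake)
    db = -np.mean(1.0 - pr) + np.mean(pf)
    return dw, db

rng = np.random.default_rng(0)
real = rng.normal(2.0, 0.5, 200)   # invented 'real artwork' feature values
fake = rng.normal(-2.0, 0.5, 200)  # generator output, initially far off
w, b = 0.1, 0.0
before = d_loss(real, fake, w, b)
for _ in range(100):               # gradient descent on the discriminator
    dw, db = d_grad(real, fake, w, b)
    w, b = w - 0.1 * dw, b - 0.1 * db
after = d_loss(real, fake, w, b)
```

In a full GAN, a second optimisation step would nudge the generator so its output distribution drifts toward the real one, with each network improving in response to the other; that adversarial pressure is what lets a trained system produce convincing texture and colour.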
We watched attendees drawing on the tablet and saw Vincent interpret their lines and sketches to build out a piece of art. The output of each work was distinctly varied and in keeping with the style each user started with. These relevant, finished pieces were not only exciting to see; it also became apparent how this kind of AI could be applied in industries far beyond art, such as the design of autonomous vehicles and digital security. The technology could create training scenarios and simulations with almost limitless variation and convincing detail, beyond what humans could produce.
'We’re exploring completely uncharted territory – much of what makes Vincent tick was not known to the machine learning community just a year ago' said Barlow. 'We’re excited to be at the leading edge of an emerging, transformative industry and to be making the leap from the art of the possible to delivering practical machine learning solutions for our clients.'