“I’m Just Not Creative” Says the Robot
Aug 3, 2017 / Photography / how to be creative
There was an article on Hyperallergic this past week entitled “Humans Prefer Computer-Generated Paintings to Those at Art Basel.” The article discussed a Rutgers study in which researchers mixed paintings made by artificial intelligence (AI) with paintings by Abstract Expressionist artists and asked people to choose the images they preferred.
People largely preferred the computer-generated ones and thought that they were creative.
I laughed. “Humans could be fooled” was my first thought. How could they not tell the difference?
And then I thought, some artists must feel threatened by this. There would be even more competition. Artists will now be competing with robots for an audience!
So how did the AI learn how to create images?
According to the article, the researchers had the AI “study” 80,000 WikiArt images of Western paintings from the 15th to the 20th century so that it could learn what had succeeded in the past.
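For the technically curious: what does it mean for a machine to “study” paintings? The Rutgers system was built on an adversarial setup, in which one network generates images and another judges them against the real thing. Below is a minimal sketch of that idea in Python with PyTorch, not the researchers’ actual code; the folder name, image size, and tiny network sizes are my own illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_dim, image_size = 100, 64

# Load paintings from a local folder (hypothetical path; ImageFolder expects
# at least one subfolder of images, e.g. one per style or genre).
transform = transforms.Compose([
    transforms.Resize(image_size),
    transforms.CenterCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
paintings = datasets.ImageFolder("wikiart_paintings/", transform=transform)
loader = DataLoader(paintings, batch_size=64, shuffle=True)

# Generator: turns random noise into a (flattened) image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * image_size * image_size), nn.Tanh(),
).to(device)

# Discriminator: guesses whether an image is a real painting or a generated one.
D = nn.Sequential(
    nn.Linear(3 * image_size * image_size, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):  # a real run would train far longer than this
    for real, _ in loader:
        real = real.view(real.size(0), -1).to(device)
        n = real.size(0)
        ones = torch.ones(n, 1, device=device)
        zeros = torch.zeros(n, 1, device=device)

        # "Study": the discriminator learns to tell real paintings from fakes.
        fake = G(torch.randn(n, latent_dim, device=device))
        d_loss = loss_fn(D(real), ones) + loss_fn(D(fake.detach()), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # "Create": the generator learns to produce images the discriminator
        # can no longer distinguish from the real paintings.
        g_loss = loss_fn(D(fake), ones)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
```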
What can we learn from this study when we make art?
The biggest takeaway from this study is that it reaffirms the idea that creativity can be learned. How many times have I come across people who tell me, “I’m just not creative. I can’t do what you do.” Hogwash! If a computer can learn to do it, then anybody can. I think most people just don’t *let* themselves be creative because of fear: fear of looking ridiculous, fear of failure, fear of whatever.
The second takeaway from this study is that to learn something new, you must study what has come before. You can’t create art in a vacuum. You have to immerse yourself in all the art that has been made. In a way, it follows what Isaac Newton said about achievement:
“If I have seen a little further, it is by standing on the shoulders of Giants.”
That’s why education in our field of study is important. It’s not about forcing us to learn boring stuff. It’s about surveying our chosen field so that we don’t think we’re at the forefront when in fact we’re just repeating what has already been done.
The last thing I learned from this article is that the meaning of an artwork ultimately resides with the viewer. The machine in this study couldn’t have “thought of” the meaning of the images it was creating (I don’t think we are there yet in AI development).
When viewers look at a piece of art, they manufacture meaning. They make up a story about what the art means even though the artist (or, in this case, the machine) imparted no meaning to begin with. This is a fascinating realization: viewers attribute meaning where none exists. That tendency speaks to our search for meaning in anything we do, doesn’t it?