When I graduated in the late 90s the world was a very different place from the one we see today. It was more equal, for a start; the drastic disparity and inequality we see across the world has not been helped, and many would say has been actively propagated, by the ruthless growth of tech firms such as Google, Facebook and Amazon, who moved into the embryonic internet space before governments and authorities had a chance to regulate them. Establishing monopolies quickly enabled them to make a fast buck and to tighten their grip on their Orwellian agendas.
Now that social media has transformed society, in many ways not for the better, the next stage of the technological revolution is just around the corner: the internet of things. Washing machines talking to light bulbs, televisions talking to radios, heating systems talking to your smartphone. The technology is already here; it's just not widespread yet. But with the dawn of Alexa, Siri and the countless other AI tools we are freely inviting into our lives, it is merely a matter of time before artificial intelligence feels as close and intuitive to us as natural intelligence.
Where does this leave the publishing industry, and more specifically those gatekeepers at the heart of many commercial decisions, who control the day-to-day ebb and flow of books that make it onto our shelves and into bookshops?
One of the first questions we should ask, before we even consider the overbaked state of publishing roles as they stand today, with internal editors, external editing houses and literary agents all jostling to stake a claim in the gatekeeping process, is not what value they currently add, but whether a computer or algorithm could predict a likely bestseller more accurately than a human being.
There are, of course, many factors involved in any such decision, but at its most fundamental it comes down to judging whether one particular arrangement of words on a page will inspire more people, and sell more copies, than another. That, essentially, is what literary agents try to do day by day. Many in publishing will tell you the decisions are more complicated than that, that they need to weigh many more factors (the standing of the author, their own network for promotion, market trends, whether it hits the zeitgeist, and so on), and that the very fabric of literary culture would come crashing down were it not for their curatorship. I wonder, however, whether these additional factors are mere diversions, distracting us from the core question, one that never fades and often defies trends or zeitgeists.
For example, the bestselling author in history had her work turned down by many large publishers and gatekeepers in the book industry. Quite simply, many authoritative figures got it wrong, refused to take a bet on what turned out to be the most commercial of books, and, rumour has it, did so in a rather snobbish way too. Of course, many agents exist not to proactively curate a list of books that will maintain literary quality, but simply to optimize the commercial tide of fiction and non-fiction that comes their way, trying to ride the crest of the wave to make their margins. But the complexity of evaluating 100,000-plus words on a page makes the task prone to human error. Computers and artificial intelligence, with their brute-force ability to make sophisticated calculations within seconds, could offer us speedier decisions and more reliable results.
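To make the idea concrete, here is a deliberately toy sketch of what the crudest version of such a system might look like: a bag-of-words scorer that ranks submissions by their textual similarity to a corpus of past bestsellers. Everything here (the function names, the tiny corpus, the cosine-similarity approach) is my own illustrative assumption, not a description of any real acquisition tool; a serious model would need far richer features and vastly more data.

```python
# Toy illustration only: rank manuscript submissions by word-frequency
# similarity to a hypothetical corpus of past bestsellers.
from collections import Counter
import math

def word_counts(text):
    """Lowercase word frequencies for a piece of text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_submissions(bestseller_corpus, submissions):
    """Score each submission against an aggregate bestseller profile
    and return (title, score) pairs, highest-scoring first."""
    profile = Counter()
    for text in bestseller_corpus:
        profile.update(word_counts(text))
    scored = [(title, cosine_similarity(profile, word_counts(text)))
              for title, text in submissions.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Feeding it a couple of (invented) bestseller snippets and two submissions, `rank_submissions` would place the manuscript whose vocabulary most resembles the bestseller profile at the top of the slush pile in a fraction of a second. That speed is the real point of the sketch; whether word frequencies carry any genuine signal about commercial success is precisely the open question.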
Algorithms already rule our lives. Facebook, Amazon and Google all guard theirs closely; they are the core of their business models. Could there be an algorithm for the perfect novel? Whoever creates it could become the next Amazon, Smashwords or iBooks and, by successfully plumbing the intricate logic of which groupings of words actually drive some books to the top of the charts, could cut a swathe through the ranks of literary agents.