Society & AI: Sabotage

Written on January 23, 2019

From January to April 2019 I am involved in a seminar course on the Socio-Cultural and Political Implications of Artificial Intelligence (ARTS490). This article is part of a series of essays written for the seminar on AI’s implications for society.

I. Work

When I tell people that I am researching artificial intelligence, their first response is generally, “Are you going to take all the jobs away?” Work is tightly intertwined with the automation of tasks by computers. As part of grappling with this concern, our seminar heard from David Jones, the Program Manager leading Microsoft’s Envisioning initiative. He is interested in how we will be personally and collectively productive (i.e., how we will work) over the next ten years.

Why might the future of AI and work matter to one of the world’s most valuable companies? In 2018 Microsoft restructured its entire organization around two investments: cloud computing and artificial intelligence. Today its artificial intelligence and research division has grown to 8,000 employees. All this experience in researching and productionizing artificial intelligence makes the company’s perspective on the future of work worth listening to. David Jones has encapsulated this perspective in a video series titled The Changing World of Work.

II. Sabotage

When we write stories of a world where humans work alongside artificial intelligence, it is usually the humans who are framed as the protagonists. In one of the short films Jones created, humans use interactive devices that speed up their decision-making and enhance their skills.

Sometimes our representation of the future is darker. We style ourselves as combatants against a screen of endless characters, or at the very least as the exploited party in our relationship with the technology we use. Perhaps it is not that simple.

[Film posters: The Matrix and Her]

When I look at the work we do as a society, I wonder if we are sometimes the antagonists in the story, subverting some of our collective needs in an attempt to meet individual ones. One of Microsoft’s interviewees reflected this:

“In one recent French study, a chief executive said that only 11% of his workforce were really excited and enthused. 70% found it was just a way to earn money to keep their family alive. And 19% were actively prepared to sabotage the organization, they disliked it so much.” - Charles Handy (The Changing World of Work)

If current work does not give all of humanity satisfaction, can we really expect intelligent machines to change that?

III. Learning From Ourselves

Here I’d like to argue that the current methods used in machine learning are not leading us towards a future where humans find greater satisfaction in work or transcend it completely. Take two examples:

  1. Learning what objects look like (ImageNet)
  2. Learning to answer questions (SQuAD)

Both are tasks that we have trained computers to perform with near-human accuracy. Yet the computers exhibit striking failure cases, often the result of dataset bias or adversarial techniques. Showing a computer thousands of examples of an object does not teach it to understand the object. Much of the recent success in task learning has come from increasing the size of networks and of their training datasets, but larger datasets also mean relying on data that is less carefully vetted.
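To make “adversarial techniques” concrete, here is a minimal sketch of the Fast Gradient Sign Method in PyTorch. The model choice, the epsilon value, and the input shape are illustrative assumptions of mine, not details from the essay or from Microsoft’s series:

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM): a classifier
# that labels images with near-human accuracy can be fooled by a pixel-level
# perturbation too small for a person to notice.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)
model.eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# image: a (1, 3, 224, 224) tensor; label: its true class index as a tensor.
# The perturbed image often receives a confidently wrong prediction.
```

The unsettling part is how little the image changes: the model’s “understanding” of the object collapses under a perturbation no human would register.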

If we extrapolate these simple tasks into a full work day or a year-long project, will we see these biases and breakages compounded? If computers are learning how to do our work, will we be showing them the 11% of workers who are enthusiastically doing their work or the 19% who are trying to sabotage the organization? Learning from our own work might be a dangerous proposition. Our current work may be sabotaging the future of work.

IV. Collaborators

The Envisioning series has something to say about this issue too, although not explicitly. It explores how human collaboration networks will become less bound to geography and organizational structure. It envisions flatter, less hierarchical organizations in which responsibility is more broadly shared. This is a vision that machine learning research can learn from as well.

I’d like to imagine what it would be like for AI to learn in collaboration with humans. A machine learning model should recognize the underlying tasks we are attempting to perform and learn from our approaches, but it should also have the freedom to explore novel solutions the same way we do. I wonder whether our current practice of evaluating models against rigidly structured datasets and single-number metrics is hindering our ability to design computers that are good explorers as well as good workers.
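As an illustration of what a “single-number metric” looks like in practice, here is a sketch of the exact-match scoring used for SQuAD-style question answering. The normalization details and examples below are my own simplified assumptions:

```python
# A sketch of exact-match scoring for SQuAD-style question answering.
# Credit is given only for answers identical to the reference, so a model
# "exploring" a valid alternative phrasing scores zero.
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

# Both answers below might satisfy a human asking "When did Apollo 11 land?",
# but only one matches the reference string.
print(exact_match("1969", "1969"))       # 1.0
print(exact_match("July 1969", "1969"))  # 0.0
```

A metric like this rewards conformity to the dataset, not the kind of exploratory problem-solving we value in human collaborators.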

One of my favourite examples of a collaborative relationship between AI and humanity is in the game Halo (also, incidentally, owned by Microsoft). An in-game AI named Cortana helps the player through the levels with advice and occasional intervention. As the game progresses, Cortana and the player learn about each other, and from each other. It is a relationship that helps both to grow. I think this is the most compelling vision of the future of work that Microsoft has cast, and I hope that future Envisioning projects lean further in that direction.

There should be less of a protagonist/antagonist dichotomy in our view of artificial intelligence. There should be less relegation of machines to mundane tasks. We must expect computers to be our equals. Only then can we stop humanity from sabotaging itself.