How algorithms are controlling your life

Algorithms are a black box.

We can see them at work in the world. We know they’re shaping outcomes all around us. But most of us have no idea what they are — or how we’re being influenced by them.

Algorithms are invisible pieces of code that tell a computer how to accomplish a specific task. Think of it as a recipe for a computer: An algorithm tells the computer what to do in order to produce a certain outcome. Every time you do a Google search or look at your Facebook feed or use GPS navigation in your car, you’re interacting with an algorithm.
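To make the recipe analogy concrete, here is a minimal sketch in Python of the kind of ranking step a search engine or feed might perform. Everything in it (the fields, the weights, the scoring rule) is invented for illustration; it is not the code of any real product.

```python
# A toy "recipe": given a list of articles and some search terms,
# score each article and return them best-first. The weights and
# fields below are made up purely to illustrate the idea.

def rank_articles(articles, query_terms):
    """Return articles ordered from most to least 'relevant'."""
    def score(article):
        # Step 1: count how many search terms appear in the title.
        matches = sum(term.lower() in article["title"].lower()
                      for term in query_terms)
        # Step 2: blend in popularity, weighted less than relevance.
        return matches * 10 + article["clicks"] * 0.01

    # Step 3: sort by score, highest first.
    return sorted(articles, key=score, reverse=True)


articles = [
    {"title": "How algorithms are controlling your life", "clicks": 5000},
    {"title": "A recipe for sourdough bread", "clicks": 20000},
]
print(rank_articles(articles, ["algorithms", "life"]))
```

The point is only that each step is explicit and mechanical: for the same inputs, the computer follows the recipe and produces the same result every time.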

A new book by Hannah Fry, a mathematician at University College London, argues that we shouldn’t think of algorithms themselves as either good or bad, but that we should be paying much more attention to the people programming them.

Algorithms are making hugely consequential decisions in our society on everything from medicine to transportation to welfare benefits to criminal justice and beyond. Yet the general public knows almost nothing about them, and even less about the engineers and coders who are creating them behind the scenes.

I reached out to Fry to talk about how algorithms are quietly changing the rules of human life and whether the benefits of algorithms ultimately outweigh the costs.

A lightly edited transcript of our conversation follows.

Sean Illing

How are algorithms changing human life?

Hannah Fry

In all sorts of ways, really. From what we choose to read and watch to who we choose to date, algorithms are increasingly playing a huge role. And it’s not just the obvious cases, like Google search algorithms or Amazon recommendation algorithms.

We’ve invited these algorithms into our courtrooms and our hospitals and our schools, and they’re making these tiny decisions on our behalf that are subtly shifting the way our society is operating.

Sean Illing

Do you think our trust in algorithms is misplaced? Are we making a mistake by handing over so much decision-making authority to these programs?

Hannah Fry

That’s a difficult question. We’ve got a really complicated relationship with machines. On the one hand, we sometimes do misplace our trust in them. We expect them to be almost godlike, to be so perfect that we will blindly follow them wherever they lead us.

But at the same time, we have a habit of dismissing an algorithm as soon as it is shown to be slightly flawed. So if Siri gets something wrong, or if our GPS app miscalculates the traffic, we think the whole machine is just rubbish. But that doesn’t make any sense.

Algorithms are not perfect, and they often contain the biases of the people who create them, but they’re still incredibly effective and they’ve made all of our lives a lot easier. So I think the right attitude is somewhere in the middle: We shouldn’t blindly trust algorithms, but we also shouldn’t dismiss them altogether.

Sean Illing

What advantages do we gain by relying so heavily on algorithms?

Hannah Fry

Humans are quite bad at a lot of things. We’re bad at being consistent. We’re bad at not being biased. We get tired and sloppy.

Algorithms possess none of those flaws. They’re incredibly consistent. They never get tired, and they’re absolutely precise. The problem is that algorithms don’t understand context or nuance. They don’t understand emotion and empathy in the way that humans do.

Sean Illing

Can you give me an example of an algorithm going disastrously wrong?

Hannah Fry

I write in the book about Christopher Drew Brooks, a 19-year-old man from Virginia who was convicted of the statutory rape of a 14-year-old girl. They’d had a consensual relationship, but she was underage and that’s illegal.

During sentencing, the judge in the case relied on an algorithm designed to make a prediction about how likely an individual is to go on to commit a crime if they’re released from jail. The algorithm assessed his risk of reoffending, and it determined that because he was such a young man and he was already committing sexual offenses, there was quite a high chance that he would continue in this life of crime. So it recommended that he be given 18 months in jail.

And maybe that’s fair. But it also demonstrates just how illogical these algorithms can be sometimes. Because it turned out that this particular algorithm places a lot of weight on the age of the offender — so if he had been 36 instead of 19, it would’ve deemed him a much lower threat. But in that case, he would’ve been 22 years older than the victim, and I think any reasonable person would consider that worse.

This is an example of how these perfectly logical algorithms can arrive at bizarre results. And in this case, you’d think that the judge would’ve exercised his discretion and overruled the algorithm, but he actually increased Brooks’s sentence, in part because of the algorithm.
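To see how a heavy weight on age can produce that kind of inversion, here is a purely hypothetical toy risk score in Python. It is not the tool used in the Brooks case, and the weights are invented; it only illustrates the arithmetic Fry is describing.

```python
# Hypothetical toy risk score (invented weights, not the actual tool):
# youth is penalised heavily, so a 19-year-old scores as far riskier
# than a 36-year-old assessed for the same offence.

def toy_risk_score(age, prior_sexual_offence):
    """Higher score = predicted higher chance of reoffending."""
    youth_weight = max(0, 30 - age) * 1.0       # heavy weight on being young
    offence_weight = 5.0 if prior_sexual_offence else 0.0
    return youth_weight + offence_weight


print(toy_risk_score(19, True))   # 16.0 -> flagged as high risk
print(toy_risk_score(36, True))   #  5.0 -> flagged as much lower risk
```

The score drops purely because the age field changes, even though in the second scenario any reasonable person would see the 22-year age gap as worse; the model simply has no way to represent that context.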

Sean Illing

Do you think the people creating these algorithms, the engineers at Google or Facebook or wherever, fully understand what they’re creating?

Hannah Fry

They’re starting to care a lot about the implications of this stuff. Facebook used to have the motto “move fast and break things,” and that was the attitude of much of the tech world. But the tide has shifted in the last couple of years. There’s been a wake-up call for a lot of these people as the unintended consequences of these creations have become much clearer.

Every social media platform, every algorithm that becomes part of our lives, is part of this massive unfolding social experiment. Billions of people around the world are interacting with these technologies, which is why the tiniest changes can have such a gigantic impact on all of humanity. And these companies are starting to recognize this and take it seriously.

Sean Illing

You say that algorithms themselves are neither good nor bad, but I want to push you on this a bit. Algorithms can produce unexpected outcomes, especially machine-learning algorithms that can program themselves. Since it’s impossible for us to anticipate all of these scenarios, can’t we say that some algorithms are bad, even if they weren’t designed to be?

Hannah Fry

That’s a good question. We have to think of these technologies, especially machine-learning and artificial intelligence, as more like the invention of electricity than the invention of the light bulb. By that I mean we don’t know how these things are going to be used and in what situations or what context.

But electricity in its own right isn’t good or bad — it’s just a tool that can be used in an infinite number of ways. Algorithms are like that, too. I haven’t come across an algorithm that was 100 percent bad or good. I think the context and everything around it is the thing that makes the difference.

Sean Illing

Do you worry that the proliferation of algorithms is eroding our ability to think and decide for ourselves?

Hannah Fry

There are places where that is clearly happening, where the role of humans has been sidelined. And that’s a really dangerous thing to allow to happen. But I also don’t think that it needs to be like that. Humans and machines don’t have to be opposed to one another. We have to work with machines, acknowledging that they are flawed, just as we are. And that they will make mistakes, just as we do.

We don’t have to create a world in which machines are telling us what to do or how to think, although we may very well end up in a world like that. I’d much prefer a world in which humans and machines, humans and algorithms, are partners.

Sean Illing

Do you believe that humans and artificial intelligence will eventually combine in ways that blur the distinction between the two?

Hannah Fry

It’s entirely possible, but we are a really, really, really long way away from that.

There is a project, for example, that’s been trying to replicate the brain of the C. elegans worm, which is a microscopic worm with something like 200 neurons in its brain — and we can’t do it. Even with the most sophisticated cutting-edge artificial intelligence, we’re nowhere near being able to simulate the brain of a teeny-tiny microscopic worm. So we’re galaxies away from simulating more complex animals, and even further away from replicating humans.

So these conversations are interesting, but they can also serve as a distraction from what’s going on right now. The rules and systems that govern our lives are changing all around us, and algorithms are a big part of that.

Sean Illing

Do we need a stronger regulatory framework for algorithms?

Hannah Fry

Absolutely. We’ve been living in the technological Wild West, where you can collect private data on people without their permission and sell it to advertisers. We’re turning people into products, and they don’t even realize it. And people can make any claims they want about what their algorithm can or can’t do, even if it’s absolute nonsense, and no one can really stop them from doing it.

And even if a particular algorithm works, there is no one assessing whether or not it is providing a net benefit or cost to society. There’s nobody doing any of those checks. We need an equivalent to the FDA, some agency that can protect the intellectual property of a company that comes up with an algorithm but also ensure that the public isn’t being harmed or violated in any way by it.

Sean Illing

At the end of the day, are algorithms solving more problems for human beings than they’re creating?

Hannah Fry

Yes, I think they’re solving more problems than they’re creating. I’m still mostly positive about this stuff. I’ve worked in this area for over a decade, and there are huge upsides to these technologies. Algorithms are being used to help prevent crimes and help doctors get more accurate cancer diagnoses, and in countless other ways.

All of these things are really, really positive steps forward for humanity. We just have to be careful in the way that we employ them. We can’t do it recklessly. We can’t just move fast, and we can’t break things.

Via: https://www.vox.com/technology/2018/10/1/17882340/how-algorithms-control-your-life-hannah-fry
