
Algorithms and the Human Inconsistency Problem


The other day, as I was perusing various news sources, I happened upon back-to-back articles from Psychology Today. At first glance these articles might not seem related, but I think they are. Let me start by talking about Facebook, though.

Way back in his Harvard days, so the legend holds, Mark Zuckerberg started Facemash, the precursor to Facebook, and shared it among students. It was met with public outcry for being sexist and for invading the privacy of fellow students, and it was eventually shut down, but it was also wildly popular in the short time it was live. Zuckerberg learned from that.

I want you to keep that story in mind as we look at a couple of articles:

First, Does My Algorithm Have a Mental Health Problem?

By training algorithms on human data, they learn our biases. One recent study led by Aylin Caliskan at Princeton University found that algorithms trained on the news learned racial and gender biases essentially overnight. As Caliskan noted: ‘Many people think machines are not biased. But machines are trained on human data. And humans are biased.’

This is a danger, mostly because what we train algorithms on is what humans actually do, not what we say we want to do. That disconnect matters, because we are, all of us, flawed. We say we hate click-bait articles, online trolls, and fake news, but when we react to them, the algorithm sees that and learns from it. We all claim to hate click-bait, but most of us probably click on it too.
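To make that feedback loop concrete, here’s a minimal, purely illustrative sketch (not any real platform’s code) of a ranker that learns only from clicks. The item names and numbers are made up; the point is that the ranker has no notion of what we say we want, only of what we did.

```python
# Illustrative only: a feed ranker that learns from behavior, not stated preferences.
from collections import defaultdict

class EngagementRanker:
    def __init__(self):
        self.impressions = defaultdict(int)
        self.clicks = defaultdict(int)

    def record(self, item_id, clicked):
        self.impressions[item_id] += 1
        if clicked:
            self.clicks[item_id] += 1

    def score(self, item_id):
        # Click-through rate with light smoothing; "quality" never enters into it.
        return (self.clicks[item_id] + 1) / (self.impressions[item_id] + 2)

    def rank(self, item_ids):
        return sorted(item_ids, key=self.score, reverse=True)

ranker = EngagementRanker()
for _ in range(100):
    ranker.record("thoughtful-essay", clicked=False)      # what we say we want
    ranker.record("you-wont-believe-this", clicked=True)  # what we actually click
print(ranker.rank(["thoughtful-essay", "you-wont-believe-this"]))
# ['you-wont-believe-this', 'thoughtful-essay']
```

Run it and the click-bait wins every time, because clicks are the only signal it ever sees.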

Of course, online it is relatively harmless. If the algorithm decides I want to see more click-bait, so be it. But what we do with these algorithms out in the real world is much more dangerous. Humans, in general, have a difficult time telling the difference between causation and correlation. We spend a lot of time, and article space, looking at groups and determining what they have in common. For example, people with diagnosed depression tend to spend less time outdoors. That doesn’t mean simply going outside would cure depression for all of them. It’s just something they have in common, a correlation.
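Here’s a toy example of how easily a correlation can show up without any causation behind it. The numbers are entirely invented: a hidden third factor (long work hours, say) drives both less time outdoors and a higher depression score, so the two end up strongly correlated even though neither causes the other.

```python
# Toy data, not real research: correlation driven entirely by a confounder.
import random

random.seed(0)
work_hours = [random.uniform(30, 70) for _ in range(1000)]
outdoor_hours = [max(0.0, 20 - 0.25 * w + random.gauss(0, 2)) for w in work_hours]
depression_score = [0.5 * w + random.gauss(0, 5) for w in work_hours]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly negative: less time outdoors goes with higher scores...
print(pearson(outdoor_hours, depression_score))
# ...but in this toy model, sending everyone outside changes nothing,
# because work_hours is doing all the work.
```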

Our human minds are pretty good at noticing patterns and finding correlations. Artificial Intelligence is much, much better at it.

This raises the question: do we allow AI to inform our decisions based on correlations that may be extremely spurious? This is no small question, because there are correlations, for example, between poverty and child abuse, between minorities and crime, and so on. If the AI is learning from these correlations and determining the course of action that should be taken, will it simply exacerbate the current problems we have with poor people and minorities being targeted by various enforcement agencies? Will government policy be based on AI decisions that are even more biased than we already are? It’s certainly possible. AI will pick up on our biases much faster than we do.

The second article that caught my eye, and which at first seems to have no connection at all, was Why Restaurants Are Detrimental To Your Mental Health.

Noisy restaurants are disconnecting. I seem to remember when the decibel level got really annoying. It was when some genius, or a gaggle of geniuses, decided that loud meant exciting. People were drawn to where the action is, and hubbub sounds like action. Right? You already know my answer.

The author then goes on to talk about how she can’t find anyone who likes the noise level in restaurants.

But we all should know by now that nothing at that level is done by accident. Restaurants, retail shops, casinos, almost everything is designed around data. It’s all been A/B tested to death. We say we want a quiet place to enjoy a meal, but we keep going to the noisy, exciting places, probably for fear of missing out on something others are taking part in, or fear of silent pauses in our conversations.

Now, is the data that points to loud = exciting wrong? No, it’s not. We do respond to the noise, we are curious about it, and we will check it out. I’m sure there’s a ton of A/B testing that shows exactly that behavior. The problem, however, is that if everyone runs the A/B test and it shows more people will do A, what happens to the people who actually did B? Where do they go? As we depend more and more on AI to decide what will appeal to the most people, what happens when everything is the same?
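A rough back-of-the-envelope sketch of that, with invented numbers: if 60% of diners prefer loud rooms and 40% prefer quiet ones, every venue that runs the same A/B test gets the same answer, ships the same winner, and the quiet 40% end up served by nobody.

```python
# Invented numbers, purely illustrative of "everyone runs the same A/B test".
import random

random.seed(1)
PREFER_LOUD = 0.6  # assumed preference split, not real survey data

def run_ab_test(sample_size=500):
    """Each sampled diner 'converts' on the variant they prefer; ship the winner."""
    loud_wins = sum(random.random() < PREFER_LOUD for _ in range(sample_size))
    return "loud" if loud_wins > sample_size - loud_wins else "quiet"

decisions = [run_ab_test() for _ in range(50)]  # 50 venues, 50 near-identical tests
print(decisions.count("loud"), "of 50 venues go loud")
print(f"Diners whose preference no venue now serves: {1 - PREFER_LOUD:.0%}")
```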

What happens when every website has identified enough of your traits, matched you with other people with similar traits, and shows you the same stuff everywhere you go?

We are still early in the AI era. I’m actually hopeful that AI will help us solve some of the world’s problems by being able to access, index, and analyze more data than we’ve ever had before, but we also need to be wary of its limits. We need to be very careful with the connections we find in big data, and not assume those connections are the same as causes. We need to be very careful not to lose ourselves in the data.

In short, I’m not worried about SkyNet; I’m worried about how humanity will deal with cold, hard data: how it will use it to justify decisions, and what that means for our individual freedoms.

And I’m worried that so many of us seem to act in ways that do not correspond to what we say, too. Then again, I suppose we’ve always been concerned about people who “doth protest too much”. AI is just making it more obvious how common that really is. 😉

