Algorithmic Trust

People today entrust decisions to algorithms more than ever before.

We could ask for a recommendation or do our own research, but it’s almost always easier to pick from a list of suggestions. If those suggestions work out, we build trust in their source over time.

Unlike a genuine recommendation from a real person, suggestions can be easily automated.

That’s probably okay when the stakes are low, but not for matters of consequence.

A problem arises when a pattern of accepting good suggestions from an algorithm turns into a habit of blindly accepting suggestions. Even when the feed serves up duds that lead to disappointment, we still scroll on in hopes that the next one will be better. They must get better. They used to be better.

You wouldn’t do that if you got a bad recommendation from a server at a restaurant or a sommelier at a wine shop. You might not go back to that person for a recommendation, or better, you might offer them feedback so the next recommendation is more suited to your tastes.

Algorithms eventually run out of home-run hits to suggest, and when they do, they seldom tell you that their suggestions have turned to garbage. You might not come back if they disclosed this. Instead, your muscle memory of scrolling, or your reliance on the algorithm, keeps you engaged.

We could demand better from the implementation of any given algorithm, and if the rest of the community of users agrees with us, the algorithm’s operator might make a change or be more transparent. But that’s not likely to happen, at least not yet.

All of this sets us up for a battle ahead to understand and reclaim more individual agency over the way algorithms impact our lives. How that plays out is being written right now, in what will undoubtedly be a pivotal moment in the history of computing.
