How I Learned to Stop Worrying, Part 1: Algorithms Make Us More Human

An interesting commentary this week by BBC News’ Jane Wakefield on the increasing influence of algorithms in our daily lives has gotten a fair amount of play. And indeed, it’s a useful and thought-provoking piece, if for no other reason than that we probably can’t emphasize enough the importance that algorithms now have in our economy, society and daily lives.

But where Wakefield’s article falls short is in its vaguely menacing tone, patching together random examples of algorithms in use and the trial-and-error process of putting them to work effectively, all while implying that the extensive use of these algorithms will lead us to some unfortunate yet unspecified problem (my favorite part was her concern over an algorithm being used to gauge a potential movie’s marketability before the movie is made, as if Hollywood producers never made such cynical greenlight decisions before algorithms corrupted them …). More unfortunate still, she completely fails to acknowledge the truly amazing things we can now accomplish because of our ability to employ algorithms.

For every example of an algorithm that has wreaked havoc due to the unforeseen effects of its implementation, there have got to be at least 100 examples of algorithms that are making our world tick more effectively every day: the encryption needed for secure daily electronic transactions, the design of advanced composite materials and improved aerodynamics for more fuel-efficient cars, optimized telecommunications routing, the analysis of disease genetics, molecular drug design, trip, traffic and shipment routing, computer game design, even the algorithm that apportions the number of U.S. Representatives based on census data.
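That last example is concrete enough to sketch. Since 1941, House seats have been apportioned by the Huntington-Hill method: every state starts with one seat, and each remaining seat goes, one at a time, to the state with the highest priority value, its population divided by the square root of n(n+1), where n is the seats it already holds. A minimal Python sketch (the state names and populations here are made up for illustration):

```python
import heapq
import math

def apportion(populations, total_seats):
    """Huntington-Hill apportionment: every state gets one seat,
    then each remaining seat goes to the state with the highest
    priority value, population / sqrt(n * (n + 1))."""
    seats = {state: 1 for state in populations}
    # heapq is a min-heap, so store negated priorities for max behavior.
    heap = [(-pop / math.sqrt(2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(total_seats - len(populations)):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        n = seats[state]
        heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
    return seats

# Hypothetical mini-country: three states, ten seats.
print(apportion({'A': 1000, 'B': 500, 'C': 100}, 10))
```

The whole thing is a dozen lines, yet it quietly settles a politically charged question every ten years.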

In other words, smart people writing creative algorithms are the guts and magic that glue together much of the wonder and advancement in our modern life. And while it’s relatively easy to find and get freaked out by isolated examples of algorithms gone haywire, it’s also just as easy to overlook the vast numbers of unseen algorithms powering our world.

And yet, Wakefield somehow makes this seem like a sinister conspiracy, as if algorithms were secretly sentient beings slowly insinuating themselves into our lives in anticipation of the day they rise up and overthrow their human overlords. In reality, algorithms are the application of great math in modern technology to help solve problems we wouldn’t otherwise be able to solve. When an algorithm goes wrong in some unanticipated way, it might be amusing or it might be quite serious (e.g., the Flash Crash), but it’s not a sign of anything more sinister than a program with a bug. Of course, the higher the stakes (e.g., algorithmic trading systems that interact with one another and can move the market as a whole), the more important it is that we thoroughly test and deeply think through the algorithms we implement. No system, human or technological, ever operates without error, and there’s probably a strong argument that our algorithmically based systems operate with far fewer errors than human-controlled systems do.

The challenge, however, is that as humans we’re much more tolerant of human-based errors than we ever are of machine-based ones, even if those machines make far fewer errors. Take Google’s driverless car project. These prototypes are now out and running with astounding levels of safety and accuracy. And indeed, this is the point of the program itself: to avoid the huge number of human-caused auto deaths (the number one cause of death among young people) and save millions of lives by relying on machines that can react and make these kinds of decisions much more quickly and accurately than any human. But just one human death caused by the failure of one algorithm in one car could kill the whole program. Is that logical if the program saves many more lives, net, per year? No, but the illusion of human control and superiority is a powerful one.

In reality, however, where we get the most benefit is not from either human or algorithmic control alone; it’s in the combination of the two. It’s humans figuring out how and when and where and what algorithms to apply in new and creative ways to better our lives. It’s humans acting as a backstop to ensure algorithms are doing their jobs accurately, and catching the instances that require a lifetime of human experience, context and subtlety to truly understand. It’s why there will always be stock trades made by humans as well as machines. In fact, we employ these algorithmic backstops every day at FirstRain. For example, we have incredibly sophisticated text analytics algorithms that analyze the Business Web content we find and look for a category in our taxonomy to apply to a given article; if they don’t find one sufficiently descriptive, they suggest a new category to create. On the whole this works incredibly well, and it’s why our taxonomy is so unbelievably granular. But even still, we need a team in place that reviews these algorithmic suggestions and can do a human sanity check, lest a “#CharlieSheen” occasionally slip in unnoticed.
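The backstop pattern itself is simple to express in code. This is an illustrative sketch of the general idea, not FirstRain’s actual system: the classifier’s scores are trusted only above a confidence threshold, and everything else is routed to a human review queue. The category names and threshold here are invented for the example:

```python
def route_article(scores, threshold=0.8):
    """Given a dict of {category: confidence score} for one article,
    auto-apply the best category only if the algorithm is confident;
    otherwise route the article (with its best guess) to human review.
    Illustrative pattern only -- not FirstRain's production system."""
    best_category = max(scores, key=scores.get)
    if scores[best_category] >= threshold:
        return ("auto", best_category)
    return ("review", best_category)

# A confident call is applied automatically...
print(route_article({'semiconductors': 0.95, 'retail': 0.02}))
# ...while an ambiguous one lands in the human review queue.
print(route_article({'semiconductors': 0.40, 'retail': 0.35}))
```

The design choice worth noting is that the machine still does all the heavy lifting; humans spend their scarce attention only on the small slice of cases the algorithm itself flags as uncertain.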

Algorithms are tools, and in the end, it’s the use of tools that makes us fundamentally human. This is probably why I’m not overly worried about the implications of humans now using the Web to supplement our memories. From the beginning of time, our bodies have evolved along with the technologies we’ve developed to survive and thrive. I’m sure there was handwringing about the loss of body hair as we began to wear clothes, and doomsday projections as our jaw muscles shrank thanks to our use of fire to cook meat. We’re the animal that uses tools more than any other, and those tools change us the more we use them. And so as we weave these newest tools, these algorithms, into our daily lives in a million new and groundbreaking ways, let’s be sure we’re thoughtful about those implementations, and creative, and far-sighted, and humble, and maybe even a little grateful.

[Stay tuned for Part 2 next week, by my colleague David Cooke, on the big implications of algorithms in enabling ‘just-in-time’ content delivery]