Even if future chatbots are capable of producing accurate information on request, they will also be capable of producing false information when directed to. And either way, they are capable of producing an infinite stream of text that we will have to handle. So what happens when the internet becomes the Library of Babel?
— Lincoln Michel in Counter Craft
We live in a world of algorithms. I learned about algorithms as formulas for solving mathematical problems, but they appear everywhere from recipes to search engines. They are essentially sets of rules to follow. Today, they are best known for sorting through all the material available on the World Wide Web and presenting us with whatever the rules determine would interest us. That means filtering out other information, and the net result is that we are largely exposed to things that reinforce what we already believe, rather than to facts and ideas that might lead us to change our minds.
Many people (I almost typed “experts” but too many “experts” are no more knowledgeable than we are) believe that the growth of social media algorithms is largely responsible for the divisions in society today. People no longer listen to those on the other side of an issue, convinced by the stream of posts coming to them that they know the truth. Now, with chatbots relying on ever-more-powerful artificial intelligence, it is becoming even more difficult to discern what is true.
Yet AI can be very helpful. In covering stories, I now rely almost exclusively upon Otter, an app that records and transcribes conversations, interviews, and meetings, rather than taking notes and trying to decipher what I’ve written afterwards. Otter does a pretty good job of turning an audio recording into text, but sometimes, if the speaker is not talking loudly or slowly enough, the transcript delivers nonsensical or hysterically funny lines.
During the Winnisquam Regional School District’s annual meeting on March 25, my transcript of the discussions included the quote, “The word Northfield can kill you.” Normally, by listening to the recording, one can determine what was really said, but in this case, the speaker’s words were unintelligible except for the final part, “Northfield and Tilton.”
I can understand how the AI could “hear” that as “Northfield can kill you.” Years ago, when working at the Bristol Enterprise, a co-worker asked, “Why are you going to Europe?” I heard her question as, “Why are you on the earth?”
It can happen visually, as well. When my daughter, Inga, sent a photo of a blossom that appeared in her yard this past weekend and asked what plant it might be, we couldn’t quite decide whether it was a hyacinth or something else. At that stage, the blossom appeared somewhat crushed and fragile, making it hard to tell. I immediately thought of an app that is supposed to identify plants from their photos (and of another that can identify a bird from its song).
Apps can be very helpful, as long as we can double-check what they feed us.