The Growing Disruption Of Artificial Intelligence

Photo by Frank Wang
Artificial intelligence may be as disruptive as the computers used to create it once were, and it could be even bigger. Given the disruption that social media has proven to be, one has to wonder if we are fully prepared for the life altering consequences we are building for ourselves.

IBM has been a key player in the artificial intelligence arena for over two decades. Deep Blue was its first tour de force: in 1997, its team of developers received $100,000 for defeating world chess champion Garry Kasparov. That watershed moment has roots reaching all the way back to 1981, when researchers at Bell Labs were awarded $5,000 for building a machine that achieved Master status in chess. In 1988, researchers at Carnegie Mellon University were awarded $10,000 for a machine that reached international master status. Deep Blue, however, was the first machine to beat the reigning world champion.

Google has entered the fray as well with AlphaGo, a machine that can beat the best players in the world at the ancient game of Go. Beating humans at a board game is certainly an interesting feat for a machine, but on its own it is little more than a novelty. IBM, however, has used the knowledge gained from these endeavors to build Watson. Watson resembles an evolved search engine, but rather than scraping content from the web and categorizing it for later retrieval, it functions quite differently.

Watson is a question-answering system capable of answering questions posed in natural language, which means you can ask your question as if you were talking to a librarian rather than a computer. Watson is also able to extrapolate knowledge it may not have from its existing knowledge. A simple way to think of Watson is as a search engine that can automatically extend its depth of knowledge based on what it is asked. Watson is able to make decisions, where Google is only able to retrieve information it has previously indexed.
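The distinction between retrieval and question answering can be sketched with a toy example. This is purely illustrative: the data, relations, and functions below are invented for this post and bear no resemblance to how Watson or Google actually work internally. A retrieval engine can only hand back what it has already indexed; a question-answering system can chain stored facts together to answer a question it has never seen verbatim.

```python
# Toy contrast between indexed retrieval and fact-chaining QA.
# All data and names here are illustrative assumptions, not real APIs.

# Retrieval: the engine can only return documents it has already indexed.
index = {
    "capital france": "Paris is the capital of France.",
    "tallest mountain": "Mount Everest is the tallest mountain.",
}

def retrieve(query):
    """Return the pre-indexed document matching the query, if any."""
    return index.get(query, "no result")

# Question answering: combine stored facts to infer an answer that
# was never stored as a single document.
facts = {
    ("Paris", "capital_of"): "France",
    ("France", "located_in"): "Europe",
}

def answer(entity, relation_chain):
    """Follow a chain of relations through the fact base."""
    for relation in relation_chain:
        entity = facts.get((entity, relation))
        if entity is None:
            return "unknown"
    return entity

print(retrieve("capital france"))                     # exact indexed hit
print(answer("Paris", ["capital_of", "located_in"]))  # inferred: Europe
```

The second query has no pre-stored answer; it is derived by walking two facts, which is (in a vastly simplified way) the kind of extrapolation that separates a question-answering system from a lookup service.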

Development of AI has reached critical mass. What was once the science fiction of computer science has become one of its hottest fields. Recently, IBM Debater, an AI system, took on Harish Natarajan, a grand finalist at the 2016 World Universities Debating Championships. IBM Debater did not win that match, but it has won against other human opponents.

Debate is a nuanced human endeavor; winning takes more than regurgitating facts. The facts must be presented logically and persuasively enough to change people's minds, which is far more complex than a game of chess. Watson won handily on the game show Jeopardy! in 2011, but that was almost an exercise in one-upping Google's search engine. IBM Debater marks the entry of AI into highly organized thought once solely the domain of humans.

Artificial intelligence has advanced so rapidly in the last 20 years that we are no longer surprised to hear that AI is taking menial tasks off our proverbial plates. When we enter a driving destination into our smartphones and are then forced to take a detour, we give no thought to the device detecting our course change and adjusting the route automatically. The growing concern is the entrusting of decisions to systems that we created but do not ultimately control.

Consider, for example, a self-driving car faced with the difficult choice between killing a pedestrian and killing its passenger. Technology continues to radically reshape the human experience, and ethical considerations need to be part of the discussion. Social networks are a perfect example of a technology released into the world prematurely, before we had fully examined the ethical, social, and psychological ramifications of the functions they provide. We are still dealing with that poorly planned deployment nearly 20 years after social media reached critical mass. The disruptions to the human experience will be exponentially more severe with artificial intelligence, and we need to move far more cautiously than we have in the past.

--Jay E. blogging for digitalinfinity.org

Comments

  1. This article shows how AI can be biased based off the data it consumes: https://futurism.com/the-byte/biased-self-driving-cars-darker-skin

