
The Growing Disruption Of Artificial Intelligence

Photo by Frank Wang
Artificial intelligence may be as disruptive as the computers used to create it once were, and it could be even bigger. Given the disruption that social media has proven to be, one has to wonder if we are fully prepared for the life altering consequences we are building for ourselves.

IBM has been a key player in the artificial intelligence arena for over two decades. Deep Blue was the company's first tour de force: in 1997, its team of developers received $100,000 for defeating world champion Garry Kasparov in a six-game chess match. That watershed moment has roots reaching back to 1981, when researchers at Bell Labs developed a machine that achieved Master status in chess, for which they were awarded $5,000. In 1988, researchers at Carnegie Mellon University were awarded $10,000 for creating a machine that achieved international master status. Deep Blue, however, was the first machine to beat the reigning world chess champion.

Google has entered the fray as well with its development of AlphaGo, a machine that can beat the best players in the world at the ancient game of Go. Beating humans at a board game is certainly an interesting feat for a machine, but on its own it is little more than a novelty. IBM, however, has used the knowledge gained from its endeavors to build Watson. Watson resembles an evolved search engine, but rather than scraping content from the web and categorizing it for retrieval, it functions quite differently.

Watson is a question-answering system capable of responding to questions posed in natural language, which means you can ask your question as if you were talking to a librarian rather than a computer. Watson is also able to extrapolate knowledge it may not have from its existing knowledge. A simple way to think of Watson is as a Google-like search engine that can automatically extend its depth of knowledge based on what it is asked to do. Watson can make decisions, where Google can only retrieve information it has previously indexed.

Development of AI has reached critical mass. What was once the science fiction of computer science has become one of its hottest fields. Recently, IBM Debater, an AI system, took on Harish Natarajan, the grand finalist at the 2016 World Debating Championships. IBM Debater did not win that challenge, but it has won against other human opponents.

Debate is a nuanced human endeavor; winning takes more than regurgitating facts. The facts must be presented in a logical way that also persuades people, that changes their minds. This is far more complex than a game of chess. Watson won handily on the game show Jeopardy! in 2011, but that was almost an exercise in simply one-upping Google's search engine. IBM Debater marks the entry of AI into highly organized thought once solely the domain of humans.

Artificial intelligence has advanced so rapidly in the last 20 years that we are no longer surprised to hear that AI is taking menial tasks off our proverbial plates. When we enter a destination into our smartphones and are forced to take a detour, we give no thought to the device automatically detecting that we changed course and adjusting our route accordingly. The growing concern is the entrusting of decisions to systems that we created but that we do not ultimately control.

Consider, for example, a self-driving car faced with the difficult decision of either killing a pedestrian or killing its passenger. Technology continues to radically reshape the human experience, and ethical considerations need to be part of the discussion. Social networks are a perfect example of a technology released into the world prematurely, before we had fully examined the ethical, social, and psychological ramifications of the functions they provide. We are still dealing with that poorly planned deployment to this day, nearly 20 years after social media reached critical mass. The disruptions to the human experience will be exponentially more severe with artificial intelligence, and we need to move far more cautiously than we have in the past.

--Jay E. blogging for


  1. This article shows how AI can be biased based on the data it consumes:


