
The Growing Disruption Of Artificial Intelligence

Photo by Frank Wang
Artificial intelligence may prove as disruptive as the computers used to create it once were, and perhaps more so. Given how disruptive social media has proven to be, one has to wonder whether we are fully prepared for the life-altering consequences we are building for ourselves.

IBM has been a key player in the artificial intelligence arena for over two decades. Deep Blue was its first tour de force: in 1997, its team of developers received $100,000 for defeating world champion Garry Kasparov at chess. That watershed moment has roots reaching all the way back to 1981, when researchers at Bell Labs developed a machine that achieved Master status in chess, for which they were awarded $5,000. In 1988, researchers at Carnegie Mellon University were awarded $10,000 for creating a machine that achieved International Master status. Deep Blue, however, was the first machine to beat the reigning world chess champion.

Google has entered the fray as well with its development of AlphaGo, a machine that can beat the best players in the world at the ancient game of Go. Playing a human game is certainly an interesting feat for a machine, but on its own it is merely a novelty. IBM, however, has used the knowledge gained from its endeavors to build Watson. Watson is like an evolved search engine, yet it functions quite differently: rather than scraping content from the web and categorizing it for retrieval, it answers questions directly.

Watson is a question-answering system capable of answering questions posed in natural language, which means you can ask your question as if you were talking to a librarian rather than a computer. Watson is also able to extrapolate knowledge it may not have from its existing knowledge. A simple way to think of Watson is as a Google-like search engine that can automatically extend its depth of knowledge based on what it is asked to do. Watson is able to make decisions, where Google is only able to retrieve information it has previously indexed.
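The distinction between retrieving indexed content and extrapolating new answers can be sketched in a few lines of toy code. This is purely illustrative, with invented facts and function names; it bears no relation to Watson's actual architecture.

```python
# Illustrative sketch only (not IBM's Watson pipeline): contrasting pure
# retrieval with simple inference over known facts.

# A retrieval system can only return documents it has already indexed.
INDEX = {
    "capital of france": "Paris is the capital of France.",
}

def retrieve(query):
    """Return a previously indexed document, or None if never indexed."""
    return INDEX.get(query.lower())

# A question-answering system can combine facts to answer questions
# it has never seen verbatim.
FACTS = {
    ("France", "capital"): "Paris",
    ("Paris", "population_millions"): 2.1,
}

def answer(entity, relation):
    """Answer directly, or chain two stored facts to derive a new one."""
    if (entity, relation) in FACTS:
        return FACTS[(entity, relation)]
    # Inference step: the population of a country's capital answers a
    # question that was never stored as a single fact.
    if relation == "capital_population_millions":
        capital = FACTS.get((entity, "capital"))
        if capital is not None:
            return FACTS.get((capital, "population_millions"))
    return None

print(retrieve("capital of germany"))                   # None: never indexed
print(answer("France", "capital_population_millions"))  # 2.1: chains two facts
```

The retrieval side fails on anything outside its index, while the question-answering side can produce an answer no one explicitly stored, which is the behavior the paragraph above attributes to Watson.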

Development of AI has reached critical mass. What was once the science fiction of computer science has become one of its hottest fields. Recently, IBM Debater, an AI system, took on Harish Natarajan, the grand finalist at the 2016 World Debating Championships. IBM Debater did not win that challenge, but it has won against other human opponents.

Debate is a nuanced human endeavor; winning takes more than regurgitating facts. The facts must be presented in a logical way that also persuades people, that changes people's minds. This is far more complex than a game of chess. Watson won handily on the game show Jeopardy! in 2011, but that was almost an exercise in simply one-upping Google's search engine. IBM Debater marks the entry of AI into highly organized thought once solely the domain of humans.

Artificial intelligence has advanced so rapidly in the last 20 years that we are no longer surprised to hear that AI is taking menial tasks off our proverbial plates. When we enter a destination into our smartphones and are forced to take a detour, we give no thought to the device automatically detecting that we changed course and adjusting our route accordingly. The growing concern is the entrusting of decisions to systems that we created but that we do not ultimately control.

Consider, for example, a self-driving car faced with the difficult decision of either killing a pedestrian or killing its passenger. Technology continues to radically reshape the human experience, and ethical considerations need to be part of the discussion. Social networks are a perfect example of a technology released into the world prematurely, before we had fully examined the ethical, social, and psychological ramifications of the functions they provide. We are still dealing with that poorly planned deployment today, nearly 20 years after social media reached critical mass. The disruptions to the human experience will be exponentially more severe with artificial intelligence, and we need to move far more cautiously than we have in the past.

--Jay E. blogging for digitalinfinity.org

Comments

  1. This article shows how AI can be biased based on the data it consumes: https://futurism.com/the-byte/biased-self-driving-cars-darker-skin
