Data is the most valuable commodity of the 21st century, and algorithms are what transform data into information. Algorithms have become like a trusted friend whose recommendations we seek and follow. What is less understood is how these pieces of code derive such useful information for us; that inner working remains unseen to many. An algorithm is ultimately only as good as the data fed into it, and we are all feeding vast amounts of data into code we did not author, do not control, and see only through its outputs.
The convenience provided by algorithms is certainly welcome, but according to a recent Pew Research Center report, the public is far less welcoming when algorithms are used for decisions that can be life-changing. Algorithms now do far more than recommend which media to consume. There is an innate desire for human judgment in decisions that could dictate, for example, whether we are being honest, whether we are likely to commit a crime, or whether we are an acceptable credit risk. Trusting code with such a decision is met with a high degree of skepticism, especially when we ask it to account for something uniquely human, like empathy.
Bias is a major concern when evaluating the reach of algorithms. According to the Pew Research Center report, 58% of people surveyed felt that algorithms will always reflect some level of human bias. This speaks to the fairly narrow demographic range of the humans who create them: algorithm creators are overwhelmingly male, and typically white or Asian.
Presumably without knowing this nuance, a majority of those surveyed found it unacceptable to use algorithms in all four scenarios presented in the study. Criminal risk assessments, resume screening of job applicants, video analysis of job interviews, and alternative personal finance scores were each deemed unacceptable by more than 50% of respondents. Several themes emerged during the study regarding why they were viewed as unacceptable: privacy violations, lack of fairness, removal of the human element, and code's inability to capture sufficient nuance for circumstantial factors were all cited as reasons in the Pew Research study.
Social media is perhaps the most fertile ground for assessing the value of algorithms, and the assessment does not look promising. Social media platforms, according to the Pew Research Center, seek to drive people towards content that is engaging. Engaging content, however, may not be quality content; it could simply be content that makes users "...angry, inflames their emotions or otherwise serves as intellectual junk food."
Another facet of social media algorithms that further shows their flaws is how users become locked into an echo chamber of people they already know, who are likely of similar demographics and who share similar world views. Algorithms, in this scenario, exploit what is called confirmation bias: we seek to have our current beliefs reinforced rather than seeing things in a new light and perhaps changing our minds. When an algorithm is ultimately designed for a commercial purpose, in this case to keep us engaged with social media, its trustworthiness is compromised.
Facebook is no stranger to bad press these days, unless it's earnings season. The value Facebook provides society, beyond its balance sheet, has come into question for myriad reasons. The social media giant was recently reprimanded by Apple for running a Virtual Private Network (VPN) app that vastly overreached in the amount of personal information it harvested for Facebook's algorithms. In exchange for handing over data about everything they did on their smartphones, far beyond their activity within Facebook itself, users as young as 13 were paid all of twenty dollars per month. Once discovered, public reaction was swift and severe, and the software has since been shuttered entirely.
This example demonstrates how valuable data really is, and how crucial it is to algorithms. Facebook is an enterprise entirely dependent on a consistent flow of high-quality data about its users, and it will go to great lengths to sustain that flow. In yet another recently surfaced instance, third-party smartphone apps were found to report data to Facebook even when no Facebook account is present on the device, without the user's knowledge or consent. Even without any direct connection to Facebook, you are likely still feeding your data into its algorithms.
Ironically, the decision to use algorithms is as nuanced as the decisions we are entrusting to them. Simply opting out is not really possible, and even if it were, it would be a huge step backwards. Instead, we must take ownership of our information and guard it as much as possible. There are many things we can opt out of without handicapping the advantages of an algorithmic life. For example, we can opt out of giant data brokers like Acxiom, and we should. The notion that our data is our property is already gaining a foothold, and it is my hope that this notion will soon become widespread.
--Jay E. blogging for digitalinfinity.org