
The Evolution Of Tech Culture


Photo by Skitterphoto
The culture associated with technology has a checkered past, but maybe not in the way you think. Before it became socially acceptable to tote your pocket supercomputer around, why was technology culture anti-social? Are we more social now, or less?

Ars Technica recently interviewed Clive Thompson about his upcoming book Coders: The Making of a New Tribe and the Remaking of the World. Thompson focuses specifically on the origins of programmer culture, and there are some interesting divergences from the culture as it is today. Programmers have long been stereotyped, but Thompson debunks many of those myths.

Rather than being purely anti-social, programmers are intensely focused problem solvers. They will spend many hours trying to fix something, and they are a rare breed equipped to handle the frustration that entails. They solve hard problems, despite that frustration, because it is what they enjoy doing. There is a cost, however, and it is the source of the anti-social perception: frustration often lets less desirable personality traits become dominant.

Programmers, according to Thompson, are good at dealing with frustration because they are chasing the redemption of solving hard problems. The disconnect occurs when they treat all problems as they treat technical problems. That lack of distinction causes programmers to lose sight of the social world, the world where they are dealing with people rather than computers.

As artificial intelligence ramps up, there is a lot of discussion around the integrity of the code that comprises its algorithms. Algorithms have come under scrutiny because the inherent biases of those who coded them have become clearer as we have all become more prolific consumers of code. Thompson points out that the origins of software development were female. In the early days of computing, hardware was where the men gravitated; software development, according to Thompson, was viewed as little more than clerical work. As the importance of software rose, the field shifted toward male dominance, and with it, male bias.
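To make the mechanism concrete, here is a minimal sketch of how bias enters through data rather than intent. The records and group names are hypothetical, invented purely for illustration; nothing here comes from Thompson's book.

```python
# Minimal sketch: bias enters through data, not malice.
# Hypothetical historical hiring records: (years_experience, was_hired).
# Group B was hired less often historically, for reasons unrelated to skill.
history = {
    "group_a": [(3, True), (4, True), (2, True), (5, True), (3, False)],
    "group_b": [(3, False), (4, False), (2, False), (5, True), (3, False)],
}

def hire_rate(records):
    """A naive 'model': predict the historical hire rate for the group."""
    return sum(hired for _, hired in records) / len(records)

for group, records in history.items():
    print(f"{group}: predicted hire probability = {hire_rate(records):.0%}")

# Output: group_a is favored 80% to 20%, even though both groups have
# identical experience. The 'algorithm' simply encodes its training data.
```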

To the chagrin of many, the role of women in computer science has greatly diminished since the early days of Grace Hopper. As more men entered the field, the notion of being a good culture fit took hold, displacing merit-based measures such as organization and rationality. The influx of men into software development shifted the culture toward the stereotype: young, male, gifted, and white. This is slowly changing; being gifted is still required, but the other characteristics are proving harmful in light of the growing scrutiny of code, specifically artificial intelligence algorithms.

To be clear, there is nothing wrong with being a young, white, male software developer, but that audience is not the only one affected by software. Perspectives and demographics beyond those of a program's creators need to be considered during its creation.

The most fertile ground for examining failed algorithms is currently social media. Writing on Medium, Michael K. Spencer investigates how the algorithms of social media have led to its demise. The logic behind Spencer's argument is now all too familiar: social media is driven by engagement, and engagement drives the advertising that generates profits. Engagement depends upon algorithms arousing our emotions, which usually means leaving us shocked, angry, or sad.
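To illustrate that logic, here is a toy sketch of an engagement-optimized feed ranker. The weights and emotion scores are hypothetical; no real platform's algorithm is shown, only the incentive structure Spencer describes.

```python
# Toy sketch of an engagement-optimized feed ranker (hypothetical weights;
# this is not any real platform's code).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shock: float    # 0..1, how shocking or outrage-inducing
    anger: float    # 0..1
    sadness: float  # 0..1
    warmth: float   # 0..1, pleasant but low-arousal content

def engagement_score(post: Post) -> float:
    # High-arousal emotions keep people scrolling (and seeing ads),
    # so they get the largest weights; calm content is under-rewarded.
    return 3.0 * post.shock + 2.5 * post.anger + 1.5 * post.sadness + 0.5 * post.warmth

feed = [
    Post("Outrageous take on the news", shock=0.9, anger=0.8, sadness=0.1, warmth=0.0),
    Post("Photos from a family picnic", shock=0.0, anger=0.0, sadness=0.0, warmth=0.9),
]

# The outrage post ranks far above the picnic, by design.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):5.2f}  {post.text}")
```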

It is true that many people have limited interaction with social media, using it only to keep up with family and friends. However, Spencer argues that social media depersonalizes human interaction and that substituting it for real human interaction is a bad choice. To further this point, Spencer links to another article, on healthline.com, which contends that increased social media usage leads to more people being sad and lonely. Why social media use is not necessarily positive is a difficult question, but in investigating it we have adopted the same mindset as the programmers who built it. Spencer points the finger specifically at advertising and western social media apps, but I do not think those are the only causes.

The difficult problem social media is trying to solve is how people can stay engaged and connected to an ever-expanding world. By creating an efficient way for humans to manage an ever-growing set of roles, it necessarily reduced our actual involvement in each role. The trade-off was supposed to be a reduced but still meaningful involvement that maintained the authenticity of interaction. Clearly, that has not held up. We sought a technical answer to a social problem, and I am not the first to say it isn't going well. Social media has failed so badly, Spencer argues, that people are leaving it in droves for more traditional, video-based interaction.

Are other areas of technology susceptible to the same maladies as social media? For insight, consider another area where human beings have rich interactions with one another. Linux is the largest open-source project in the world, with thousands of contributors, each participating socially as well as technically. Pulling all of those contributions into one cohesive project is a daunting task. When a computer encounters unnecessary instructions, it casts them aside without regard; Linux development has been criticized for treating people in much the same way, which predictably created problems.

Linus Torvalds, the creator of Linux, has in the recent past been criticized for his harsh handling of contributors to his project. So harsh were his interactions that they pushed many people away from contributing. Ultimately, Torvalds took a break from his role to reassess how he could better handle his interactions with the other humans on his project.

Torvalds recently spoke to Linux Journal, twenty-five years after his first interview with the publication. He is now able to reflect on the past, which offers valuable insight into how the culture has changed. He looks back on his old ways, such as declaring that his goal was "world domination", and describes how he has matured past them. He acknowledges how his role has changed and how he has changed with it. Torvalds has also expressed regret over his previous style of management and says he aims to do better in the future. As Torvalds changes, I wonder about the similar bridges the rest of us have yet to cross in our own digital lives. The question is not if we will all get to such a point, but when.

While we privately grapple with how to conduct our digital lives in better ways, we are all publicly grappling with advances in technology and the challenges they bring. Artificial intelligence offers the possibility of enormous advancement for humanity, but also enormous responsibility. Heavily reliant on algorithms, artificial intelligence is created by software developers and is subject to the same traits as the developers themselves. Traits like being organized and rational are good much of the time; biases and ruthless efficiency in problem solving are often far less beneficial.

Fortunately, humanity seems to be learning from the past. The European Union recently published a set of guidelines on how stakeholders should develop ethical applications of artificial intelligence. The EU convened a group of 52 experts to derive seven key requirements they think AI systems should meet in the future:

  • Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
  • Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
  • Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
  • Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
  • Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
  • Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and "enhance positive social change."
  • Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.

These requirements are somewhat subjective, but the framework is far superior to not having one at all. Efforts such as this are evolutionary; at least now we have a starting point. Software developers are often a surprisingly refreshing group of people, but their creations must not be immune from scrutiny, especially as software becomes an ever larger portion of our lives. I am happy to see ethical considerations becoming part of the development process, though it is unfortunate that we had to learn from the rather notorious failures of the recent past. Code exists only to do what it was programmed to do; it is up to humans to decide how code executes, what it executes upon, and why it generates the results it does. Much like psychology evolved to incorporate human review boards for experiments, computer science needs an equivalent for code.
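As a thought experiment, here is a minimal sketch of what such a review gate might look like, loosely modeled on the EU's seven requirements above. The sign-off flow and function names are hypothetical, not an official or standard implementation.

```python
# Minimal sketch of a pre-deployment ethics gate, loosely modeled on the
# EU's seven requirements. The checklist flow is hypothetical, invented
# here for illustration.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination, and fairness",
    "environmental and societal well-being",
    "accountability",
]

def review(signoffs: dict) -> bool:
    """Block release until every requirement has a reviewer sign-off."""
    missing = [r for r in REQUIREMENTS if not signoffs.get(r)]
    for r in missing:
        print(f"BLOCKED: no sign-off for '{r}'")
    return not missing

# Example: a system with everything signed off except transparency.
signoffs = {r: True for r in REQUIREMENTS}
signoffs["transparency"] = False
print("release approved" if review(signoffs) else "release rejected")
```

The point is not the code itself but the discipline it encodes: a release cannot proceed until a human has attested to each requirement, the same role a review board plays for experiments.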

--Jay E. blogging for digitalinfinity.org
