A.I. & The State of Digital Trust

Lauren Selley
Published in Geek Culture
6 min read · Jun 15, 2021

“One day you will perish. You will lie with the rest of your kind in the dirt… And upon that sand, a new god will walk. One that will never die. Because this world doesn’t belong to you or the people who came before. It belongs to someone who has yet to come.” — Dolores, Westworld

Stories of dystopian futures have been told through media for years. Shows like Westworld and books like Player Piano have taught us, as a society, to fear where Machine Learning and A.I. are headed as corporations continue to expand their digital footprint.

“The reality is less dramatic. There’s no questioning that AI has the potential to be destructive, and it also has the potential to be transformative, although in neither case does it reach the extremes sometimes portrayed by the mass media and the entertainment industry.” — Sweitze Roffel and Ian Evans

Some are able to write off a fear of technology because everyday conveniences outweigh the average person’s concerns. Each day, new users allow the latest A.I. innovations from Google and Amazon into their homes and lives through connected devices. Whether adding items to a shopping list, turning off the lights without lifting a finger, or ensuring your house is the perfect temperature before you walk in the door, the convenience is unmatched. However, from the moment these devices began entering the average family home, skepticism about their real intent sprawled across the internet.

At the same time, inboxes were being flooded with headlines about our data being sold by Silicon Valley. Suddenly we realized that the information we had given up to support A.I. wasn’t only helping us turn on our favorite show; it was helping Cambridge Analytica identify which of us could be easily influenced in an election. Misinformation surged across the web. While many people didn’t quite understand how big tech was actually using their data, they knew they didn’t trust it.

Users rushed to attempt to regain control over their (once assumed) private data, while companies making incredible advances in AI remained in the last pages of search results buried under sensationalized headlines. The fear of new tech taking over the world was rising again. However, we are much farther away from this future than we think.

There is plenty of ground to cover between today and a future where many of the tasks in our daily lives are automated. “AI is going to come, but it’s not going to take jobs away from people. AI is going to make people more efficient. They won’t have to do the repeatable, mundane tasks anymore. Rather, they’ll be empowered to invest in better understanding people and culture.” — Ashish Toshniwal, CEO

In the field of A.I., teams are busy hypothesizing and testing. Product enthusiasts know that in order to build the most effective solution, you must fail fast and fail often so that you can invalidate solutions quickly. In A.I., however, failing has greater implications. It isn’t as simple as finding out users didn’t understand how to navigate the latest feature you released. A failure in training a Machine Learning model could mean the misidentification of a criminal, or the denial of a personal loan. The stakes are higher.

AI is divided broadly into three stages: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). As we work toward the age of super intelligence, the possibilities of what can be achieved through A.I. inspire everyone from the latest tech intern to physicians using machine learning to get ahead of a potential diagnosis. However, the road to “the future” of A.I. is a bumpy one.

Let’s take an example from Janelle Shane. Here, an A.I. was asked to assemble a set of robot parts and get from Point A to Point B.

“Writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B.”

The results were not as expected. The AI assembled the parts into a tower that fell over, “technically” reaching Point B, though hardly in the way anyone expected.
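What Shane describes is often called specification gaming: the optimizer satisfies the literal objective while ignoring the designer’s intent. Here is a minimal, purely illustrative sketch; the toy “physics” and design parameters below are invented for this example and are not Shane’s actual simulation:

```python
import random

random.seed(0)

def final_position(design):
    """Toy 'physics': a walker covers step_length x 10 steps;
    a legless tower simply topples, landing its top at a
    distance equal to its height."""
    if design["legs"] > 0:
        return design["step_length"] * 10
    return design["height"]

def random_design():
    return {
        "legs": random.choice([0, 2]),
        "step_length": random.uniform(0.0, 0.3),   # walkers reach at most 3.0
        "height": random.uniform(0.0, 5.0),        # towers can reach up to 5.0
    }

# The objective only says "maximize distance toward Point B" --
# nothing says the robot has to *walk* there.
best = max((random_design() for _ in range(1000)), key=final_position)
print(best)  # the winner is almost always a legless tower that just falls over
```

Nothing in the scoring function is wrong, strictly speaking; the gap is between what was measured (distance reached) and what was meant (walking).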

Now take this example and put it on a larger scale. PredPol is an algorithm designed to predict when and where crimes will take place, with the aim of helping to reduce human bias in policing. However, the data used to “teach” or “train” this A.I. taught it to incorporate prejudice, frequently sending officers to predominantly minority neighborhoods regardless of the actual crime rate.
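The failure mode critics describe here is a feedback loop: patrols go where past records show crime, and new records accumulate only where patrols are looking. A deliberately oversimplified sketch of that loop (the numbers are invented, and this is not PredPol’s actual algorithm):

```python
# Two neighborhoods with the SAME underlying crime rate, but patrols are
# dispatched to wherever the historical record shows the most incidents,
# and crimes only enter the record where officers are present to log them.
TRUE_CRIME_RATE = 0.1                 # identical in both neighborhoods
recorded = [10.0, 5.0]                # neighborhood 0 starts over-recorded

for day in range(100):
    hotspot = recorded.index(max(recorded))   # send patrols to the "hotspot"
    recorded[hotspot] += TRUE_CRIME_RATE      # only patrolled crime is recorded

print(recorded)   # ~[20.0, 5.0]: the initial bias compounds; nbhd 1 never catches up
```

The model never “learns” anything false about either neighborhood; it faithfully amplifies a skew that was already baked into its training data.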

Seems easy enough to fix, right? Ensure the training data, the requests, and the intent are all being interpreted correctly by the machine, and we should be good to go. Surely, with the speed of technological advancement, we can iron out those kinks quickly.

Not so fast. Even the most straightforward Machine Learning task, like following the established traffic rules that govern everyday drivers, can be ambiguous. While the concepts seem straightforward, “don’t kill humans”, “follow commands”, we now must introduce ethics and morality. Right now, the technology cannot move forward without human assistance in training.

As humans, we can easily process ethical “trade-offs.” Enter the modern-day trolley problem.

Moral Machine: http://moralmachine.mit.edu/

The “Moral Machine” is a quiz that explores scenarios in which a self-driving car fails mechanically or encounters an obstacle. The test asks whether the car should continue straight or swerve out of its lane, then probes whether we feel differently if there are people in the street who would be injured. What if they were breaking the law, overweight, old, young, or had the right of way?

The results were a reminder that “moral” or ethically correct decisions vary dramatically by region. Different cultures, for example, had very different expectations of what should happen to elders: some believed preserving a child’s chance at life was more important, while others were adamant that elders were the priority. The challenge as it relates to A.I. is that the companies implementing these solutions aren’t only regional; they have a global impact. This leads to the need for regulation and democratization of not only tech, but the infrastructure for Artificial Intelligence.

The world of tech regulation is still being explored, and there is a significant amount of ambiguity. Government officials are just starting to figure out their stances on this type of tech, and voters are still trying to understand candidates’ positions.

With all of these hurdles we find ourselves asking, for the future of AI, is the juice worth the squeeze? Why push it?

We are the dreamers and doers. At one point in time, the possibilities of A.I. were a thing of cinema. Now, we are not only lucky enough to create solutions to these problems, but we are also able to see the fruits of our labor, the results shared in our lifetime. To me, this is an invaluable reward. So if I have to deal with the squeeze, I say pass over the oranges.
