Demystifying and Democratizing AI

Laura B Fox | The Guru's Ghost
5 min read · Jan 21, 2023


Content Warning: This story contains a scene that could be interpreted as a suicide. Please proceed with caution if this is a sensitive topic for you.

Photo 137923138 / Pearl In Shell © Soňa Hamsová | Dreamstime.com

I asked my local chatbot to tell me a motivational story about someone taking a leap of faith. Then I sat back and watched the words unfurl on my screen.

It was a clear day in the spring, and I found myself standing at the edge of a cliff, looking down to admire the view. Feeling the warm breeze rustle my hair, I heard an unexpected voice behind me.

“Beautiful, isn’t it?”

I turned around to find my friend, who I hadn’t seen in years. I was taken aback by the sight of him, particularly the look of determination in his eyes. Every fiber of my being seemed to suggest that he had come to this place with a purpose, and I was curious to find out what it was.

“What brings you out here?” I asked, eager to know the answer.

My friend seemed to take a deep breath and let it out slowly before he began to tell me his story. He had been working in a job he didn’t enjoy for decades, and he was finally ready to make a change. The only problem was that he didn’t know what kind of job would be the right fit for him. He hadn’t gone to college, and he wasn’t sure if he should stay in the same field or start something completely new.

I listened patiently, feeling inspired by his bravery and commitment to start fresh. After some thoughtful consideration, he had decided to take a leap of faith and make a move. Without another word, he stepped forward and jumped off the edge of the cliff.

I watched in amazement as he descended through the air, confident that he was doing the right thing. I had never seen him so alive, which made me realize that I could do the same.

I stood there in silent contemplation, reflecting on the courage it had taken for my friend to take his life in his own hands and make a change. I knew I wanted to feel the same way and take my own leap of faith.

As the sun set and the sky grew darker, I took my first step towards a new beginning.

I showed this story to several friends. Some were horrified. There were comments like, “Wow. Not very good at disguising its intentions yet, is it?”

Some viewed the story through a spiritual lens and interpreted the leap as a death of the egoic self.

One friend noted that context was lacking and decided that the two characters had taken up an exciting new career as pearl divers.

The chatbot didn’t have any intentions, of course. It wasn’t trying to be spiritual, and it wasn’t trying to talk me into killing myself. It simply assembled a sequence of sentences in response to the prompt I gave it, and according to data it had been fed and patterns it had been trained to replicate. It likely hadn’t been informed that taking a leap of faith is a figure of speech, and so it had no reason to do what I expected, which was to write about a metaphorical leap like the career change the character seemed to be alluding to.
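For the technically curious, here is roughly what that assembling looks like. This is a minimal sketch using the small, open GPT-2 model from the Hugging Face transformers library (not whatever model my chatbot actually runs): the machine just picks one plausible next word at a time based on patterns it has seen, with no intentions anywhere in the loop.

```python
# Minimal sketch: generating text one token at a time with an open model (GPT-2).
# The model has no goals or idioms in mind; each step is a probability-weighted
# guess about what word tends to come next in text like this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Tell me a motivational story about someone taking a leap of faith."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,        # sample from the learned distribution
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and you’ll get a different continuation every time, which says something about how little intent is involved.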

Another friend pointed out that since the chatbot composes its answers from patterns in text gathered from the internet, and the internet is basically our collective consciousness these days, anything sinister in the story was nobody’s fault but our own. “It’s just reflecting back what we’ve given it,” she said.

That’s a somber thought, but it’s not the whole picture.

I asked Dr. Meltem Ballan, a technology executive who is passionate about making AI equitable and accessible for all, about the biggest challenges she sees. She demystified some things for me.

It turns out that AI tools aren’t simply fed a bunch of data and then left to their own devices. They have to be told what to do with it. And if you’ve ever tried to tell a computer what to do, you know that your instructions have to be very precise. Like any other computer, all your local chatbot really knows is zeros and ones.

So, feeding data to the machines and then training those machines on how to use that data are two different processes, and the results depend very much on the perspectives of the people who are involved.
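To make that distinction concrete, here is a toy sketch using scikit-learn. The sentences and labels below are invented for illustration, and no real chatbot is built this simply, but it shows the two separate steps: first someone gathers and labels data according to their own perspective, then the machine is given precise instructions for learning from it.

```python
# Toy sketch of "feeding data" vs. "training on it" -- invented examples only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Step 1: collect data and label it. The "right answer" is whatever the
# labeler says it is, which is where human perspective enters the picture.
texts = [
    "He took a leap of faith and changed careers.",
    "They finally quit the job they hated.",
    "She jumped off the cliff into the sea.",
    "He stepped off the edge without a word.",
]
labels = ["metaphorical", "metaphorical", "literal", "literal"]

# Step 2: give the machine precise instructions for using that data.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
classifier = LogisticRegression().fit(X, labels)

# The model now "knows" only what those four labeled examples taught it.
print(classifier.predict(vectorizer.transform(["He decided to take the leap."])))
```

With only four examples and one labeler, the model’s idea of what counts as metaphorical is exactly as narrow as the data it was handed.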

I asked Dr. Ballan what could be done to make AI more democratic, and she said that it’s a lot like government. If you want to be represented, get involved! Bots only know what their trainers know, so if all of the trainers represent one or two subsets of society, the bots aren’t going to know much. They aren’t going to be very useful to people who think, speak, and relate differently from the trainers.

How to get involved?

It turns out that just about anyone with a phone or computer and an internet connection can join the millions of people all over the world who are training AI tools every day. Clickworker is one platform that matches bots that need training with individuals who have a little free time and the inclination to make a few extra bucks. Appen, Swagbucks, and Microworkers are other popular platforms.

Most of the tasks involve things like watching a computer make a judgment call, for example watching it determine whether a face is smiling or frowning, and then telling it whether it chose correctly or not. You can also help train tools to recognize verbal input by recording yourself giving commands.
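As a rough illustration only (the class and function below are invented, not any platform’s actual interface), one of those judgment-call tasks boils down to recording a human verdict on each machine guess so it can be fed back in as training data.

```python
# Hypothetical sketch of a "did the machine get it right?" micro-task.
# Judgment and collect_feedback are invented names for illustration.
from dataclasses import dataclass

@dataclass
class Judgment:
    image_id: str
    model_guess: str        # e.g. "smiling" or "frowning"
    worker_says_correct: bool

def collect_feedback(model_guesses: dict) -> list:
    """Show each machine guess to a human worker and record their verdict."""
    feedback = []
    for image_id, guess in model_guesses.items():
        answer = input(f"Image {image_id}: the model says '{guess}'. Correct? (y/n) ")
        feedback.append(Judgment(image_id, guess, answer.strip().lower() == "y"))
    return feedback

if __name__ == "__main__":
    # The corrected judgments become fresh training examples, which is why it
    # matters who the workers are and how they read faces.
    print(collect_feedback({"img_001": "smiling", "img_002": "frowning"}))
```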

It’s important that the pool of workers completing these tasks accurately represents society as a whole. If neurodivergent people are underrepresented, for example, the bots aren’t going to be very good at communicating with people who don’t use facial expressions and vocal inflections the same way neurotypical people do.

The way data is collected for AI, according to Dr. Ballan, could also be more democratic. An impediment to this is the fact that so many people are hesitant to release their personal data to companies that are a) exploiting it for profit and b) not very transparent about what they’re going to do with it. I would be happy to contribute my data anonymously for pure research because that’s science and I love it. I’m not so excited about having my information extracted and sold so companies can hound me with more ads. I’m not sure what would be a good solution aside from better regulation.

So, it turns out that intelligent machines do not have sinister intentions toward us humans. While this imaginative take on technological developments makes for great entertainment, it’s not based on reality. Instead of having intentions of their own, these tools are built based on input and feedback from millions of people all over the world. The more diverse the pool of people creating AI, the more accurately it will reflect our own collective perceptions about ourselves.

Written by Laura B Fox | The Guru's Ghost

Ghostwriter, book coach, and off-grid goat farmer. Author of The Soul-Driven Author's Nonfiction Book Planning Guide. MA in Social Ecology and Anthropology
