AI Could Lead to a Dystopian Future. Can We Ensure That Doesn’t Happen?
"On one side of the equation," says Sairah Ashman, "there is a very utopian vision of what artificial intelligence will bring us." Ashman is COO Global of Wolff Olins, a creative consulting firm. He recently gave a talk of reflection entitled Reproduction of God, a human being at the Collision conference in New Orleans, along with the creation of a related essay to spark debate.
After painting a quick picture of a utopian future filled with flying cars and self-healing pods, Ashman turns to a future we may fear.
"But on the other side of the equation, there is also a very dystopian view, which is that we will reach a stage where we will become redundant and where artificial intelligence and machines will take over the world.
How can we prevent a dystopian future?
The future does not simply happen to us, of course. Our future, especially as it relates to the advance of AI, depends on the decisions we make today about expectations, ethics, and regulation. The million-dollar question, however, is whether we are adequately prepared to handle these questions. And who should be addressing them as technology progresses?
"Historically we could have looked at the government," says Ashman in Playing God, Being Human. "But they are somewhat disabled, they do not move at the same pace as technology, they are bureaucratic institutions, they are different all over the world, they have different agendas, so maybe they will not give us the answers.
Like many people who think about our future with AI, such as Elon Musk and Bill Gates, Ashman advocates a more nuanced view: "I take a very optimistic view of what artificial intelligence could do for us in the future. We do not have to think of this as a black-and-white issue."
Still, big questions remain. Is the direction we take with AI something we all can agree upon, or at least express our input about? How can the folks developing AI ensure that it's not corruptible? If we are going to put our trust increasingly in computer intelligence, what happens if and when it gets hacked? The divide between a utopian and a dystopian future with advancing AI seems to rest on the thoughtfulness, or lack thereof, that we apply today.
"Technology is not our enemy. Technology is a useful servant, but it could also become a terrible master. Technology is a tool to be employed, not a purpose to employ us. How much of your humanness are you willing to surrender in order to tap into the convenience of those magical machines? The more we robotize the world, the less we govern ourselves." - Gerd Leonhard, author of Technology vs. Humanity: The Coming Clash Between Man and Machine
I reached out to Sairah Ashman to delve a little deeper into the issues she raised in her talk and to get her impression of the best steps forward. In particular, I was curious why she believes governments are ill-equipped to meet the challenges of technological advancement. Ashman pointed to three reasons why governments may not be best suited to the task:
THREE REASONS WHY GOVERNMENTS MAY NOT BE ABLE TO MANAGE THE QUESTIONS AI RAISES:
Governments struggle with speed and timing. The comparatively slow machinery of government may not be able to keep up with the pace at which new technologies are being created.
We live in a globalized society, so how do we coordinate all the moving parts? Government structures are typically bounded by borders, yet technological advances spread globally.
How high a priority is it for governments? Advancing AI may not get the focus it deserves.
"It's interesting that a lot of technology companies, without blame for their own I would say, are in a position where the decisions they make in one part of the planet appear in other parts very easily without necessarily realizing it. Desired. "- Sairah Ashman
If governments are ill-prepared, will Silicon Valley save us?
"It is increasingly thought that the answer will not necessarily come out of the Silicon Valley," says Ashman, when asked about the likely path to follow, and mentions his conversations with those in Silicon Valley who are taking matters concerning ethics and The impact Very seriously. In recent weeks, we have seen the launch of the OpenAI initiative along with Satya Nadella from Microsoft openly talking about preventing a dystopian future.
"The great thing about these technology companies and the fact that they are commercial," says Ashman, "is that they have to be very receptive."
Beyond the responsiveness that tech companies may have to show from a pure business standpoint, Ashman points to the power of community. If we cannot fully rely on our systems of government to think carefully about the future of AI, there are three questions we should be asking of any advanced technology:
Where does it come from?
Why is it produced?
How do we feel about that?
"I agree with Ashman's formulation of some of the big questions about artificial intelligence," says Don Heider, after seeing Ashman's Playing God, human being talk. Heider is the founder of the Center for Ethics and Digital Policy at Loyola University in Chicago.
"Large technology companies as they mature have reached a turning point where acceptance of the consequences they create should be a high priority." - Don Heider, Why Facebook Should Hire a Head of Ethics (Op-Ed for USA Today)
Heider has some additional questions he would like technology companies to think about: Will AI companies be transparent about what they are working on and how AI is being deployed? Who gets to program the AI? (Because AI will inevitably reflect the values of those who do the programming.)