Author: Brian Leefeldt

  • Rapid technological advances raise numerous ethical issues

    Editor’s Note: This article is part of a larger series of Q&As that originated in the future-focused UD Magazine. Additional topics discussed by College of Arts and Sciences faculty members and alumni include self-driving vehicles, dystopian views of the future in pop culture, and the ability to distinguish between real and fake news. To see these and other views of the future, please visit the Envisioning the Future website.

    Industries currently “throw technologies over the wall” and wait for the consequences to accumulate. The ethical questions change constantly, as the technologies do, but clearly new products will continue to shape our lives. We can ask: Does this product make us safer? Does it threaten privacy? Will society as a whole benefit or be harmed?

    Perhaps it’s not so much the questions we ask, but the people and groups who bother to ask them. A promising multidisciplinary conversation, known as “ethics of design,” has arisen at the intersection of engineering and philosophy. Here, teams of experts try to anticipate the problems a new technology may create so they can design around them, or avoid them altogether. Of course, we might also expect consumers to care about the ethics of new technologies, along with lawyers, insurers, health-care providers and others.

    For instance, in the last few years I’ve spoken at two international conferences focused on driverless cars. Law professors, artificial intelligence experts, transportation planners, sociologists, philosophers and others from academia and industry discussed everything from “Can we program ethics into an automated vehicle?” to “How do we insure a vehicle when there’s not a person at the wheel?” We must have these important conversations before large-scale technological changes take hold.

    Article by Tom Powers, associate professor of philosophy and director of UD’s Center for Science, Ethics and Public Policy; illustration by Kailey Whitman

  • ETHICS ON AUTOPILOT

    UD’s Powers helps industry, policymakers consider values in autonomous-vehicle debates

    It sounds great to many people: Jump in a car, push a few buttons and – voila! – it takes you wherever you tell it to go while you sit back, catch up on the news, play a game or take a nap.

    But it turns out that self-driving vehicles have a much more complicated side than that push-button scenario might suggest.

    And while a state-of-the-art global positioning system (GPS) will certainly be standard technology on such vehicles, University of Delaware philosopher and ethicist Tom Powers wants something else in the mix: a moral compass.

    That’s a tall order, and one that lags far behind the fast-moving technology. But it’s mission-critical: the way vehicles are programmed to reach a destination and respond to changing conditions may have life-and-death implications.

    “The vehicles themselves will be agents,” Powers, director of UD’s Center for Science, Ethics and Public Policy, said during a recent UD Scholar in the Library seminar. “We want to purposely think about that. If they are agents in a moral sense, with assumption of decision-making capability, it is no longer human beings but mechanisms that will be able to make decisions that we consider morally loaded.”

    Such questions – and many others – are drawing new audiences to Powers’ lectures. In addition to philosophers, ethicists and other researchers, Powers is talking to transportation officials, urban planners, insurance company representatives, lawyers and automakers.

    Some automakers – Volvo, for example – already are moving full speed ahead to develop and introduce such vehicles. On its website, Volvo says it believes its first unsupervised autonomous vehicles will reach the market by 2021.

    Ford Motor is pointing to a 2021 date, too, and recently announced it had reached a $1 billion deal for a robotics startup as it continues development of a “virtual driver” system.

    Steve Dellenback, executive director of the Automation and Data Systems Division at Southwest Research Institute in San Antonio, Texas, told an audience at the 2016 National Association of Science Writers conference that he doubts such vehicles will be a routine presence on public roadways anytime soon.

    Dellenback, who has been working with autonomous vehicle technology for more than a decade, said some farms are using driverless vehicles and the military is exploring possibilities. But there are many challenges, including cybersecurity, the impact of weather conditions and how to manage unmapped areas and unconventional terrain.

    The moral dilemma

    Powers wants all concerned to be thinking about what these vehicles will mean for safety, freedom, equity and sustainability.

    Automation is not a new phenomenon, of course. Automatic transmissions, anti-lock brakes and cruise control are all driving functions that have evolved from fully manual operation to increasing degrees of automation.

    “What we’re talking about now is a degree of automation to the point where human beings aren’t doing anything at all,” Powers said. “… There is a moral dilemma that must be taken into account when we design these cars.”

    What is the dilemma? Philosophers and ethicists have long debated the “trolley problem,” which asks what you ought to do if you were at the controls of a track switch and saw a runaway trolley facing two unavoidably fatal outcomes. Do nothing and the trolley kills five people unable to escape on the tracks ahead. Pull the switch and the trolley kills one person on the side track. Pulling the switch seems to be the (mathematically) humane solution, but that one person wouldn’t have died without your intervention. And what if that one person is your own child?

    Crash-avoidance technology is now included in some new vehicles, alerting drivers if they drift into another lane or are headed toward an object. But evasive maneuvers work best if all nearby vehicles have similar capabilities. And crash-avoidance algorithms will have to face something like the “trolley problem” in at least some cases of evasion.

    Many variables can be addressed in programming, but how are the values of specific options calculated? And who contributes to those decisions?
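
    To make those questions concrete, here is a minimal sketch, in Python, of how a planner might score candidate evasive maneuvers by weighted expected harm. Everything in it is a hypothetical assumption rather than any automaker’s actual algorithm: the maneuver names, the harm estimates and especially the occupant_weight parameter, which stands in for exactly the kind of value judgment a design team would have to make.

    # Hypothetical sketch: choosing among evasive maneuvers by weighted
    # expected harm. All maneuvers, numbers and weights are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_harm: float  # expected harm to the vehicle's occupants (0 to 1)
        external_harm: float  # expected harm to pedestrians and others (0 to 1)

    def choose_maneuver(options, occupant_weight=1.0):
        # occupant_weight encodes how much more (or less) the occupants'
        # safety counts than everyone else's: a morally loaded choice.
        def total_harm(m):
            return occupant_weight * m.occupant_harm + m.external_harm
        return min(options, key=total_harm)

    options = [
        Maneuver("brake hard in lane", occupant_harm=0.3, external_harm=0.2),
        Maneuver("swerve toward the crowd", occupant_harm=0.1, external_harm=0.6),
    ]
    print(choose_maneuver(options).name)  # prints "brake hard in lane"

    Even this toy version makes the hidden decisions visible: someone must supply the harm estimates, and someone must choose the weight.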

    In the January issue of Prism, a monthly publication of the American Society for Engineering Education, Aditya Johri of George Mason University asks what role engineers, designers and consumers should play.

    “Machines can learn from their users, change their functionality, and in turn change how users respond,” Johri writes. “… Now that actions are programmable, should it be the job of the engineers to do so? Should designers be made to test and use their inventions before unleashing them onto the public? Should users be involved more in the design?”

    If vehicles are programmed to follow the rules of the road and never cross a double yellow line, for example, what happens if there is an obstruction or a perilous situation ahead that the vehicle cannot get around otherwise?
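
    One way engineers frame that tension is to treat a traffic rule either as an absolute constraint or as a penalty that a sufficiently dangerous situation can outweigh. The short Python sketch below, with purely illustrative numbers, shows the soft-constraint version, in which the double yellow line yields once staying in lane becomes risky enough.

    # Hypothetical: the double-yellow rule as a soft constraint rather
    # than an absolute one. Penalty and risk values are illustrative.
    CROSSING_PENALTY = 0.5  # cost assigned to violating the double yellow

    def path_cost(collision_risk, crosses_double_yellow):
        penalty = CROSSING_PENALTY if crosses_double_yellow else 0.0
        return collision_risk + penalty

    # Staying in lane means striking the obstruction; crossing avoids it.
    stay = path_cost(collision_risk=0.9, crosses_double_yellow=False)
    cross = path_cost(collision_risk=0.05, crosses_double_yellow=True)
    print("cross" if cross < stay else "stay")  # prints "cross": the rule yields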

    And how could this programming be used for marketing purposes? What could happen, for example, if an automaker promises consumers that its vehicle will protect itself over all other options? Something like: “Your family, above all else.” Could that mean the vehicle opts to drive over seven people to avoid a fender-bender?
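
    In terms of the hypothetical maneuver-scoring sketch above, “your family, above all else” is simply a large occupant_weight, and it is enough to flip the outcome:

    # Reusing choose_maneuver and options from the earlier sketch: with
    # occupant safety weighted ten to one, the planner now picks the
    # maneuver that endangers bystanders to spare the occupants.
    print(choose_maneuver(options, occupant_weight=10.0).name)
    # prints "swerve toward the crowd"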

    And who is responsible for that decision? Who will stand before the judge? The programmer? How many were involved in the design of that software and what roles did they play?

    “There is moral complexity in these crash decisions,” Powers said.

    Autonomous vehicles will also require restructuring of highway systems and new accommodations for bicycles and pedestrians, and UD researchers are already consulting with the state Department of Transportation on those changes.

    It’s important to think about these things sooner rather than later, Powers said.

    “What values can we support or institute through information technology and what values might be left behind?”

    The answer, he said, may be waiting for us on the highways.