“Move fast and break things.” It’s a common refrain in the tech sector and an almost religious mantra for the innovators, rebels, and disruptors working to permanently revolutionize the industry. The attitude – ask forgiveness, not permission – has been nearly ubiquitous for decades.
Until now. Faced with an onslaught of twenty-first century ethical dilemmas (What if a self-driving car hurts a person? What if an AI does?), global tech leaders have begun the messy process of reevaluating some of the basic moral principles and beliefs underlying the industry.
Locally, Harvard and MIT are pioneering the effort. This semester, for the first time, the two Massachusetts universities are jointly offering a course on “the ethics and regulation of artificial intelligence,” the New York Times reports. Other universities across the nation, Stanford notably among them, now have similar offerings.
“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” Laura Norén of New York University’s Center for Data Science, who began teaching a new data science ethics course this semester, told the Times.
That tech companies bear moral responsibility is an idea whose time has come. As big companies struggle with challenges ranging from fake news to cyber warfare, it’s clear that ethics can no longer exist on the fringes of tech decision making. Ethics must sit at the core.
While university courses alone certainly will not transform the industry overnight, their creation is an important step toward acknowledging and addressing what may well come to be seen as the greatest moral dilemma of the era.
The trajectory of technology is difficult to change. Hopefully, our philosophy towards it is not.
For information about the courses offered at Harvard, MIT, Stanford, and other U.S. universities, read the full piece, “Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It,” in the New York Times.