Using or Abusing Technology

There has been a lot of attention lately on what companies are doing with their technology and products. Specifically, there have been several instances of high-tech employees revolting against their own companies for selling systems used by ICE (Immigration and Customs Enforcement), because ICE has been separating families in detention centers. And this isn’t the first time technologists have expressed concern over how the technology they create gets used.

Current attention is mostly on law enforcement agencies. But this sentiment has played a role in other situations, like GoDaddy’s shutdown of a neo-Nazi website. Before that it was a site called “Rate my Policeman” and some pornography sites. As the world’s largest domain registrar, GoDaddy is in a unique position to shut down objectionable websites.

The question then becomes: What is objectionable and potentially illicit?

The negative sentiment toward law enforcement among Amazon workers is one example: facial recognition technology is being sold for what those workers assert are “evil” purposes, leading to a “big brother” society. Google and Microsoft employees have also publicly said they want their respective employers to stop selling tech to ICE and the Department of Defense, which I’m sure are large contracts.

Satya Nadella, Microsoft’s CEO, sent an internal memo to employees and then posted it publicly. In his memo he writes:

    “I want to be clear: Microsoft is not working with the U.S. government on any projects related to separating children from their families at the border.”

The Role of Machine Learning and Artificial Intelligence

The ability to see patterns in seemingly unrelated data and find actionable insights in that data has enabled some great things: advances in healthcare, supply chains, insurance, manufacturing, and other industries. Fewer people are dying of sepsis, for example, due to machine learning.

The positive stories of applying machine learning are truly remarkable, but the application of these technologies to perceived negative causes raises a very important question: Where does freedom of speech begin and end with a hosting or services provider? Sometimes this is covered in the EULA (End User License Agreement) that customers don’t read when purchasing software and other services. So, it turns out companies do get to make the judgment call about whom to serve with their products.

The next few years of software development will focus on AI and machine learning. Creating these technologies and applying them to the “right” problems is what many of the vocal tech employees are asking for. They don’t want their technology or their companies to be used by others they perceive as morally unacceptable.

But who decides where acceptable and unacceptable application of technology begins and ends?

Now What?

These are some harsh realities to deal with. Here are some of my observations on the matter and an opinion or two.

  1. The militarization of our law enforcement agencies causes a lack of trust among the American people. (I’ll just leave this one right here)
  2. Workers have the strength and will to influence their company’s behavior by speaking out. This is in large part due to company culture.
  3. There is an ambiguous line between shutting down a website or cloud service for its message or usage and leaving customers to pursue their own causes.
  4. Millennial voices have found ways to be heard, and will amplify their messages in the coming years.
  5. The applications of Machine Learning and AI that help make the world better are more prevalent than the applications that make our society worse off.
  6. There will be an ironic application of Machine Learning and AI to root out the customers tech companies don’t want to serve (a rough sketch of what that might look like follows this list).
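To make point 6 concrete: the sketch below is purely hypothetical, assuming a company classifies short descriptions of customer use cases against its acceptable-use policy. The training examples, labels, and model choice are all invented for illustration; a real enforcement pipeline would be far more involved and would include human review.

    # Hypothetical sketch: a tiny classifier a company might train to flag
    # customers whose stated use cases violate its (invented) acceptable-use
    # policy. All data and labels below are made up for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training examples: descriptions of how customers use the
    # platform, labeled 1 if the usage violates the hypothetical EULA.
    use_cases = [
        "hosting a community recipe-sharing forum",
        "selling facial recognition to surveil protesters",
        "running an internal sales dashboard",
        "publishing doxxing lists of private citizens",
    ]
    violates_eula = [0, 1, 0, 1]

    # TF-IDF features feeding a logistic regression: about the simplest
    # plausible "root out customers we don't want to serve" model.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(use_cases, violates_eula)

    # Score a new customer's stated use case.
    new_case = ["tracking individuals with facial recognition"]
    print(model.predict_proba(new_case)[0][1])  # estimated violation probability

The irony, of course, is that this is the same pattern-finding machinery the protesting employees object to, just pointed back at the customer list.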

Who Decides?

With the vocal outcries from various places within the technical community, the federal government won’t leave this issue alone for long. I foresee legislation on the application of these technologies. The problem will be that very few people, not to mention lawmakers, understand the tech and its application to problems that can truly heal and make for a better world. As a collective, people don’t want to lose this opportunity to do good with the tech we have invented while avoiding its abuse.

An option for discussion would be a collaborative consortium of technology company representatives, with government only on the fringes of the debate. But don’t hold your breath.

Social issues like this one, which lack a regulatory mechanism, are ripe for government intervention. So, what do we want more: protection from what zealots say? Our companies regulating themselves? Or governmental involvement to determine right and wrong?

Color me proud of the recent EULA changes companies are pushing out regarding misuse of data. Color me proud of the voices that spoke up at the risk of losing their jobs. Color me sad that we’ll likely see legislation on this issue in the next few years.
