A computer calculating the likelihood that you will commit a future crime sounds dystopian, but judges in U.S. courts already rely on such scores. An algorithm called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) has been used in states like New York and California to predict recidivism (the tendency of a convicted criminal to reoffend).
A 2016 investigation by ProPublica showed that the algorithm was biased: black defendants were nearly twice as likely as white defendants to be labeled "high risk" yet not actually reoffend, while the algorithm made the opposite mistake with white defendants, labeling them "low risk" even when they went on to reoffend. The article itself is a great read.
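The disparity ProPublica measured is essentially a gap in false positive rates between groups. A minimal sketch of that computation, using made-up toy records rather than the actual COMPAS data:

```python
# Toy illustration (hypothetical data, NOT the real COMPAS dataset):
# comparing false positive rates across two groups, the kind of
# error-rate gap ProPublica reported.

def false_positive_rate(labels, predictions):
    """Share of people who did not reoffend (label 0) but were
    nonetheless predicted high risk (prediction 1)."""
    negatives = [p for l, p in zip(labels, predictions) if l == 0]
    return sum(negatives) / len(negatives)

# Each record: (actually reoffended?, predicted high risk?)
group_a = [(0, 1), (0, 1), (0, 0), (1, 1), (1, 0)]
group_b = [(0, 1), (0, 0), (0, 0), (1, 1), (1, 1)]

fpr_a = false_positive_rate(*zip(*group_a))  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(*zip(*group_b))  # 1 of 3 non-reoffenders flagged
print(fpr_a, fpr_b)
```

If the two groups' rates diverge sharply, the tool punishes one group's non-reoffenders more often than the other's, even if overall accuracy looks similar. That, in miniature, is the fairness problem.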
Guarding against algorithmic bias will only grow more important as we rely on algorithms to help us make more decisions. Bias can creep into our models, our data, and our analysis.
Proprietary algorithms like COMPAS are difficult to audit – not only is the code not public, but the models may not be explainable either. Open source may have an interesting role to play here. But the issue of algorithmic fairness is something we should all be thinking about.
Isaac Asimov's Laws of Robotics come to mind:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And the Zeroth Law, added later: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.