MIT has found that artificial intelligence, as implemented thus far by the big names in the field, exhibits racial and gender bias, as reported in a story on Axios.
We train AI by feeding it sample data, and our reality is riddled with bias and prejudice in how we use words throughout society. Our sample data relating to jurisprudence, for example, is especially prejudiced: white offenders are regularly given lighter sentences than offenders of color.
One example: ProPublica found that a computer program used by jurisdictions to help with parole decisions went easy on white offenders while being unduly harsh to black ones.
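The mechanism is easy to sketch. The following is a hypothetical illustration (the data, group names, and skewed labels are all invented for this sketch, not drawn from any real system): a naive risk model that simply learns base rates from skewed historical records will faithfully reproduce the skew in its predictions.

```python
from collections import defaultdict

# Synthetic training records: (group, historical "high risk" label).
# The labels are deliberately skewed to mimic biased past decisions,
# not actual behavior. Groups "A" and "B" are placeholders.
training_data = (
    [("A", 1)] * 30 + [("A", 0)] * 70 +   # group A labeled high-risk 30% of the time
    [("B", 1)] * 60 + [("B", 0)] * 40     # group B labeled high-risk 60% of the time
)

def train(records):
    """Learn P(high risk | group) directly from historical labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [high-risk count, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: hi / total for g, (hi, total) in counts.items()}

model = train(training_data)
print(model)  # {'A': 0.3, 'B': 0.6} -- the model inherits the skew in the labels
```

Nothing in the model is explicitly prejudiced; the disparity comes entirely from the historical labels it was trained on, which is exactly the problem with real-world training data.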
We will need to curate the data we provide AI more carefully to ensure it is actually fair and balanced, so that its assessments reflect the goals of society: constitutional legal equality in practice.
The Big Problem
There’s been much talk, but no action from large tech companies and other institutions using these algorithms, in terms of systematically investigating what biases are being reinforced and what remedies might be. – Axios article
The ACLU is concerned as well, given our growing reliance on these evaluation systems in judicial proceedings.
That this bias has been uncovered is great progress, and that the big players in the space are aware of it is a step forward. Now we need to recognize, as a society, that technology is neither good nor evil: it is no better than the way we configure, train, and program it. It is incumbent on us as citizens not to cower in fear but to have a voice, and to participate in the direction these technologies that inform our lives take.