Machines are fed mounds and mounds of data to extrapolate, interpret, and learn from. Unlike humans, algorithms are ill-equipped to consciously counteract learned biases, because although we would like to believe AI/ML mirrors human thinking, it really doesn't. AI/ML has driven what many consider the newest industrial revolution by giving computers the ability to interpret human language, and without intention it has learned human biases as well.

So, where does the data used by AI/ML systems come from? Most of this historical data comes from the same kinds of people who created the algorithms and the programs that use them, which until recently has meant people who were socio-economically above average and male. So "without thinking" or intent, gender and racial biases have dominated the AI/ML learning process. An AI/ML system is not capable of "thinking on its feet" or reversing this bias once it makes a decision. The point is that AI/ML systems are biased because humans are innately biased, and AI/ML systems are not capable of moral decisions; only humans are, at least for now.
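To make the mechanism concrete, here is a minimal sketch, using entirely hypothetical data, of how a learner reproduces whatever skew exists in its training records. The toy "model" below does nothing but memorize historical hire rates per group; there is no intent or awareness anywhere, yet a biased history becomes a biased rule.

```python
# Minimal sketch (hypothetical data): a toy model that learns
# historical hire rates per group and turns past skew into policy.
from collections import defaultdict

# Hypothetical historical records: (group, hired?) pairs, skewed by past bias.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    # Predict "hire" whenever the group's historical hire rate exceeds 50%.
    return {g: (hires / total) > 0.5 for g, (hires, total) in counts.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the past skew becomes the rule
```

Nothing in `train` mentions bias; it simply optimizes agreement with the historical data, which is exactly why it cannot "think on its feet" and reverse the pattern it inherited.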