week 6 reply
STUDENT 1
Justin Corduan
As artificial intelligence grows and spreads into our day-to-day lives, both public and private, it has been shown that even AI can be biased. AI systems have been proven to be less effective at identifying dark-skinned women, to offer women lower credit card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes compared to white defendants. That said, those examples show racial and gender bias in just a few areas.
For one thing, bias in facial recognition is based on the data the system is trained on, i.e., the mass amounts of photos it uses to learn what to look for. If that data contains mainly white faces rather than Black faces, the system will have a more difficult time recognizing Black faces in any context: passport photos, re-entering the country, anything of that kind.
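The point above can be illustrated with a small simulation. This is only a hedged sketch, not a real facial-recognition system: a single made-up "match score" stands in for facial features, the group names and all numbers are invented, and the "model" is just a threshold learned from pooled training data. Because the training set is dominated by Group A, the learned threshold suits Group A and misclassifies more of Group B.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical data: Group B's feature distribution is shifted by +1
    # relative to Group A, so the two groups have different optimal thresholds.
    shift = 0.0 if group == "A" else 1.0
    mean = shift + (2.0 if label == 1 else 0.0)
    return [(random.gauss(mean, 1.0), label) for _ in range(n)]

# Imbalanced training data: 95% Group A, 5% Group B.
train = (sample("A", 0, 950) + sample("A", 1, 950)
         + sample("B", 0, 50) + sample("B", 1, 50))

# "Training": choose the midpoint between the pooled class means as the
# decision threshold. It lands near Group A's optimum, not Group B's.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def accuracy(data):
    # Predict label 1 when the score exceeds the learned threshold.
    return sum((x > threshold) == y for x, y in data) / len(data)

# Evaluate on balanced test sets for each group.
acc_a = accuracy(sample("A", 0, 500) + sample("A", 1, 500))
acc_b = accuracy(sample("B", 0, 500) + sample("B", 1, 500))
print(f"Group A accuracy: {acc_a:.2f}")
print(f"Group B accuracy: {acc_b:.2f}")
```

Run with this seed, the underrepresented group scores noticeably lower, even though the model never sees group labels: the skewed data alone produces the gap.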
Can this be fixed?
Obviously this is not something people want, and researchers are constantly working on new ideas and approaches to try to weed out AI bias. Large tech companies have built tools to identify and remove bias as part of their AI offerings. There are still very big obstacles to overcome, but they are working on these kinds of tools to make AI systems less biased.
Advantages that AIs bring to a system worth the bias?
AI can be utilized to identify a certain gender, men only for instance, or a certain race of people. This could be an advantage for the group that has been overrepresented, since predictions skew toward that group while leaving the other to the side. Obviously AI will always be around, and it will continue to grow and learn from new data daily, which will ultimately help correct AI bias.
STUDENT 2
Tyler
Hello Everyone,
According to Shin, Amazon's hiring algorithm is one example of AI bias. In 2015, Amazon realized that the algorithm it used for hiring was biased against women. When scanning applicants' resumes, it chose men more often than women because the majority of past applicants were men (Shin, 2020). This caused it to be trained to favor men over women. Another example from Shin is COMPAS, an algorithm used in sentencing to predict the likelihood of a criminal reoffending. An analysis showed that it rated Black defendants as a higher risk of reoffending than white defendants (Shin, 2020).
In order to build fairness into AI systems, you need awareness during development. There should also be a governance structure that can detect and mitigate bias in data collection. When it comes to advantages over bias, I feel that race or sex should never be an issue that is overlooked with AI. Bias causes people not to receive proper healthcare, makes them vulnerable in the judicial system, and doesn't give them a fair chance at obtaining a job. Though, with AI having biases, this can force us to confront them and make change.
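One way such a governance structure could detect this kind of bias is by comparing error rates across groups, which is essentially what the analysis of COMPAS did with false positive rates. Below is a minimal sketch with entirely made-up labels and predictions (the group names and numbers are hypothetical, not drawn from any real system):

```python
def false_positive_rate(y_true, y_pred, groups, group):
    """Share of true negatives in `group` that were wrongly flagged positive."""
    negatives = [(t, p) for t, p, g in zip(y_true, y_pred, groups)
                 if g == group and t == 0]
    if not negatives:
        return 0.0
    return sum(1 for t, p in negatives if p == 1) / len(negatives)

# Made-up example data: 1 = predicted/actual reoffense, 0 = none.
y_true = [0, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1, 0, 1]
groups = ["A"] * 4 + ["B"] * 4

fpr_a = false_positive_rate(y_true, y_pred, groups, "A")  # 1 of 3 negatives flagged
fpr_b = false_positive_rate(y_true, y_pred, groups, "B")  # 2 of 3 negatives flagged
print(f"FPR group A: {fpr_a:.2f}")
print(f"FPR group B: {fpr_b:.2f}")
```

If one group's false positive rate is consistently higher, the model is wrongly flagging more innocent people from that group, which is exactly the pattern reported for COMPAS.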
References
Shin, T. (2020, June 4). Real-life examples of discriminating artificial intelligence. Medium. Retrieved March 23, 2022, from https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070