11-08-2022, 11:20   #10
Hugh
laeva recumbens anguis
Cable Forum Team
 
 
Join Date: Jun 2006
Age: 67
Services: Premiere Collection
Posts: 42,099
Re: Meta's chatbot says the company 'exploits people'

Quote:
Originally Posted by nomadking View Post
It's not really an AI bot. It just gathers opinions from those who have said something. It can only draw conclusions from those who say exactly the same thing. Those that have opinions, but are silent or are silenced on matters, won't be represented.
AI bots deduce things from hard, cold facts, and do it without fear or favour, and without having been brainwashed. That is why certain groups hate them.
Yeh, about that...

https://www.technologyreview.com/201...hoCGdcQAvD_BwE

Quote:
Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
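To make that first failure mode concrete, here's a minimal synthetic sketch (my own illustration using NumPy/scikit-learn, not anything from the article; the make_group helper and the two "groups" are made up). Train on a pool that's 95% one group and the model fits that group's pattern, so it's close to chance on the other group.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # Two synthetic "groups" whose true decision boundaries differ.
    X = rng.normal(size=(n, 2))
    w = np.array([1.0, -1.0]) if flip else np.array([1.0, 1.0])
    return X, (X @ w > 0).astype(int)

# Training pool: 95% group A, 5% group B (the unrepresentative-data case).
Xa, ya = make_group(1900, flip=False)
Xb, yb = make_group(100, flip=True)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test data, evaluated per group.
Xa_t, ya_t = make_group(2000, flip=False)
Xb_t, yb_t = make_group(2000, flip=True)
print("accuracy on group A:", model.score(Xa_t, ya_t))   # close to 1.0
print("accuracy on group B:", model.score(Xb_t, yb_t))   # close to chance (~0.5)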
Quote:
Unknown unknowns. The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it. In Amazon’s case, when the engineers initially discovered that its tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like “women’s.” They soon discovered that the revised system was still picking up on implicitly gendered words—verbs that were highly correlated with men over women, such as “executed” and “captured”—and using that to make its decisions.

Imperfect processes. First, many of the standard practices in deep learning are not designed with bias detection in mind. Deep-learning models are tested for performance before they are deployed, creating what would seem to be a perfect opportunity for catching bias. But in practice, testing usually looks like this: computer scientists randomly split their data before training into one group that’s actually used for training and another that’s reserved for validation once training is done. That means the data you use to test the performance of your model has the same biases as the data you used to train it. Thus, it will fail to flag skewed or prejudiced results.
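And that last point, again as a made-up sketch rather than anything from the article: because the random train/validation split is drawn from the same skewed pool, the single held-out score looks healthy even though the model is near chance on the under-represented group. You only see the problem if you break the score down per group.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_group(n, flip):
    # Same synthetic setup as above: group B has a different true boundary.
    X = rng.normal(size=(n, 2))
    w = np.array([1.0, -1.0]) if flip else np.array([1.0, 1.0])
    return X, (X @ w > 0).astype(int)

# Pool is 95% group A, 5% group B -- the skew described in the article.
Xa, ya = make_group(1900, flip=False)
Xb, yb = make_group(100, flip=True)
X_pool = np.vstack([Xa, Xb])
y_pool = np.concatenate([ya, yb])
group = np.array([0] * 1900 + [1] * 100)

# Standard practice: random split, train, report one overall validation score.
X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(
    X_pool, y_pool, group, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("overall validation accuracy:", model.score(X_val, y_val))  # high, looks healthy
print("group A validation accuracy:",
      model.score(X_val[g_val == 0], y_val[g_val == 0]))
print("group B validation accuracy:",
      model.score(X_val[g_val == 1], y_val[g_val == 1]))           # near chance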
https://www.nist.gov/news-events/new...ethnic%20group.

Quote:
It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group.
__________________
There is always light.
If only we’re brave enough to see it.
If only we’re brave enough to be it.
If my post is in bold and this colour, it's a Moderator Request.

Last edited by Hugh; 11-08-2022 at 11:38.