Prejudices and biases in the results of Artificial Intelligence

So far this year, we have published several posts related to Artificial Intelligence, its relationship with Free Software and Open Source, and how we can use these tools on our free and open GNU/Linux-based operating systems. Therefore, today we will address another interesting topic about them.

Specifically, the possibility of finding «prejudices and biases» in the results of Artificial Intelligence. Because, although AI tools are usually very useful, thanks to the fact that they can produce very precise results, those results can still contain human ideological biases if the necessary precautions or measures are not taken.

But, before starting this post about the possibility of finding «prejudices and biases» in AI results, we recommend that you first explore the following related post:

Related article:
Merlin and Translaite: 2 tools to use ChatGPT on Linux

Prejudices and biases: can they occur in the results of AI?

On prejudices and biases in the results of AI

Personally, I have lately tried and recommended a few Artificial Intelligence tools, many of which are surely based on the ChatBot OpenAI ChatGPT. And I have not had any major problems with erroneous, inaccurate, false, or inappropriate or offensive results. However, in the short time these tools have been available, many of us have surely read about unpleasant and even unacceptable situations caused by the results they generate.

For example, a recent case of erroneous or inaccurate results was that of Google's ChatBot Bard. An older case of unpleasant or offensive results occurred when Microsoft launched Tay, an Artificial Intelligence chatbot, on the social networking platform Twitter: after 16 hours of operation, the chatbot had published thousands of tweets, which in the end became openly racist, misogynistic, and anti-Semitic.

However, I have also noticed unfriendly or unpleasant results on the Internet, especially when images are generated of groups of people or specific individuals. Therefore, I think that human prejudices and biases can also be present in AI software. And this can happen when the data used to train the AI software is biased, or when the software is designed with a particular set of values or beliefs of certain groups in mind.

This is because, many times, in their various stages of development, these tools are designed using data drawn mostly from some demographic groups rather than others, or with parameters intended to avoid affecting powerful groups or to favor influential sectors of society.
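To illustrate the first mechanism, here is a minimal, self-contained sketch in Python; the data is synthetic and the group labels are invented, so this is purely hypothetical and not tied to any real system. It shows how a classifier trained on historically biased labels simply learns to reproduce that bias:

```python
# Hypothetical sketch: synthetic data only, no real system or dataset involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
score = rng.normal(size=n)          # a merit-like feature, identical for both groups
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B (invented labels)

# Historically biased labels: group B needed a higher score to be approved.
historical_threshold = np.where(group == 1, 0.5, 0.0)
y = (score > historical_threshold).astype(int)

# Train on the biased labels, with group membership visible as a feature.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, y)

# The model reproduces the historical bias in its predicted approval rates.
predictions = model.predict(X)
for g, name in ((0, "group A"), (1, "group B")):
    print(f"{name}: predicted approval rate = {predictions[group == g].mean():.2f}")
```

In this toy setup, both groups have the same distribution of the merit-like feature, yet the model approves group B at a noticeably lower rate, because that is exactly what the historical labels taught it.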

Possible measures to avoid it

To avoid prejudice and bias in AI software, its developers should always take measures such as the following:

  1. Ensure that the data used to train the AI software is representative of the population for which it will be used, and that the values or beliefs embedded in the software are fit for purpose (a minimal example of such a check is sketched right after this list).
  2. Implement diversity, inclusion, and fairness (DIF) measures to help reduce bias in the AI software, so that it does not discriminate against certain people or groups.
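
As a rough idea of what the first measure can look like in practice, here is a minimal sketch in Python; the group names and population shares are made up for illustration. It compares the group composition of a training set against the target population and flags underrepresented groups:

```python
# Hypothetical sketch: invented group names and shares, for illustration only.
from collections import Counter

# Assumed shares of each group in the target population (made-up numbers).
population_shares = {"group_a": 0.48, "group_b": 0.40, "group_c": 0.12}

# Group label of each record actually present in the training set.
training_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

counts = Counter(training_groups)
total = sum(counts.values())
for group, target in population_shares.items():
    actual = counts.get(group, 0) / total
    # Flag any group whose training share falls well below its population share.
    status = "UNDERREPRESENTED" if actual < 0.8 * target else "ok"
    print(f"{group}: {actual:.0%} of training data vs {target:.0%} of population -> {status}")
```

A check like this is only a starting point: representativeness alone does not guarantee fairness, but it catches the most obvious sampling gaps before training begins.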

Meanwhile, AI users should follow some fundamental rules:

  1. Exercise caution when making decisions based on AI software, or when crafting jobs, goods, products, and services with its results.
  2. Always take into account the potential for prejudice and bias in AI, and for errors and inaccuracies in the data it offers, before making a decision about its use.

Related article:
ChatGPT on Linux: Desktop Clients and Web Browsers

Summary

In summary, organizations and individuals should strive to inform themselves about the potential for «prejudices and biases» in AI software and how to avoid them, and developers should do everything they can to prevent them, in order to ensure that AI software is used responsibly and fairly.

Also, remember to visit the home page of our «site», as well as our official Telegram channel, for more news, tutorials, and Linux updates; and our group, for more information on today's topic.

