A study reveals that the people paid to train artificial intelligence are outsourcing their work to AI

The study reveals that a large number of the workers responsible for training artificial intelligence (AI) are using that same AI to do their jobs.

Research highlights that large amounts of data are required to train AI systems so that computers can perform specific tasks accurately and reliably. To gather this data, some companies hire freelancers to complete tasks that are difficult to automate, such as solving CAPTCHAs, labeling data, and annotating text.

The results of these tasks feed into the training of AI models. The workers are often underpaid and have to complete a large number of tasks in a short period of time.

A group of researchers at the Swiss Federal Institute of Technology (EPFL) hired 44 people on the crowdsourcing platform Amazon Mechanical Turk to summarize extracts from clinical research articles. They then analyzed the responses with an AI model they had trained themselves, looking for signs of ChatGPT's influence, such as telltale patterns in the choice of words used.
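The study does not publish its detection code, but the general idea of a detector trained on word-choice patterns can be sketched as follows. This is a minimal, hypothetical illustration, not the researchers' actual model; the tiny example corpora and all variable names are placeholders.

```python
# Minimal sketch of a "synthetic text" detector: a bag-of-words classifier
# that separates human-written summaries from ChatGPT-generated ones.
# NOTE: hypothetical illustration only; toy data stands in for real corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples (real work would use many labeled summaries).
human_summaries = [
    "The trial showed a modest drop in blood pressure among older patients.",
    "Results were mixed, and the authors call for a larger follow-up study.",
]
synthetic_summaries = [
    "The study demonstrates a significant improvement in patient outcomes overall.",
    "In conclusion, the findings highlight the importance of further research.",
]

texts = human_summaries + synthetic_summaries
labels = [0] * len(human_summaries) + [1] * len(synthetic_summaries)

# TF-IDF over word unigrams/bigrams captures exactly the kind of
# word-choice signal the article describes.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Estimated probability that a new worker submission was machine-generated.
p_synthetic = detector.predict_proba(["worker submission text goes here"])[0][1]
print(f"probability synthetic: {p_synthetic:.2f}")
```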

Additionally, they examined workers’ keystrokes to determine whether answers had been copied and pasted, which would suggest the text was composed somewhere else.
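The intuition behind the keystroke check can be shown with a short sketch. This is an assumption-laden illustration, not the study's implementation: the event format, field names, and threshold are all hypothetical.

```python
# Minimal sketch of paste detection: if a long answer appears in the text box
# with almost no keystrokes logged, it was probably composed elsewhere and
# pasted in. Data layout and threshold are assumptions, not the study's code.
from dataclasses import dataclass

@dataclass
class Submission:
    answer_text: str
    keystroke_count: int   # key events recorded while the task page was open
    paste_events: int      # clipboard-paste events recorded by the task page

def looks_pasted(sub: Submission, min_ratio: float = 0.5) -> bool:
    """Flag answers whose typed keystrokes cover far less than the final text."""
    if sub.paste_events > 0:
        return True
    # Far fewer key presses than submitted characters suggests the text
    # was not typed directly into the form.
    return sub.keystroke_count < min_ratio * len(sub.answer_text)

# Example: a 400-character summary with only 12 recorded keystrokes is flagged.
print(looks_pasted(Submission("x" * 400, keystroke_count=12, paste_events=0)))  # True
```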

The study estimated that 33% to 46% of the workers used AI models such as OpenAI’s ChatGPT to complete their tasks.

Ilya Shumailov, a junior researcher in computer science at the University of Oxford who was not involved in the project, said there is no simple way to keep errors from spreading from one model to another. He explained: “The problem is that when you use synthetic data, you pick up errors such as model misunderstandings and statistical mistakes. You have to make sure that these errors do not carry over into the output of other models, and there is no easy way to do that.”

