Since the advent of artificial intelligence (AI) and its rapid progress, experts have highlighted its significant benefits for work while also warning about its dangers and potential consequences. In fact, uncertainty over how this tool could affect humanity led key figures to sign a petition calling for a six-month pause in the development of the technology. Now the Pentagon's AI chief, Craig Martell, has said he is "scared to death" of the uses people could make of it.
In November 2022, a tool appeared that would revolutionize the advancement of artificial intelligence: ChatGPT. This technology allows anyone with Internet access to generate virtually any type of text. But it also has drawbacks that deserve special attention: depending on what it has learned about a particular topic, some of its information may be incorrect, or it may misinterpret the instructions it is given, for example.
Martell, who heads the U.S. Department of Defense's Chief Digital and Artificial Intelligence Office, has warned about the potential uses people could make of this technology. "Don't just sell us generation, work on detection," he said, pointedly acknowledging the reality behind his conclusions. The expert cited, for example, avatar emulation and voice cloning, or the photo-like quality of AI-generated images.
"This information generates a psychological response that makes you think it is authoritative," he said, according to the outlet Breaking Defense, during the TechNet Cyber conference organized by the Armed Forces Communications and Electronics Association. He admitted he is "scared to death" of the uses people could make of AI tools.
„The Perfect Tool for Misinformation”
Meanwhile, Martell's office at the Pentagon is developing the infrastructure and policies that will allow the U.S. military to realize the JADC2 project: as Martell explains, a platform that connects the systems of all service branches so they can share the data they and their technologies collect.
"It has been trained to express itself fluently, and it speaks with authority, so people believe it even when its information is wrong," the expert said. He added: "That makes it the perfect tool for disinformation. And we really need tools that can detect when this is happening and warn us in the moment... but we don't have them. We are behind in that fight."
He also pointed out: "If you ask ChatGPT, 'can I trust you?', its answer amounts to a long 'no'. I'm not kidding. It presents itself as a tool that will give you an answer, but one you are supposed to verify for yourself." And he concluded: "So my fear about this technology is that we will come to rely on it, while our adversaries use it, without the service providers building in the right safeguards and the capacity to verify it."