Between artificial intelligence and climate change

For these reasons, I asked James Manyika, who leads Google's technology and society group as well as Google Research, for his thoughts on the promise and the challenges of this technology.

"We have to be bold and responsible at the same time," he said.

"The reason to be bold is that, in a variety of fields, artificial intelligence has the potential to help people with everyday tasks, to tackle some of humanity's greatest challenges, such as healthcare, and to deliver new scientific discoveries and inventions. Productivity improvements, in turn, can lead to greater economic prosperity."

Manyika added that it will do so by "giving people everywhere access to the world's body of knowledge: in their own language, in their preferred mode of communication, through text, voice, images or code," whether on a smartphone, a television, a radio, or an e-book. Many more people will be able to get better help and better answers to improve their lives.

However, Manyika added that we also need to be responsible, and he cited several concerns. First, these tools must be fully aligned with humanity's goals. Second, in the wrong hands, these tools can cause tremendous damage, whether we are talking about misinformation, content that can be faked to near perfection, or hacking. (The bad guys are always early adopters.)

Finally, "to a certain extent, the engineering is ahead of the science," Manyika explained. That means even the people building the so-called large language models that underlie products like ChatGPT and Bard do not fully understand how they work or the full extent of their capabilities. We can design artificial intelligence systems with extraordinary capabilities: show them a few examples of math problems, unusual languages, or explanations of jokes, and with astonishing accuracy they begin to do many more of those things. In other words, we do not yet fully know how good or how bad these systems can become.


So we need some controls, but they have to be introduced carefully and iteratively. One size does not fit all.

Why? Well, if our biggest concern is China overtaking the US in AI, we need to speed up our AI innovation, not slow it down. If we really want to democratize artificial intelligence, we might want to open source it. However, open source can also be misused: what would the Islamic State group do with the code? So think of something like gun control. And if we worry that AI systems will exacerbate discrimination, privacy violations, and other divisive social harms, as social media has, then we need regulations now.
