ChatGPT and Beyond

We need to find the best ways to leverage AI technologies for science and human progress, while ensuring that they do not become destructive.

Michal Lipson

Artificial intelligence has been jokingly defined as “the art of making computers behave like the ones in movies.” Under that definition, ChatGPT, the natural-language processing tool from OpenAI that launched on 30 November 2022, might be AI’s greatest success story yet. Though it debuted only six months ago, the chatbot has already achieved a sort of movie-star celebrity. It has been interviewed by journalists, shown off skills ranging from writing software code to composing poetry, and attained a level of easy dialogue with humans that calls to mind the helpful, all-purpose computer of Star Trek.

It’s easy to see how generative AI models such as ChatGPT, and AI more broadly, could be a force for progress in science and society. Deep learning and other AI techniques are already revolutionizing how science and engineering are done, in tasks ranging from image recognition and analysis to the design of optical networks. On a more mundane note, anyone who has sweated through the difficult task of writing up their research results might welcome a helping hand from a tool like ChatGPT. Such assistance, it’s been noted, could even level the playing field for publication of research by authors whose first language is not English. That would indisputably be a positive outcome for the world.

Yet, as with any emerging technology, recent advances in AI have also created fears and new dilemmas. Even in the narrow area of scientific communication, our community is wrestling with some practical and ethical implications of ChatGPT and its cousins. These include such questions as how to disclose an AI system’s contribution to a paper’s findings, text and images; whether an AI tool can be listed as an author; how to police potential fraud and abuses; and whether such policing is even possible. Further, some have suggested, these technologies raise larger concerns about rampant misinformation—embodied in such AI-driven vehicles as “deepfake” videos—that could erode trust and divide society.

I believe we need to take these concerns seriously. But we also need to recognize that we have been here before. All transformational technologies are potentially double-edged, and their impact depends on what humans choose to do with them. The global internet, born 40 years ago, has since completely reshaped how we communicate, obtain information and drive forward human progress. There’s no question that, in some cases, it has been a powerful tool for spreading misinformation and enabling political repression. But vastly more of the changes brought by the internet have been for the good, and few would want to return to a pre-internet age—even if such a thing were possible.

AI technologies, some now believe, have the potential for similar, internet-scale (or greater) disruption. It would be pointless to wish that disruption away. Instead, we need to find the best ways to leverage these technologies for science and human progress, while ensuring that they do not become destructive. Making that happen will require a different, very human kind of intelligence—involving an ongoing dialogue among scientists, governments, industry and society at large. It’s a challenge all of us need to embrace.

Michal Lipson,
Optica President

Publish Date: 01 June 2023
