Artificial lives: How to use ChatGPT and still be a good person
BRIAN X. CHEN
The past few weeks have felt like a honeymoon phase for our relationship with tools powered by artificial intelligence.
Many of us have prodded ChatGPT, a chatbot that can generate responses with startlingly natural language, with tasks like writing stories about our pets, composing business proposals and coding software programs.
At the same time, many have uploaded selfies to Lensa AI, an app that uses algorithms to transform ordinary photos into artistic renderings. Both debuted a few weeks ago.
Like smartphones and social networks when they first emerged, A.I. feels fun and exciting. Yet (and I’m sorry to be a buzzkill), as is always the case with new technology, there will be drawbacks, painful lessons and unintended consequences.
People experimenting with ChatGPT were quick to realize that they could use the tool to win coding contests. Teachers have already caught their students using the bot to plagiarize essays.
And some women who uploaded their photos to Lensa received renderings that felt sexualized and made them look skinnier, younger or even nude.
We have reached a turning point with artificial intelligence, and now is a good time to pause and assess: How can we use these tools ethically and safely?
For years, virtual assistants like Siri and Alexa, which also use A.I., were the butt of jokes because they weren’t particularly helpful. But modern A.I. is just good enough now that many people are seriously contemplating how to fit the tools into their daily lives and occupations.
“We’re at the beginning of a broader societal transformation,” said Brian Christian, a computer scientist and the author of “The Alignment Problem,” a book about the ethical concerns surrounding A.I. systems.
“There’s going to be a bigger question here for businesses, but in the immediate term, for the education system, what is the future of homework?”
With careful thought and consideration, we can take advantage of the smarts of these tools without causing harm to ourselves or others. First, it’s important to understand how the technology works to know what exactly you’re doing with it.
ChatGPT is essentially a more powerful, fancier version of the predictive text system on our phones, which suggests words to complete a sentence as we type, based on what it has learned from vast amounts of data scraped off the web.
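To make that idea concrete, here is a toy sketch in Python of next-word prediction, purely for illustration and nothing like ChatGPT's actual model: it just counts which word tends to follow which in a scrap of training text, then predicts the most frequent successor.

```python
# Toy next-word predictor: a bigram model that counts word pairs.
# Illustrative only -- real systems like ChatGPT use neural networks
# trained on vastly more data, but the core task is the same:
# predict a likely next word from what came before.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"

# Count how often each word follows each other word.
successors = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice, beating "mat" and "fish")
print(predict_next("cat"))  # -> "sat" ("sat" and "ate" tie; the first one seen wins)
```

Notice that the predictor has no idea what a cat is; it only knows which words tended to co-occur. That is the sense in which such a system cannot know whether its output is true.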
Crucially, it can’t check whether what it’s saying is true. If you use a chatbot to code a program, it draws on how similar code was put together in the past. Because code is constantly updated to address security vulnerabilities, code written with a chatbot could be buggy or insecure, Mr. Christian said.
Likewise, if you’re using ChatGPT to write an essay about a classic book, chances are that the bot will construct seemingly plausible arguments. But if others published a faulty analysis of the book on the web, that may also show up in your essay.
If your essay was then posted online, you would be contributing to the spread of misinformation. “They can fool us into thinking that they understand more than they do, and that can cause problems,” said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute.
In other words, the bot doesn’t think independently. It can’t even count.
Brian X. Chen is a tech writer for The New York Times ©2022