Which Section Of This Blog Post Was Written By An AI Chatbot?

Bruce Weinstein · Feb 10, 2023

Artificial intelligence can enrich our lives or cause trouble. Consider, for example, how AI can deepen our understanding of ethics, the study of conduct and character.

I asked a popular new AI chatbot, ChatGPT, “In 200 words or less, what are three things that get in the way of doing the right thing?” I also answered the question myself. Here are the results. See if you can guess which one the AI program wrote.

Three things that get in the way of doing the right thing: response #1

There are many factors that can get in the way of doing the right thing, but here are three common ones:

  1. Self-interest: Sometimes, we may be faced with a choice between doing what is right or doing what is in our own best interest. This can be especially true when the right thing is inconvenient or involves some sort of sacrifice on our part.
  2. Fear: Fear can be a powerful motivator, and it can sometimes prevent us from doing the right thing. We may be afraid of the consequences of taking a stand or speaking out, or we may be afraid of failure or rejection.
  3. Lack of understanding or empathy: It can be difficult to do the right thing if we don't understand the perspective or needs of others. Without empathy, it can be hard to see beyond our own narrow self-interest and consider the greater good.

Three things that get in the way of doing the right thing: response #2

Three things that can get in the way of doing the right thing are:

  1. Fear. Suppose you’re in a public place and overhear two colleagues discussing confidential information about a patient. You’d like to encourage them not to do this but worry they’ll get angry at you. Fear can prevent you from doing the right thing, which here would mean speaking with your colleagues privately about your concerns.
  2. Foul mood. Sometimes we wake up in a bad mood. Maybe we didn’t get enough sleep or we’re having relationship problems. Whatever the cause, being in a bad mood can make it harder to treat others with respect, which is a crucial ethical obligation.
  3. Focus on short-term benefits. Placing great value on maximizing profits or stakeholder interests can compromise the duty to be honest. An example of this is the cross-selling scandal at Wells Fargo, in which employees were pressured to create millions of fraudulent checking accounts and credit cards without their clients’ knowledge or consent.

The answer

Both responses make sense. Each one is a concise answer to the question, “What gets in the way of doing the right thing?” Each contains relatable examples. Each obeys the basic rules of grammar and spelling. But only one was written by a human being. The AI chatbot wrote the first response, and I wrote the second.

If both responses are reasonable, what’s the problem?

Here are three ethical problems raised by this experiment.

Citation problem #1: Who wrote what?

Suppose you’re preparing a presentation on new strategies in sales for your team. You search for recent articles on the topic and find one that looks perfect. Let’s also suppose that the first section of the article—the one you’d like to quote—was written by an AI bot but that the article’s author, Arthur Sittenwidrig, doesn’t mention this fact. By passing off another’s idea as his own, Sittenwidrig violates a fundamental principle of ethical intelligence: Be honest.

You know that it’s good practice to cite the articles you quote from, but if you say, “According to Arthur Sittenwidrig in an article published in Sales Success Strategies magazine online....”, you are unwittingly contributing to his ruse.

Citation problem #2: Where did the AI-generated material come from?

“There is nothing new under the sun,” says a passage in Ecclesiastes 1:9, and that gives rise to another problem with citing AI bots in a presentation or article.

If Arthur Sittenwidrig did mention that a bot wrote the section you want to quote, how would you cite this? “According to what a bot wrote in Arthur Sittenwidrig’s article on new sales strategies....” Anyone paying attention to your talk would think, “What? How is that a legitimate source of knowledge? Where did the bot come up with that?”

The material an AI bot generates in response to a question you pose synthesizes existing writing. The work of one or more people played a role in the response above on what gets in the way of doing the right thing. Yet the bot I used doesn’t cite any of these sources.

Citing material that an AI bot writes without attribution fails to give credit where it is due. As such, it violates another principle of ethical intelligence: Be fair.

Problem #3: Using bots to write articles, chapters, or books for you

In their recent article in The New York Times, Claire Cain Miller, Adam Playford, Larry Buchanan, and Aaron Krolik describe another problem raised by AI chatbots. A fourth-grade teacher, a professional writing tutor, a Stanford education professor, and bestselling author Judy Blume could not tell whether a human child or ChatGPT wrote three essays.

Adding to their already overstressed lives, teachers must now determine whether students are using their intellectual skills to write essays or having bots do it for them.

It’s not just teachers who will be burdened with this. Editors at publishing houses will now be at risk of accepting manuscripts that an AI program has substantially or wholly written.

Finally, readers worldwide are about to be inundated by writing that has no direct human imprint, presented by unscrupulous writers who don’t bother identifying their sources. It’s not hard to imagine struggling writers asking themselves, “Why should I endure all of the work involved in writing an article or book when I can have AI do it for me?”

We use AI-assisted platforms to check our spelling and grammar, so what’s the big deal here?

There is an ethically relevant difference between using AI to ensure our spelling and grammar are correct and using it to create substantial written work.

The former uses AI to present our original thoughts more clearly. The latter uses AI to compensate for laziness.

Worse, when bots are not identified as sources for entire swaths of writing, the result is fraudulent. It may be legal to do this (for now), but it is dishonest.

The takeaway

Like all forms of technology, artificial intelligence is ethically neutral. It’s merely an instrument. It can be put to good use, prompting us to consider aspects of a problem we might not have thought of.

Artificial intelligence can also be used for ethically troubling purposes, such as writing sections of articles or entire books for us, which we then claim as our own.

How willing are we to use artificial intelligence with ethical intelligence? We can’t use an AI chatbot to guide us. The answer depends on how much more dishonesty and unfairness we are willing to tolerate in the world.

---

Get 30% OFF!

Use the coupon code ETHICSROCKS to get 30% OFF on all of my ethics courses on CPD Formula, such as How Practising Gratitude Enriches Your CPA Practice--and You. If you want to enrich your business through the practice of gratitude, why not enroll now?

Access this course on CPD Formula.

The code will work on all of my other ethics courses, which you may see here (scroll down past my bio).

Rock on!

Bruce Weinstein, Ph.D.
The Ethics Guy®
Forbes Contributor

P.S. Yes, I'll speak to your group. Please contact me via TheEthicsGuy.com to discuss.


Tags:
ARTIFICIAL INTELLIGENCE
ETHICS
CPA ETHICS
(HR) General
BUSINESS COMMUNICATION
