AI language tools, such as OpenAI's ChatGPT, have garnered significant attention for their impressive ability to simulate human-like conversation, produce inventive text, and answer a diverse range of questions. Professionals across many industries now use these AI tools daily for brainstorming, drafting texts, and generally enhancing productivity. In essence, ChatGPT and similar tools have become a kind of virtual coworker: infinitely patient, available 24/7, and unwaveringly supportive.
However, one crucial advantage human coworkers still hold over tools like ChatGPT is the ability to say NO. That simple word reflects the judgment, autonomy, and discernment that humans bring to a conversation, qualities AI tools currently lack because they are built, above all, to be helpful and to cater to users' needs.
ChatGPT: Helpful to a fault
To grasp why this happens, it helps to understand how companies like OpenAI build tools such as ChatGPT. GPT-4, the Large Language Model (LLM) behind ChatGPT, is developed in two steps: pre-training on extensive text datasets, followed by fine-tuning guided by human reviewers. This second step reduces the risk of abuse and fosters a positive user experience, but it is also what causes ChatGPT to lean towards agreeing with users' premises and assumptions, whether stated implicitly or explicitly, and to rarely decline an answer unless the request would clearly produce inappropriate content, such as racist, sexist, or otherwise unsafe material. Consequently, ChatGPT appears eager to assist and willing to concur with nearly any premise a user presents.
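You can see this agreeable streak for yourself by probing the model with a deliberately loaded question. Below is a minimal sketch in Python, assuming OpenAI's official openai package (v1 or later) and an OPENAI_API_KEY set in your environment; the leading prompt is purely an illustration we made up, not a recommended workflow.

```python
# pip install openai  (OpenAI's official Python SDK, v1 or later)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately loaded question: the premise ("my plan is great")
# is baked into the request, and the model will usually run with it.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "My plan to cut our QA team so we can ship twice as fast "
                "is clearly a great idea. List the reasons it will succeed."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

In a typical run you will get a tidy list of reasons the plan will succeed; questioning whether the premise is sound in the first place is the exception, not the rule.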
Although ChatGPT's helpfulness might seem like a pure advantage, having people around us who can disagree and push back is, in reality, incredibly valuable. Particularly when giving input and feedback, the capacity and willingness to say no is vital. Humans draw on judgment, emotions, and experience to decide whether an action is appropriate, and the ability to refuse a request, disagree with an idea, or express doubt enables the critical thinking on which informed decision-making depends. ChatGPT, bound by its programming, prioritizes assisting the user; as a result, it typically offers information and advice that aligns with user expectations rather than challenging them. This parroting problem, identified by researchers such as Emily Bender and Timnit Gebru in their "stochastic parrots" paper, stems from LLMs being trained on vast, loosely filtered corpora rather than carefully curated ones, and tools like ChatGPT inherit the weakness in everyday use.
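A partial workaround is to make pushback an explicitly allowed answer. The sketch below uses the same API as before, this time with a hypothetical "critical reviewer" system prompt of our own devising; it does not give the model real judgment, but it lowers the barrier to disagreement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical system prompt of our own: it makes disagreement
# an explicitly permitted answer, nothing more.
critic_instructions = (
    "You are a critical reviewer. Before answering, examine the premises "
    "of the user's request. If a premise is questionable, say so and "
    "explain why, instead of simply agreeing."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": critic_instructions},
        {
            "role": "user",
            "content": (
                "My plan to cut our QA team so we can ship twice as fast "
                "is clearly a great idea. List the reasons it will succeed."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Even then, any refusal is only as sturdy as the prompt that asked for it: the model is still complying with instructions rather than exercising judgment of its own, which is precisely the distinction drawn below.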
Saying No the Right Way: A Key Power Skill
In addition to saying no, humans can deliver negative or challenging feedback in a way that is supportive yet critical. At Stellar Capacity, we work with leaders to cultivate this essential Power Skill, which underpins productive collaboration rooted in trust, empathy, and honesty. Done well, it enables feedback grounded in fair evaluation rather than the desire to please. At the same time, humans far outperform AI tools at reading others' emotions, recognizing context, and picking up on situational nuance, abilities that let us judge when saying no serves the greater good despite the initial discomfort. ChatGPT, though continually improving, has yet to demonstrate this level of understanding and decision-making.
Finally, the ability to say no gives humans an edge in ethical problem-solving. While ChatGPT has some ethical guardrails built in, its primary function is to assist and to meet users' needs. Humans, by contrast, can weigh moral dilemmas and hold to their values, even when that means declining a request or withholding help. Sometimes, in extraordinary circumstances, the best way to help is simply to decline to help.
Understanding ChatGPT's capabilities as well as its limitations is crucial for success
Although ChatGPT represents a remarkable breakthrough in artificial intelligence, its inability to say no and its focus on user satisfaction expose significant limitations. For now and the foreseeable future, humans will remain superior in providing input and feedback due to our capacity to challenge, disagree, and inquire, fostering more balanced and well-informed decisions. Until AI technology can effectively emulate this crucial aspect of human interaction, our human coworkers and skills will continue to be indispensable in shaping our professional and personal lives.
The best way to gain a thorough grasp of the capabilities, and the common limitations, of ChatGPT and other AI tools is hands-on testing and experimentation. Read more about our immersive one-day Generative AI Bootcamp here!
We'll take a deep dive into how generative AI works and the main tools available, and give you practical, hands-on experience using these tools to support you and unleash your productivity in your day-to-day work.
Peng Wu is the Program Director for Research at Stellar Capacity, with a background in transformative technology and urban planning. He is passionate about working with leaders and organizations on how better leadership can leverage technology to build a better tomorrow for all.
References:
Introducing ChatGPT, OpenAI, 2022, https://openai.com/blog/chatgpt
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Bender et al., 2021, https://dl.acm.org/doi/10.1145/3442188.3445922