Are All Chatbots Created Equal?
By Gregory Perrine - CEO
“Enhance with AI” is Popping Up Everywhere
As ChatGPT continues to go mainstream, we are starting to see chatbots pop up everywhere. From Canva’s ability to draft copy, to Google’s Duet, to Microsoft’s Copilot, AI enhancement is this year’s new black. From an efficiency standpoint, leveraging AI tools is a great way to streamline some workflows, but it raises the question, “Are all chatbots the same?”
What is a large language model chatbot?
Large language model chatbots, like ChatGPT, use a process of predictive sequencing that, in a nutshell, comes up with the next word in a sequence based on patterns learned from a massive amount of text, plus a set of standards and rules programmed on top of the model.
(Okay, that’s great, now say it in a way I can understand.)
Chatbots mimic human speech because they’ve been fed a huge portion of the text on the internet, and they use the patterns in the words so far to predict the most likely next word.
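To make that concrete, here’s a toy sketch in Python of the core idea - count which words tend to follow which in some text, then pick the most likely next word. A real model learns far richer patterns from vastly more text, but the spirit is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus" -- a real model is fed a huge portion of the internet.
corpus = "the cat sat on the mat the cat chased the dog the cat sat on the rug".split()

# Count how often each word follows the one before it.
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "cat" (it follows "the" more often than mat, dog, or rug)
print(predict_next("sat"))  # -> "on"
```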
Are there different chatbots, or is everything just a variation of OpenAI?
Since OpenAI (the creator of ChatGPT) offers an open API connection, a large majority of what we are seeing hit the market simply uses the OpenAI large language model to power the AI components of the platform. This is true for Canva, Microsoft products, Monday.com, Zendesk…the list goes on. However, there are other large language models on the market, like Google’s Bard. While Google currently seems to be a bit behind everyone jumping onto the OpenAI bandwagon, that slower rollout could prove to be the better decision as regulations are navigated and, inevitably, court cases are filed.
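For a rough picture of what that connection looks like, here’s a minimal sketch in Python using OpenAI’s chat API (the exact call varies a bit by SDK version) of how a product might wrap the OpenAI model behind its own “Enhance with AI” button. The product name and the baked-in instructions here are made up for illustration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder -- each platform supplies its own key

def enhance_with_ai(user_text):
    """What a hypothetical 'Enhance with AI' button might do behind the scenes."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # A baked-in rule set by the (made-up) platform, not by the user.
            {"role": "system",
             "content": "You are the writing assistant inside AcmeDesignApp. "
                        "Rewrite the user's text as short, upbeat marketing copy."},
            {"role": "user", "content": user_text},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(enhance_with_ai("We sell handmade candles and want a tagline for our spring sale."))
```

Different platforms swap in their own instructions and model settings, but under the hood many of them are making a call much like this one to the same OpenAI model.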
How to best train your AI chatbot
No matter which large language model is behind it, there are some tools and techniques to get the best results out of your chatbot. On a lot of platforms, certain rules are already predefined. Take the tool LivePlan (powered by OpenAI) - any AI writing prompts submitted in the platform are programmed to get a response from the perspective of a startup-minded investor. With some framing prompts of your own, your chatbot can return better results. Don’t take it from me, though - let’s go to the source.
Here’s a conversation I just had with ChatGPT (GPT-4):
GP: “Can you share how to best engineer a prompt for ChatGPT?”
ChatGPT: “Certainly! Crafting an effective prompt for ChatGPT (or any AI language model) can make a significant difference in the quality and relevance of the response you receive. Here are some guidelines and tips to consider:
Be Specific: The more precise your prompt, the more accurate and relevant the answer will likely be. If you have a particular context or set of constraints, include them. For example, instead of asking "Tell me about cats," you could ask "Can you explain the differences between domestic cats and wild cats in terms of behavior?"
Open vs. Closed Questions: Consider whether you want a broad overview (open-ended question) or a specific answer (closed-ended question). For instance, "Tell me about the history of computers" vs. "Who invented the first digital computer?"
Set the Format: If you have a preference for how the information should be presented (e.g., list, paragraph, bullet points), specify it in the prompt.
Specify Depth: If you want a detailed answer or just a brief overview, make it clear. E.g., "Give me a brief summary of photosynthesis" vs. "Explain the detailed process of photosynthesis."
Ask Follow-up Questions: Sometimes, the best way to get the information you want is by asking follow-up questions. This can help in refining or expanding upon an initial response.
Avoid Ambiguity: If a prompt can be interpreted in multiple ways, the model might not provide the answer you're hoping for. If in doubt, provide additional context or clarify your question.
Set the Tone: If you want a response in a particular tone (e.g., formal, casual, humorous), you can specify this in the prompt or even model the tone in your question.
Explicit Constraints: Sometimes, you might want to constrain the model's output. For example, "In 100 words or less, explain the principle of relativity."
Use Iterative Refinement: If the answer isn’t quite right the first time, you can rephrase your question, provide more details, or ask for clarification. Think of it as a conversation, where you iteratively narrow down to the information you want.
Sample Multiple Times: Especially for creative tasks, you can prompt the model multiple times to get a variety of responses and then choose the one you like best.
Direct the Model: If you're looking for opinions or different viewpoints, you can instruct the model accordingly. E.g., "Argue in favor of renewable energy sources" vs. "Outline the challenges associated with renewable energy sources."
Safety and Filtering: Be aware that the model can sometimes produce outputs that might be unexpected, inappropriate, or sensitive. It's a good practice to filter or review outputs, especially if they're intended for a public audience or if the domain of the question touches on sensitive topics.
Remember, while these guidelines can help in getting better responses, they are not strict rules. The best approach often involves experimentation and refining your prompts based on the results you get.”
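Putting a few of those tips together - specificity, a set format, an explicit length constraint, and a tone - the difference between a vague prompt and an engineered one might look something like this (the wording here is just my own illustration):

```python
# A vague prompt vs. one that applies several of ChatGPT's tips above.

vague_prompt = "Tell me about chatbots."

engineered_prompt = (
    "In 100 words or less, "                                    # explicit constraint
    "explain the difference between a rules-based chatbot "     # be specific
    "and a large language model chatbot, "
    "formatted as bullet points, "                               # set the format
    "in plain, friendly language for a small-business owner."    # set the tone
)

print(engineered_prompt)
```

Paste either one into ChatGPT (or send it through an API call like the sketch earlier) and compare the answers - the engineered version should come back tighter and far more usable.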
The Future of AI Chatbots
We are still early in the journey of seeing AI’s overall impact on the world we know, but just as Microsoft is to Apple (or Google), there is space in the market for another large language model that will tout features ChatGPT doesn’t have. If you haven’t already, do a quick search to find out which model is powering your AI if it isn’t coming directly from ChatGPT, and keep watching this journey evolve.
Help Someone. Have Fun, Be You. Learn & Teach. Finish What You Start.