You may already have seen people on the Internet fussing about how AI, including powerful models like ChatGPT, struggles with simple tasks—such as counting how many R's are in the word "strawberry." If you ask, "How many R's in strawberry?" you'll likely be told there are two R's, even though there are actually three. This raises an important question: Why does this happen, and does it mean AI isn't as smart as it seems?
Although LLMs like ChatGPT, Gemini, and Claude are incredibly powerful, they still rely heavily on how you communicate with them. These models are designed to process and generate human-like text based on the input they receive. However, they don't "see" letters and words as we do. Instead, they break text down into smaller units called tokens, which represent parts of words or entire words depending on the context. For example, the word "strawberry" might be broken into several multi-character tokens rather than individual letters. This problem has been discussed widely on Hacker News.
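To see why letter counting is hard at the token level, here is a small sketch. The token split below is hypothetical; the real boundaries depend on the specific model's tokenizer, but the point stands: the model works with chunks, not letters.

```python
# Hypothetical token split for "strawberry" (real boundaries vary by tokenizer)
tokens = ["str", "aw", "berry"]

# The model sees three opaque chunks, not ten letters.
# Counting R's per chunk requires looking "inside" each token,
# which is not how the model naturally processes text.
word = "".join(tokens)
print(tokens)          # the model's view
print(list(word))      # the letter-by-letter view a human uses
```

Joining the tokens reconstructs the word, but the individual letters were never the unit of processing.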
This tokenization process can lead to errors in tasks that seem simple to us, like counting letters. When asked "How many R's in strawberry?", the model might generate an answer based on a token-level interpretation rather than a letter-by-letter analysis, resulting in the incorrect count of R's.
So, does this mean LLMs are unreliable? Not really. The key to getting better, more accurate responses from AI is prompt engineering: the art and science of crafting prompts that guide the model to produce the desired results. Since LLMs don't inherently "understand" in the way humans do, they need clear and specific instructions to perform certain tasks accurately.
For instance, if you simply prompt "How many R's in strawberry?", the AI performs a basic analysis, which might lead to errors. To get the correct count, you need to be more explicit. A better approach would be to instruct the AI to spell out each letter in "strawberry" and then count the R's. For example, a prompt like "Spell out the word 'strawberry' letter by letter, and then count how many times the letter R appears" would likely yield the correct result.
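The spell-then-count prompt works because it forces the model through the same two steps a simple program would take. A minimal sketch of that procedure in plain Python:

```python
word = "strawberry"

# Step 1: spell the word out letter by letter,
# mirroring what the improved prompt asks the model to do
letters = list(word)

# Step 2: count how many of those letters are 'r'
r_count = sum(1 for ch in letters if ch == "r")

print(letters)
print(r_count)
```

When the model is prompted to write out the letters first, each letter becomes explicit in its output, so the subsequent count is far more likely to be correct.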
For those looking to get even better results from AI, there's a more advanced technique: using the AI to generate a complex prompt that guides its own analysis. For example, instead of directly asking the AI to count the R's, you could ask it to generate a prompt that would correctly count the R's in a word. This meta-level approach leverages the AI's capabilities to refine its own instructions, leading to more accurate outcomes.
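The meta-level approach can be sketched as a two-step flow. Everything here is illustrative: `send_to_llm` is a hypothetical stand-in for whatever chat API you use, and the prompt wording is just one way to phrase the request.

```python
# Step 1: ask the model to write a prompt for the task,
# rather than asking it to do the task directly
meta_prompt = (
    "Write a prompt that forces a language model to count how many times "
    "the letter R appears in a word, by spelling the word out first."
)

def build_final_prompt(generated_prompt: str, word: str) -> str:
    # Combine the model-generated instructions with the target word
    return f"{generated_prompt}\n\nWord: {word}"

# Step 2 (in practice, with your own client):
#   generated = send_to_llm(meta_prompt)
#   answer    = send_to_llm(build_final_prompt(generated, "strawberry"))
```

The model often writes a more rigorous step-by-step prompt for itself than a casual user would, which is what makes this technique effective.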
Here is a comparison of how these different types of prompts perform.
Now, you can try these prompts in ChatGPT yourself to see how they perform. Using this prompting technique, we built a simple tool on Licode that counts how many times a given letter appears in a word. It is all powered by LLMs.
04 July 2024