How to Choose Ideas for an LLM-powered Product to Thrive in a Fiercely Competitive Landscape
- Rifx.Online
- Generative AI , Product Development , Technology/Web
- 10 Nov, 2024
Welcome to the third (final) piece in my series exploring the question: “Which GenAI products are worth developing?”
- The first article explored this question from the perspectives of user experience (UX) and product adoption.
- The second article, which I highly recommend reading before this one, included six examples of successful and unsuccessful product ideas, as well as my GenAI Squared strategy:
- This third piece continues to focus on navigating the competitive landscape and on optimizing development costs without sacrificing competitive advantages. While this article contains fewer examples than its predecessor, the factors discussed here are crucial for success in the GenAI product space.
These three pieces don’t cover the technical intricacies of LLM-based application development. Additionally, my analysis doesn’t focus on conventional success factors for innovative products, such as those described in that article.
Instead, as a product manager, I analyze the unique features of LLM as a platform for my products. This approach offers fresh insights for leveraging unobvious AI capabilities in product development.
Specifically, in this piece, I explore the following questions about software products:
- Why are Generative AI products more prone to becoming obsolete before generating returns?
- How can we transform these GenAI challenges into competitive advantages?
- Which LLM abilities truly enhance product competitiveness, and which ones don’t make much sense?
- How can new AI products stand out when they contain very little code, meaning a great team of programmers is no longer among the key success factors?
- What skills are most vital for AI product developers in this new landscape?
These insights aim to guide product managers and founders in making their decisions.
So, which AI applications might be redundant or destined to fail 🚫, and which ones have high chances of success ✅?
Please note that the section numbers below continue the section numbering of the previous two pieces of the series. All 11 points are summarized at the end of this piece.
9. Large Applications with Extended Development Cycles and Lengthy Market Adoption Timelines Are Uncompetitive 🚫
Generative AI is evolving at an unprecedented rate, outpacing the growth of any previous technology. The time it takes for AI capabilities to double is roughly one year, in contrast to the two years described in the famous Moore’s Law.
Therefore, GenAI-based products cannot afford long development cycles and extended time-to-market periods. This has three main consequences.
1. New features should be lean and focused, capable of being developed within weeks, not months.
This approach allows for rapid refinement based on initial user feedback, potentially leading to significant functionality changes. Moreover, when a pivot becomes necessary (it certainly will), there’s less sunk cost in discarding features developed during these early weeks.
Consider, for instance, the UI of an LLM-based MVP. It may be unnecessary to develop a custom web interface if users can achieve the same results through a Telegram bot or similar tool.
However, the “product as a whole” can have extensive functionality if we are incorporating LLM into existing solutions or integrating with them. The key here is to minimize only the scope of the newly developed functionality.
2. There’s a critical need for ultra-fast experimentation and customer feedback loops.
GenAI products can be built faster, but user feedback can’t always be gathered just as quickly. As a result, some GenAI product concepts may prove too risky.
Rapid experimentation is, of course, beneficial for any new product launch, as it’s impossible to accurately predict market response in advance. Essentially, the market operates as a “black box,” and its behavior can only be truly understood through hands-on experimentation.
In the realm of GenAI products, we encounter an additional layer of complexity — the second “black box” stemming from the inherent unpredictability of LLM output. This dual uncertainty amplifies the importance of frequent and rapid experimentation. The ability to quickly iterate and gather insights becomes not just advantageous, but essential for success.
3. There’s no time to “educate” the product’s target audience, accustoming it to completely new work or leisure patterns.
Only the largest industry leaders, particularly those with their own ecosystems like Google, Apple, or Microsoft, can accustom the majority of potential users to novel concepts relatively quickly.
✅ Consequently, other companies must align with either existing goal-achievement patterns familiar to users, or with trends established by industry leaders.
- Consider an established pattern for the goal of increasing earnings: people purchase training courses to gain new skills. A good AI-driven solution in this domain involves creating these courses using AI, dramatically reducing production costs and, consequently, enhancing competitiveness. No new behavior is required from end users who want to boost their income.
- A recent trend emerging in Apple devices exemplifies an innovation that Apple platform users will undoubtedly adopt: employing a local LLM for typical tasks to safeguard user data privacy. While the specific ways applications might leverage this trend remain unclear, I am confident that Apple will provide developers with convenient access to its LLM infrastructure; we just need to wait a bit.
10. Leveraging the Less Apparent LLM Capabilities Enhances Competitiveness and Resource Efficiency ✅
Imagine you’re at the starting point with nothing more than a product concept. To expedite the journey towards a product in high demand, which aspects of your idea should you prioritize for initial exploration?
Clearly, you need to identify a small set of specific end-to-end work scenarios within your concept. This aligns with popular product launch strategies: “Start with MVP” (implementing just one or a few scenarios) and “Build for the whole user experience” (ensuring scenarios are end-to-end). The question remains: which scenarios should you choose?
In my opinion, these MVP scenarios should be closely aligned with LLM capabilities. This approach saves resources during product delivery, as significant product value comes from the LLM itself rather than solely from your developers’ efforts. Failing to do so may lead to challenges like those outlined in section 7 “Overconstraining LLM”.
🚫 LLM’s purported super-powers often include its ability to answer any question. However, the accuracy and quality of these responses are inherently unpredictable, which leads to problems (refer to section 1 for more on evaluation complexities and quality monitoring). Moreover, a product centered around question-answering can’t effectively compete with market leaders like ChatGPT (as discussed in section 6). Given these two factors, I advise against basing an MVP on this “super-power”.
The LLM’s capacity for “imaginative generation” presents a somewhat more promising avenue. Such creativity can inspire fresh ideas or aid in producing creative content like poems, video scripts, or content plans. However, in my experience, LLM’s creativity alone doesn’t suffice for constructing end-to-end product scenarios. Once a user obtains “creative material” from an LLM, substantial effort is still required to transform it into the desired outcome.
Furthermore, creativity is one of the easiest-to-understand and most widely recognized capabilities of GenAI. It is familiar to nearly anyone who has experimented with ChatGPT or Midjourney, so anyone can become your competitor.
✅ Considering the intense competition, I’d recommend focusing on LLM’s less apparent capabilities, such as:
1. Flipped interaction
This human-AI interaction pattern leverages LLM’s ability to ask good questions or present lists for selecting items important to a user, thereby reducing the user’s cognitive load. Flipped interaction not only helps replace some human work in certain fields (like teaching, mentoring, or coaching) but also aids in establishing the appropriate context for solving problems in any field (more details are available here).
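As an illustration, here is a minimal sketch of the flipped-interaction pattern: the system prompt instructs the model to interview the user before proposing anything. The prompt wording and the `build_flipped_prompt` helper are my own assumptions for demonstration, not an established API.

```python
# A sketch of the "flipped interaction" pattern: instead of answering
# immediately, the model is instructed to interview the user first.
# The prompt text below is an illustrative assumption, not a standard.

def build_flipped_prompt(goal: str, max_questions: int = 5) -> str:
    """Compose a system prompt that makes the LLM drive the conversation."""
    return (
        "You are an assistant helping a user achieve the following goal:\n"
        f"GOAL: {goal}\n\n"
        f"Do NOT propose a solution yet. Ask me up to {max_questions} "
        "clarifying questions, ONE at a time, to gather the context you need. "
        "Prefer multiple-choice questions so the user can simply pick an option. "
        "Only after my answers, summarize the context and suggest a plan."
    )

# The resulting string would be passed as the system message to an LLM API.
prompt = build_flipped_prompt("choose a language-learning routine")
```

Note the multiple-choice request: letting the user pick from options rather than compose free-form answers is what reduces the cognitive load mentioned above.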
2. Contextual comprehension
LLM excels at grasping the context of user requests and their preferences, then addressing the task within that context. This approach ensures that solutions align with even unformulated user needs.
a. This feature is perhaps most refined in AI copilots for developers, such as GitHub Copilot and Cursor. In these tools, the LLM’s context encompasses the entire project codebase, whereas the user (developer) typically knows only specific portions. Consequently, developers often cannot consider the broader context when formulating their tasks for AI.
b. Nevertheless, leveraging insights from explicitly stated user needs within the context is also a powerful feature. The language learning platform Memrise, for instance, has effectively implemented this feature.
3. Few-shot learning
The model’s ability to “learn” from a small number of examples allows it to easily adapt to new tasks and contexts. This is why LLM-based chatbots are now being widely deployed in sales and customer support, and conversations with them are difficult to distinguish from those with human specialists. In contrast, traditional AI chatbots perform well only in large enterprises and struggle to adapt to evolving knowledge bases.
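To make the idea concrete, here is a minimal sketch of how a handful of labelled examples might be packed into a support-bot prompt so the model imitates the desired tone and format. The helper name and the example dialogue are illustrative assumptions; a real product would send the resulting string to an LLM API.

```python
# A sketch of few-shot prompting: a few question/answer pairs are embedded
# in the prompt so the model adapts to the desired support style without
# any fine-tuning. The examples below are made up for illustration.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble example dialogues and the new query into one prompt."""
    parts = ["Answer in the same style as these examples.\n"]
    for question, answer in examples:
        parts.append(f"Customer: {question}\nAgent: {answer}\n")
    parts.append(f"Customer: {query}\nAgent:")
    return "\n".join(parts)

examples = [
    ("Where is my order?",
     "I'm sorry for the wait! Could you share your order number?"),
    ("Can I get a refund?",
     "Of course. Refunds take 3-5 business days once approved."),
]
prompt = build_few_shot_prompt(examples, "Do you ship abroad?")
```

Swapping the example pairs is all it takes to adapt the bot to an updated knowledge base, which is exactly the flexibility traditional chatbots lack.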
4. Large-scale information processing
LLM excels at analyzing large quantities of textual and tabular data, distilling it into concise forms. It can generalize, extract key points relevant to the task at hand, identify patterns, and perform various other analytical functions.
a. Take Scite, an AI tool for scientific research, as an example. It goes beyond merely locating query-relevant sources within its billion-citation database. Scite analyzes the context in which an article is referenced, revealing whether the citing paper supports, contradicts, or just mentions the earlier work.
b. When it comes to numerical data processing, LLM outputs don’t require “translation into human language”. This gives GenAI analyzers a distinct advantage over conventional statistical data processing tools.
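Since large documents rarely fit into a single context window, products in this category often rely on a map-reduce pattern: split the input, process each chunk with the LLM, then combine the partial results. The sketch below covers only the chunking step, using a word count as a rough stand-in for real token counting; the `chunk_text` helper is a hypothetical example, not a library function.

```python
# A sketch of the "map" stage of map-reduce document processing:
# split a long text into word-bounded chunks that fit a context budget.
# Word count is a crude proxy for token count, used here for simplicity.

def chunk_text(text: str, max_words: int = 1000) -> list[str]:
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# 3200 words -> four chunks, none exceeding the 1000-word budget
chunks = chunk_text("some very long report " * 800, max_words=1000)
```

Each chunk would then be sent to the LLM with a “summarize” or “extract key points” instruction, and a final call would merge the partial outputs into one concise result.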
Many potential competitors may be aware of some of these four LLM capabilities. However, I believe that deeper reflection on these abilities could lead to the development of truly innovative products. This approach could provide a competitive edge over products that solely leverage LLM’s more obvious capabilities like “creativity” and “answering any question”.
11. Small AI Products Grounded in Deep Domain Expertise Are Competitively Viable ✅
An LLM functions almost as a finished product: it can interact with users “autonomously”. Consequently, LLM-based applications have significantly smaller code bases than traditional non-LLM applications.
Moreover, any individual with some technical skills can learn to develop a feature-rich LLM-based application within days.
These two factors align perfectly with the rapid development and experimentation requirements outlined in section 9.
However, the small size of the product and the low barrier to entry in GenAI development are significant drawbacks from a competitive standpoint.
For a typical software product with a large code base, an exceptional team and agile development processes are crucial success elements. Bill Gross’s research ranks this as the second most important factor out of five, surpassing even the product idea’s viability, which ranks third.
However, how can a product get its competitive edge when its software development scope is minimal, and even inexperienced programmers can develop it?
With ideas and business models easily replicable by competitors, does success truly depend solely on the short-term advantage of being first to market in your niche?
- Section 10 offers one answer to these questions: products should leverage LLM’s less-known capabilities. While this doesn’t guarantee success, it increases the chances of outperforming competitors who may not fully understand LLM’s unobvious abilities.
- My previous article outlines another solution: implementing LLM within the product in innovative ways, such as the LLM2 strategy. This kind of know-how is harder for competitors to replicate, as it’s more deeply hidden inside the product.
- The third component of my solution to this challenge is the necessity for a high level of domain expertise.
The importance of domain expertise in product success has been a topic of discussion for years. While I couldn’t find quantitative studies correlating startup success with founders’ domain expertise, I recommend exploring some examples and rationales supporting this significant correlation. Existing studies, focusing solely on unicorns, suggest that founders’ domain expertise is important, though not the primary success factor.
However, I believe that this factor gains substantially more importance in the realm of generative AI. The reasoning behind this opinion is well-articulated in the following post:
For LLM-based products, technical expertise plays a significantly reduced role (due to easier software delivery), unlike traditional digital products where it’s a crucial competitive advantage. Instead, a profound understanding of the domain becomes paramount, as this depth of knowledge is challenging for competitors to replicate.
From a product competitiveness standpoint, I believe it’s important for domain expertise to reside in the same mind that designs the product and contributes to its implementation. Of course, the traditional separation of “tech” and “business” roles in companies has its benefits, as long as the two sides communicate effectively; such communication yields well-balanced, technically sophisticated, and domain-appropriate products. Nevertheless, verbal communication introduces significant overhead: it can take months for techies and businesspeople to understand each other well enough, and during this time market conditions may shift dramatically.
The most efficient and lossless translation of domain expertise into technical implementation occurs when both business and technical visions reside in a single mind. LLMs provide this opportunity by immensely reducing the technical expertise required for product implementation, thus enabling individuals with strong domain knowledge to take a direct role in product delivery.
In my view, when developing GenAI products, technical expertise isn’t limited to programmers; it extends to include advanced ChatGPT users as well.
For example, my friend Askhat Urazbaev independently creates MVPs for his products using AI and even deploys them in the cloud with ChatGPT guidance only. He has never been a professional software developer, and it seems that his AI Power User skills are just as valuable as the ability to read program code.
I’m convinced that generative AI will soon enable domain experts to single-handedly develop products within their domains. To do so, experts should have substantial AI user experience coupled with a foundational understanding of business principles and product design.
Nevertheless, it is not yet clear which specific tools will help us create comprehensive products single-handedly. The concept of an “LLM-driven one-person company” will be the focus of research in one of my upcoming articles.
Summary: Success and Failure Factors for LLM-driven Products
Let’s put together all the ideas from the 3 pieces of this series.
- Applications With High Quality Standards or Costly Quality Monitoring May Fail 🚫
- Specialized Copilots Are in Demand ✅
- Marginal Effort-Saving Apps Don’t Cut It 🚫
- Applications “Smartly” Integrating LLMs into Familiar Workflows Can Cross the Chasm ✅
- New GenAI Products Are Better Suited to B2B and B2B2C than B2C
- Short Lifespan of Applications Enhancing LLM Capabilities 🚫
- Overconstraining LLM: A Recipe for Uncompetitive Applications 🚫
- “GenAI Squared” Products: Unlocking Unfair Competitive Advantage ✅
- Large Applications with Extended Development Cycles and Lengthy Market Adoption Timelines Are Uncompetitive 🚫
- Leveraging the Less Apparent LLM Capabilities Enhances Competitiveness and Resource Efficiency ✅
- Small AI Products Grounded in Deep Domain Expertise Are Competitively Viable ✅
Except for factor #4, the remaining 10 success/failure factors can be applied to new products and startups.
Below, you can find a scheme illustrating the relationships between these 10 factors, LLM capabilities, and some features of the LLM technology market.
Certainly, only product experimentation can validate considerations like those shown in the scheme. Nevertheless, they can help us move faster by limiting the scope of our experiments. As explained in section 9, there are two reasons why high speed of discovery and delivery is even more important for GenAI products than for other types of digital products.
Naturally, no list of success factors can be all-encompassing. Maybe you have encountered other categories of novel LLM-driven products that are not mentioned above but you believe hold potential for success. Please share such product types or features in the comments 🙏