Embracing The 10x AI PM Mentality To Build AI Products
Table of Contents
Transitioning from the Lazy AI PM to the 10x AI PM Mentality
Innovating on Product Ideas or Features
Staying on Top of AI Research
Assessing Technical Feasibility of Your Ideas
Bringing Product Ideas to Life with AI
Evaluating and Improving UI/UX
Building Prototypes for Exploring Your UX Concepts
Optimizing UI Copy for Better User Experience
Getting Feedback on UI Design
Designing AI Products
Prototyping AI Features
Designing Effective AI Prompts
Generating Synthetic Data to Train and Test Language Models
Transitioning from the Lazy AI PM to the 10x AI PM Mentality
While designing and building an AI product that leverages Large Language Models (LLMs), I realized how LLM-based assistants like OpenAI's ChatGPT or Anthropic's Claude have become integral to my workflow as an AI Product Manager/Designer.
This led me to identify two fundamental ways LLMs can enhance our work:
Enhancing Core Competencies: LLMs can supercharge your existing PM skills, making you faster at tasks you already excel at. This will help you save valuable time and be more efficient.
Expanding Adjacent Skills: The real magic happens when you combine your PM fundamentals with the vast knowledge of LLMs to develop expertise in adjacent domains – design, engineering, marketing, and more. Suddenly, you're not just a more productive PM; you're a force multiplier for your entire team.
Understanding how LLMs operate is crucial for assessing which PM skills can be safely augmented or outsourced using AI. There is a lot of content encouraging PMs to use AI assistants for core responsibilities like writing Product Requirement Documents (PRDs), crafting user stories, doing market research, responding to customers or categorizing user feedback (all tasks falling under the Enhancing Core Competencies category).
But with language models that can potentially hallucinate fictitious content, does it make sense to rely on them for comprehensive market research or even defining product strategy? Is an AI-generated PRD really going to capture the nuances and customer insights that you, as the PM, should bring to the table? Probably not.
The "Lazy AI PM" path, where we use AI as a shortcut for core PM work, might provide some personal time savings and efficiency gains, but can also be misleading and potentially hurt team output quality with its flawed results.
On the flip side, I believe that the greater opportunity lies in using AI to close skill gaps and become a "10x AI PM" – a PM who combines domain expertise with AI's vast knowledge to benefit not only themselves, but the entire team.
Rather than just using AI to write a document faster, imagine all the ways you can leverage it to expand your skills beyond pure product management and enhance the entire product development process.
What if you start tapping into AI's knowledge to better understand engineering implementation details, saving innumerable cycles of back-and-forth with your development team and enabling you to make technically viable product decisions without always depending on (and bugging) them?
What if you start using AI to rapidly prototype user interfaces and validate usability?
What if you start prompting language models yourself or even generating synthetic training data to build custom AI solutions for your product?
That's the power of combining your PM abilities with AI's knowledge – transcending your core skills to bring unique, cross-functional value that accelerates your team and your product's success.
So don't settle for the "Lazy AI PM" path. Embrace the "10x AI PM" mindset, and use AI's potential to truly augment and expand your skills.
Let me walk you through a few practical examples of how I've integrated AI into my workflows to expand my contributions to our team.
Innovating on Product Ideas or Features
Staying on Top of AI Research with ChatGPT
Throughout my career, I've worked with technologies that were either deeply technical and initially unfamiliar to me (like ASICs and complex hardware systems in my first startup) or so new that everyone was still figuring them out without any established patterns or best practices (like mobile technology in its early days, and AI/LLMs currently).
In each case, my initial step was to learn as much as possible about what the underlying technology could achieve. This would enable me to map these new capabilities to user problems. In particular, I always looked at what a new technology could do that couldn’t be done before. This angle lets you either solve problems in different, more effective ways, or solve problems that couldn’t be solved at all. That’s how you typically end up creating products that “feel magical”.
Specific to AI, the current pace of progress is unparalleled; every day brings a new research paper, a novel prompt strategy, or a fresh capability in areas such as LLMs for language understanding and generation, generative models for image creation or video production, and speech synthesis models for voice generation. Staying current with these advancements is a significant challenge, but it’s absolutely necessary.
So whenever I come across an interesting research paper, I first upload it to ChatGPT to obtain a detailed summary that goes beyond the paper’s abstract. This initial step helps me determine whether the paper warrants a full read based on my needs. If it does, I dive into it, relying on ChatGPT for clarifications on any complex sections or to explore how the paper’s insights might be leveraged in my work.
This approach not only streamlines my research process by filtering out less relevant papers but also spares my team’s researchers the effort of addressing my questions. When I uncover insights that could enhance our product, I share them with our development team, providing them with my notes and the paper for further examination. This system ensures we stay on the cutting edge without getting overwhelmed by the sheer volume of information.
To be clear, understanding what the technology is capable of is just the first step; the key is always starting with user experiences and working backwards to identify which technological component could offer a solution. Just because an LLM can summarize content doesn’t mean you have to put a summarization button in every single view of your app or offer an infinite list of ways to transform text in your note-taking app; just because a model can generate images doesn’t mean you have to offer image generation in your video conferencing product.
Today too many random AI features are added to incumbent products “just because the tech is there”. Echoing Steve Jobs, "You've got to start with the customer experience and work back toward the technology, not the other way around."
Assessing Technical Feasibility of Your Ideas
Sometimes the design process is the opposite: you start with an idea and you need to determine if its implementation is both feasible and user-friendly. A common scenario involves investigating whether one or more APIs exist to achieve your goals, understanding their capabilities, and determining the user experience required to integrate such an API into your product.
Typically you might ask your development team to conduct this research. You would wait for their feedback, pose additional questions, and start over if the proposed solution is not feasible. If you're familiar with API documentation, you could minimize some back-and-forth by consulting it directly. However, let's be realistic: aside from a few exceptions, API documentation tends to be disorganized and poorly written, containing an abundance of details that, while necessary for engineers, can slow down a product manager in finding answers to their questions.
I now turn to ChatGPT to ask questions about APIs all the time. For popular APIs, providing the API name and context is enough for ChatGPT to answer my questions; otherwise, I might share a link to the documentation for ChatGPT to review, or I directly paste the relevant content into it.
I then just ask all the questions that I would ask my development team and within minutes I have a very good sense of what can be done with that API and whether it meets my requirements. If it doesn't, I also leverage ChatGPT's knowledge to explore alternatives or workarounds, whether through different approaches or by modifying the feature.
While my engineering background offers some advantage, I’m confident that this type of interaction with ChatGPT can help any PM increase their contribution to the team and significantly boost team productivity. And let's be honest: saving engineering time and being technically prepared, rather than clueless, will significantly increase your ability to contribute and ultimately your credibility as a PM 😉.
Bringing Product Ideas to Life with AI
With ChatGPT, you can take your creative process to the next level.
I recently read about an AI mobile app that has become a revenue-generating powerhouse despite its cumbersome UX and relatively simple workflow. Users take a screenshot of a chat message or social media interaction, switch to the app to upload the screenshot, and the AI analyzes it to generate a response; users then copy this response back into the original app.
Curious, I wanted to know why they didn’t offer a more streamlined experience:
Users copy the message from within any app
Without leaving the app, users tap a custom “Reply” keyboard button, which:
Takes the text copied in the clipboard and any additional user input to generate a response through a LLM
Inserts the generated text into the app’s input field for the user to review and send.
I haven’t built mobile apps in 7-8 years, so I hadn’t kept up with the capabilities of mobile OSs. Following the process described above, I decided to turn to ChatGPT to assess feasibility or identify limitations.
I had so many questions. I knew you could build keyboard extensions and custom keyboards, but I didn’t know the nuances. I didn’t know the details of clipboard access, as I was sure there were several privacy restrictions. I also wanted to explore the possibility of letting the user take a screenshot of the message (“physically” simpler than selecting a message and copying it) and having the app access it directly.
With a few exchanges with ChatGPT I was able to get most of my questions answered, down to the lowest implementation details, privacy limitations, and user interaction options. Incredibly useful. However, ChatGPT also offered some incorrect information and suggestions. If you have some background in iOS and app development, these errors are easy to identify; just don’t blindly trust ChatGPT.
I then started to ask coding questions on how to build it, learning about the UIInputViewController and UIPasteboard classes, along with the textDocumentProxy property for inserting text into the current input field.
At that point I said to myself, "Why not try coding this on my own? It seems like a relatively simple app!” The problem was that I didn’t have Xcode installed on my new Mac, I hadn’t used it in years, and I had never set up a project or written code from scratch on my own. But then I thought, "Hey, if ChatGPT is turning everyone into developers, why not me?" So, I decided to dive in and give it a try.
It was a very fascinating experience. I was guided through the Xcode installation process (which can become cumbersome), advised on choosing between SwiftUI and UIKit by weighing their pros and cons, and assisted in configuring the entire project; this included skipping the “Add Core Data” option for the time being, adding a Git Repository (and fixing an error message related to it), learning how to link it with a GitHub repository and so on. Here I often directly uploaded screenshots of Xcode, whether of its user interface to figure out the next steps, or of project configuration details to decide what to select or enter. This approach proved incredibly effective, and soon enough, I was able to begin coding — or, to be more accurate, ChatGPT was doing the coding. But that's a story for another post.
This example shows how there are really no excuses for not validating your product ideas, allowing you to explore them as thoroughly as you wish.
Evaluating and Improving UI/UX
Building Prototypes for Exploring Your UX Concepts
Once you overcome the technical hurdles of building a feature, it’s fundamental to craft the smoothest user experience (UX) for your product. Here too, ChatGPT can be helpful.
Exploring a new UX concept often requires firsthand interaction to assess its usability. While you could delegate the prototype development to a team member, product managers understand (or should understand) the challenge of interrupting developers to create something when its feasibility and usability are uncertain. With ChatGPT you can take on this task yourself, even without much coding expertise.
Some time ago I wanted to add to our product an input field that combined editable text with a dropdown menu: options would appear on click, and once inserted they could be deleted like any other text. Since I couldn't find another product with this type of interaction to test its feasibility and usability, I decided to bypass the usual process of drafting specifications, assigning tasks, and worrying about potentially wasting developers' time and resources. Instead, I chose to build it myself using ChatGPT.
While I used to code, it has been years since I moved into product management, and in terms of web front-end development, I have barely built or edited my companies' websites. As such, I'm pretty sure anyone could achieve the same results I did in this experiment.
I first prompted ChatGPT with the details of the interaction I wanted to build, much like how I would describe it verbally to an engineer; nothing fancy at all. Then I asked ChatGPT to code it and to give me all the instructions to run the prototype in my browser. While I already knew how to do that, I wanted to verify the accuracy of the instructions (they were correct). The lesson learned here: don’t be afraid to ask.
Did the code work on the first attempt? No. But it was easy to make progress: “I got this error”, “It's behaving like this, but I expected it to do that”, “Can you change this?”. At the time, GPT-4V wasn't available yet, but now I would certainly use screenshots to illustrate the issues. In the future, I may even upload a quick video of the interaction to demonstrate the current status and request specific flow changes, just as if I were showing them to a teammate.
Through a few iterations with ChatGPT, I developed a fully functional prototype that not only allowed me to test the concept's feasibility but also to fine-tune its usability. Having a tangible prototype, including interactive elements and the underlying code, greatly simplified the process of explaining and transitioning this new UX concept to the engineering team.
I believe this hands-on approach streamlines the exploration of UX ideas, enabling quicker iterations and more effective communication of innovative designs.
Optimizing UI Copy for Better User Experience
I often turn to ChatGPT for brainstorming and fine-tuning UI copy. This includes everything from the labels on buttons and step-by-step onboarding guides to compelling call-to-action phrases, timely notifications, and clear error messages.
Not all tasks are suitable for ChatGPT, but understanding how LLMs function helps both in selecting the appropriate ones and in optimizing them to achieve specific goals. ChatGPT generates responses by predicting the most probable, contextually relevant continuation of a given prompt. This capability lets me obtain standard text that users might already be familiar with from similar software. Such familiarity is crucial in UX design, as it helps users quickly understand what to do. And if I want to tweak the copy to align with our app's style or branding, I take the original output and ask ChatGPT to make it more concise, funnier, punchier, or whatever fits.
When crafting your prompt, describe the UI element or interaction you need help with as precisely as possible and clearly define your objective for the copy. In some cases, uploading a screenshot of your current UI can be beneficial to ensure the new copy complements the existing style; just make sure to mention style consistency as a goal and voila!
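The checklist above (describe the element precisely, state the objective, mention style consistency) can be sketched as a small helper that assembles the prompt. The template wording, function name, and parameters are my own illustration, not a fixed recipe:

```python
def build_ui_copy_prompt(element: str, objective: str, style_notes: str = "") -> str:
    """Assemble a UI-copy prompt from the three ingredients described above.

    All arguments are placeholders you fill in for your own product;
    the template phrasing is just one possible way to word the request.
    """
    parts = [
        f"You are helping write UI copy. The element is: {element}.",
        f"Goal for the copy: {objective}.",
        "Propose 5 options, each under 40 characters, in plain language.",
    ]
    if style_notes:
        # Explicitly naming style consistency as a goal, per the advice above.
        parts.append(f"Match this style/branding and keep it consistent: {style_notes}.")
    return "\n".join(parts)

prompt = build_ui_copy_prompt(
    element="primary button that saves a draft email",
    objective="reassure users nothing is sent yet",
    style_notes="friendly, lowercase labels",
)
```

Paste the resulting prompt into ChatGPT (optionally with a screenshot of the current UI) and iterate from there.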
Getting Feedback on UI Design
In the startups where I've worked, UI design was often within my scope of responsibilities. Even when we had a dedicated designer, I always collaborated closely with them to guide the overall design direction and provide hands-on help.
I've found ChatGPT to be a valuable tool for obtaining initial feedback on UI designs. By simply uploading a screenshot, I can request feedback on various aspects such as the UI layout, color scheme, copy, and other areas for improvement. This early review process is invaluable before sharing the designs with the team or early users, as it helps bridge any gaps for a product manager turned designer often out of necessity.
However, it's worth noting that feedback from models like GPT-4V, the one I used, tends to be generally positive. This can give a misleading sense of satisfaction with your work. To get around this, I've learned to explicitly ask for more critical and straightforward feedback. For example, after receiving a glowing review for an unfinished design, I prompted ChatGPT to critique it as if Steve Jobs were reviewing it:
Be like Steve Jobs - I'm sure if he saw this, he would not be happy AT ALL
This approach yielded much more direct and useful feedback. Knowing the design was incomplete, the critique I received was expected yet still valuable, helping me refine the UI further.
Designing AI Products
Prototyping AI Features
Of course, ChatGPT is an indispensable tool for anyone working on an LLM-based AI product.
During this stage of AI development, we're all experimenting with various models and we often don't know what will work until we try it. ChatGPT has proven to be incredibly useful for quickly testing a wide range of AI features, especially for the intermediate steps involved in their development.
For example, I wanted to develop a feature where the language model would extract specific information from emails based on a provided schema and then generate a JSON output with the appropriate key-value pairs. I knew LLMs were quite capable of this parsing process, but I wanted to test it with actual emails I had received. I was also interested in the quality of the JSON output, given that everyone seems to struggle with ensuring its accuracy, often using tools like guardrails or Python's Pydantic library for structure and validation. However, I wondered whether all these measures were necessary in my case, given the relative simplicity of the output I wanted produced.
Technically, I could have asked my team to retrieve emails from my Gmail account, prepare them to be fed into the LLM, iterate on a few prompts, and finally display the desired output in a consumable format for me to review.
Instead I decided to replace that entire workflow by leveraging ChatGPT, significantly cutting down on both communication and development time for our team. I simply pasted an entire email including its headers into ChatGPT, I added my prompt, and I instantly obtained the result I was looking for. This hands-on method allowed me to iteratively refine the prompt, the schema format for the input, and the desired output format. I then tested several other emails to ensure the concept was robust and the outcomes were consistent. Additionally, I experimented with switching between different models directly in ChatGPT to explore potential cost savings and performance enhancements without compromising result quality. At the end of this process, I was confident that the idea was ready to be brought to engineering for actual implementation into our product.
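To make the "is heavy validation even necessary?" question concrete, here is a minimal sketch of the kind of check I had in mind: parse the model's JSON and verify a flat schema with nothing but the standard library. The schema fields and the sample model output are invented for illustration; real tools like guardrails or Pydantic add much more, but for a simple flat schema this may be all you need.

```python
import json

# Hypothetical schema for the email-parsing feature: field name -> expected type.
SCHEMA = {"sender": str, "invoice_number": str, "amount_due": float}

def validate_llm_json(raw: str, schema: dict) -> dict:
    """Parse the model's raw output and check it against a flat schema.

    A lightweight stand-in for heavier validation libraries: json.loads
    raises an error on malformed JSON, then each field's presence and
    type is checked explicitly.
    """
    data = json.loads(raw)
    for key, expected_type in schema.items():
        if key not in data:
            raise KeyError(f"missing field: {key}")
        if not isinstance(data[key], expected_type):
            raise TypeError(f"{key} should be {expected_type.__name__}")
    return data

# Example model output (fabricated for illustration):
raw_output = '{"sender": "billing@example.com", "invoice_number": "INV-42", "amount_due": 129.5}'
parsed = validate_llm_json(raw_output, SCHEMA)
```

If checks like these pass consistently across your test emails, that is evidence the heavier validation stack can wait.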
Depending on your scenario, you can easily adapt this workflow and enhance the entire team's productivity, while having direct control over the output of your feature.
For advanced prototyping, I also use the OpenAI Playground to select from a wider set of models, adjust model parameters for more precise control over interactions, and access in-depth model analytics. Initially, Playground was definitely my go-to tool for experimentation, and it has only gotten better over time. However, ChatGPT has largely taken over due to its convenience and versatility, and now I use Playground more as a second-level optimization after I get initial results from ChatGPT. From a practical standpoint, I also prefer ChatGPT's threaded history, missing in Playground, as it enables me to revisit and iterate on my experiments even days later.
Designing Effective AI Prompts
One of the most common tasks in AI product design is the crafting of effective prompts for language models, a.k.a “prompt engineering”.
As an engineer by education, I've always found the term "prompt engineering" slightly off. Engineering is grounded in applying mathematical and scientific principles to develop solutions that are both reliable and reproducible. As a designer, the term "prompt design" resonates more with me: design is about harnessing creativity and adopting a human-centered approach to craft solutions that are effective, intuitive, and engaging.
In the context of "prompt design," this means developing prompts that not only communicate effectively with AI systems, but also produce responses that are contextually appropriate, nuanced, and in tune with user expectations. This is why I believe that, while historically the task of prompting LLMs has been primarily the domain of engineers, it's a responsibility that product designers should start assuming: in a way, prompt design is the new UI design.
Can ChatGPT help product managers generate effective prompts?
A prompt is a structured input that guides the model in generating a specific output. It serves as an instruction or a cue, framing the context and specifying the desired form or content of the model's reply.
Based on this definition and how LLMs work, it becomes clear that ChatGPT is great at helping you generate well-structured and effective prompts. Moreover, ChatGPT itself has been trained on a wide range of examples and best practices on how to prompt LLMs, making even the vanilla model really useful at generating effective prompts.
If you have already developed your own prompting style and approach, or discovered particular prompting techniques, you can build a Custom GPT that includes them in its knowledge base. I created a document that I constantly update with new tricks I learn or discover, and I use it to guide my prompt generator GPT.
Whether you're using the standard ChatGPT or a Custom GPT model, just ask it to create a prompt adhering to the best practices and rules of prompt design. Be sure to describe in detail the input for the prompt and the output you expect, including the precise format.
Once you are satisfied with the prompt generated, utilize a new ChatGPT thread — I like to start fresh to prevent any influence from prior chat contexts on subsequent outputs. Test the new prompt with this instance and observe if it yields the expected result. If not, return to the previous thread and iterate to address the issues you encountered.
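A meta-prompt of the kind described above can be sketched as a simple template: you describe the task, the input, and the exact output format, and ask the model to return only the generated prompt. The template text and the example values below are my own illustration, not a canonical recipe:

```python
# Hypothetical meta-prompt for asking ChatGPT to design a prompt for you.
META_PROMPT = """You are an expert prompt designer. Create a prompt that follows
prompt-design best practices (clear role, explicit constraints, output format).

Task the prompt must accomplish: {task}
Input the prompt will receive: {input_description}
Expected output, including the exact format: {output_description}

Return only the prompt text, nothing else."""

# Fill the template with a concrete (invented) example.
request = META_PROMPT.format(
    task="extract action items from meeting notes",
    input_description="raw meeting notes pasted as plain text",
    output_description="a JSON array of strings, one action item per string",
)
```

Send `request` to ChatGPT, take the prompt it returns, and test it in a fresh thread as described above.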
Generating Synthetic Data to Train and Test Language Models
Another responsibility I believe AI product managers should take on, or at least contribute to, is the training of models, particularly models like GPT (Generative Pre-trained Transformer). In application-layer training scenarios, whether through few-shot learning or fine-tuning, the key is to compile and curate the right set of training examples. Frankly, it's almost an art, and a well-crafted training set can have a major impact on the quality and accuracy of the model.
Training an AI model is similar to refining a product feature, where adjustments are made based on user feedback or data insights. In AI training, the model is “tuned” with training examples to correct errors and improve its accuracy. The objective in both scenarios is a superior user experience, whether through a more accurate model or a more effective feature. Both involve a cycle of testing, feedback, and adjustment, aiming for continual improvement and optimization.
As such, having PMs train the model also makes a lot of sense within the product development cycle. I assume that PMs, as the ones ultimately responsible for the product output, already test the model themselves, even if a QA team is available. Therefore, learning how to train it can significantly reduce turnaround time: when an issue is found, add or modify training examples, rebuild the model, and verify whether the changes have made an impact. It's really that simple.
To make this work efficiently, ask your development team to build a basic internal tool that allows you to modify the training set and rebuild the model. This can be as straightforward as a Google Sheet for the training examples and a button to rebuild the model; you also need a batch tool to evaluate the model against various test sets. Investing in such tools is well worth it.
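The core of such a batch evaluation tool is tiny. Here is a minimal sketch: `model_fn` stands in for whatever calls your model (an API wrapper, an internal endpoint), and the test set is a list of input/expected pairs. Everything here, including the deliberately trivial stub model, is invented for illustration:

```python
def evaluate(model_fn, test_set):
    """Run every test case through the model and report simple exact-match accuracy."""
    failures = []
    for case in test_set:
        output = model_fn(case["input"])
        if output != case["expected"]:
            failures.append({"case": case, "got": output})
    accuracy = 1 - len(failures) / len(test_set)
    return accuracy, failures

# Stub model for illustration: just uppercases its input.
stub_model = str.upper
tests = [
    {"input": "hello", "expected": "HELLO"},
    {"input": "world", "expected": "world"},  # deliberately failing case
]
accuracy, failures = evaluate(stub_model, tests)
```

The `failures` list is the valuable part: it tells you exactly which training examples to add or modify before the next rebuild.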
The problem now is how to create quality training examples. Since ChatGPT's job is to generate text, it should be no surprise that it can help you scale this effort with a proper prompt. Let's say you are working on a conversational interface for workflow automation. You can prompt ChatGPT with a list of apps (e.g. Gmail, Google Calendar, Stripe) and ask it to generate user queries representing workflows between those apps. This context helps ChatGPT narrow down the scope and use its knowledge to produce highly relevant examples in natural language. Additionally, providing a few examples of the format you have in mind, as well as requesting paraphrased versions, helps you cover the variety of ways users will phrase their queries.
Of course you can also scale this process programmatically, but only after you have refined the prompt with ChatGPT and validated the quality of the synthetic data it produces.
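Programmatic scaling can be as simple as generating one prompt per app pair and submitting them in a batch. This sketch builds those prompts from the app list in the example above; the template wording and the few-shot example inside it are my own invention, stand-ins for the prompt you have already refined and validated:

```python
import itertools

APPS = ["Gmail", "Google Calendar", "Stripe"]  # the app list from the example above

# Hypothetical generation prompt; replace with your refined, validated prompt.
PROMPT_TEMPLATE = (
    "Generate {n} realistic user queries describing a workflow that connects "
    "{source} to {target}. Vary the phrasing; return them as a numbered list.\n"
    "Example of the format I want:\n"
    "1. When I get a new payment, add an event to my calendar."
)

def build_generation_prompts(apps, n_per_pair=5):
    """One generation prompt per ordered app pair, ready for batch submission."""
    return [
        PROMPT_TEMPLATE.format(n=n_per_pair, source=source, target=target)
        for source, target in itertools.permutations(apps, 2)
    ]

prompts = build_generation_prompts(APPS)
# 3 apps -> 6 ordered pairs -> 6 prompts
```

Each prompt then goes to the model through whatever API access you have, and the responses are collected into your training set for review.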
How are you using AI to enhance your adjacent skills at work? Leave a comment for our community!