
How to turbocharge your product and market research with DeepSearch

One of the key areas where AI companies are competing is “Deep Research”: you give an AI agent a task, it searches the web for relevant information, and it returns a detailed report that would have taken you hours to compile yourself.

Currently, OpenAI, Grok, Perplexity, and Gemini provide Deep Research tools (there are also open-source platforms like Open Deep Search, as well as research-focused products such as Manus AI). Used well, they can become a superpower for product managers doing product and market research.

How Deep Research works

Unlike normal queries to AI tools like ChatGPT, where you expect an instant response from the model, Deep Research takes a prompt and does a lengthy probe that can take several minutes, going through web pages, reasoning over their content and gradually curating a detailed report with links to sources.
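If you want to see what that shape of interaction looks like programmatically rather than in a chat interface, here is a minimal Python sketch using the OpenAI SDK. The model name, tool identifier, and background/polling parameters are assumptions for illustration, not a definitive recipe; check the current API documentation for exact values.

```python
# Minimal sketch: submit a long-running research task and poll for the report.
# Model name, tool type, and background-mode parameters are assumptions.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.responses.create(
    model="o4-mini-deep-research",           # assumed model identifier
    input="Research the current landscape of <your topic> and summarize "
          "existing solutions, pricing, and unaddressed pain points, with sources.",
    tools=[{"type": "web_search_preview"}],  # allow the agent to browse the web
    background=True,                         # run asynchronously; may take minutes
)

# Poll until the agent has finished browsing, reasoning, and writing the report.
while response.status in ("queued", "in_progress"):
    time.sleep(30)
    response = client.responses.retrieve(response.id)

print(response.output_text)  # the final report, with cited sources
```

The point is the shape of the workflow: a single prompt kicks off a multi-minute agentic run rather than an instant completion.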

The more detailed your prompt to a Deep Research tool, the better your chances of getting the results you want. Moreover, with newer reasoning models such as OpenAI o3-mini and Gemini 2.5 Pro, you can continue the conversation with deeper follow-up questions.

How I use Deep Research in product work

I have run several experiments to figure out how to get the most out of Deep Research for product and market research. Here is the formula that works best for me:

1- I start with a problem statement. I run it by a large language model (LLM) such as ChatGPT or Gemini to turn it into a “jobs to be done” (JTBD) statement.

2- I give the JTBD statement to Deep Research and ask it to research the existing solutions to the problem and the pain points those solutions have not yet addressed.

It usually returns a very detailed answer that contains the kind of information that would take me hours to gather. 

I usually iterate on the answer one more time with a reasoning model (e.g., o3-mini-high) to create a final table that compares the existing solutions. 
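If you prefer to script step 1 rather than paste the statement into a chat window, a call like the following works with any capable chat or reasoning model. This is a sketch under assumptions: the model identifier below is a placeholder (in the example later in this post I used OpenAI o1 through the ChatGPT interface).

```python
# Step 1 of the workflow: turn a raw problem statement into a JTBD statement.
# The model name is an assumption; any capable chat or reasoning model will do.
from openai import OpenAI

client = OpenAI()

problem_statement = """<paste your raw problem statement here>"""

jtbd = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": "Rewrite the following problem statement as a single "
                       "'jobs to be done' (JTBD) statement of the form "
                       "'When I..., I want..., so I can...':\n\n"
                       + problem_statement,
        },
    ],
).choices[0].message.content

print(jtbd)
# Paste the JTBD statement, along with the research questions from step 2,
# into your Deep Research tool of choice.
```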

Here’s an example:

I started with the following statement:

“Right now, there are a lot of different LLMs that can do various tasks. Even a single LLM can do multiple tasks when prompted in different ways. Currently, when I want to do a multi-step task that requires different skills, I have created different prompt templates for each skill. I enter my request into the first template and submit it to the model of choice. Then I copy-paste the output into the next prompt template and send it to a new chat session (or another model). This solves my problem but is not very user-friendly. I’m thinking about creating a no-code platform that enables you to create custom prompt pipelines that allows you to create and connect different prompt templates. You should be able to provide custom instructions for each step of the pipeline and adjust different settings, such as which model it will use as well as more advanced settings such as temperature and output format. It will have a user interface and a toolbox that allows you to drag and drop different templates or create your own. You should also be able to bring in resources such as LLMs and custom data, which you can feed to your models. You should be able to save your pipeline and load it as an application. The goal is to enable product managers and developers to easily create prototypes for LLM applications without the need for extensive coding.”

I prompted OpenAI o1 to turn it into a JTBD statement, which gave me the following:

“When I need to build or experiment with a multi-step LLM workflow, I want a no-code platform that lets me visually create and connect different prompt templates, configure model settings, and integrate custom data, so I can quickly prototype LLM applications without writing code or manually shuffling outputs between models.”

And then I gave the JTBD statement to OpenAI Deep Research with the following instructions:

1- What solutions currently exist for this problem

2- What are some of the potential pain points for PMs that a new product can address

Interestingly, before starting its research, it asked me four clarifying questions, which I found very relevant. After I answered them, it worked for 11 minutes and came back with a very detailed report on different no-code LLM tools for startup and enterprise applications.

Finally, I used o3-mini-high to summarize the key features of the solutions into a table. I still spent several hours going through the analysis and the sources the model had cited, and I had to play around with some of the tools it had found that were new to me. But it performed crucial work that would have easily taken me several working days. At the very least, I learned that the problem I had been facing had already been solved in some ways, and that if I wanted to pursue a product idea, I would have to find a new angle.
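If you have the report saved as text, the table-building step can also be scripted with the same kind of call. Again, the model identifier below is an assumption (I used o3-mini-high in the ChatGPT interface), so treat this as a sketch rather than the exact setup.

```python
# Condense a saved Deep Research report into a comparison table.
# The model identifier is an assumption; any reasoning-capable model will do.
from openai import OpenAI

client = OpenAI()

with open("deep_research_report.md") as f:  # the report exported from the tool
    report = f.read()

table = client.chat.completions.create(
    model="o3-mini",  # assumed API identifier for the reasoning model
    messages=[
        {
            "role": "user",
            "content": "From the research report below, build a Markdown table "
                       "comparing the existing solutions. Columns: product, "
                       "target user, key features, pricing (if mentioned), and "
                       "pain points left unaddressed.\n\n" + report,
        },
    ],
).choices[0].message.content

print(table)
```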

You can see the full Deep Research chat here.

What to expect in the future

Looking ahead, the capabilities of these Deep Research tools are only set to expand. We can anticipate faster processing times, more nuanced reasoning, and potentially tighter integrations with product management and data analysis platforms. 

Imagine AI agents not just gathering information, but proactively identifying trends or suggesting research avenues based on ongoing product goals. For example, once you start a Deep Research task, you could instruct the agent to periodically scan the web for new trends and report back with updates through email or Slack.
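You could already approximate this yourself with a scheduled script: periodically re-run a web-search-enabled query and push the summary to a Slack incoming webhook. The sketch below is hypothetical; the model name, tool identifier, and webhook URL are placeholder assumptions.

```python
# Hypothetical sketch: periodically re-run a research query and post to Slack.
# Model name, tool identifier, and the webhook URL are placeholder assumptions.
import time
import requests
from openai import OpenAI

client = OpenAI()
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # replace

QUERY = (
    "What new no-code LLM prompt-pipeline tools or notable feature launches "
    "appeared in the last week? Summarize in a few bullets with links."
)

def run_scan() -> str:
    # Assumed model/tool identifiers; any web-search-capable model would do.
    response = client.responses.create(
        model="gpt-4o",
        input=QUERY,
        tools=[{"type": "web_search_preview"}],
    )
    return response.output_text

while True:
    summary = run_scan()
    requests.post(SLACK_WEBHOOK_URL, json={"text": summary}, timeout=30)
    time.sleep(7 * 24 * 60 * 60)  # wait a week before the next scan
```

In practice you would run this from a cron job or a workflow scheduler rather than a sleep loop, but the shape of the idea is the same.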

As these tools mature, they promise to become even more indispensable partners in the product development lifecycle, moving beyond ad-hoc research tasks to become embedded strategic assets.

If you’re interested in learning to create LLM applications, GoPractice has a fantastic GenAI Simulator course that gives you the perfect framework to think about generative AI and what kinds of problems you can solve with it. If you want to learn more about ML product management in general, you can try their broader AI/ML Simulator for PM course. I highly recommend both courses.

If you want to develop an LLM application for your organization but don’t know where to start, contact me.
