Implement a retrieval technique similar to the one used by Perplexity Pro Search: an LLM breaks the user's question down into multiple parts, generates and executes search queries for each part, and then all of the retrieved context is sent to a final LLM call to generate the response to the user.
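The flow described above can be sketched roughly as follows. This is only an illustration of the decompose-search-synthesize idea, not Perplexity's actual implementation; `call_llm` and `web_search` are hypothetical stand-ins that would wrap a real LLM API and a real search API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would call an LLM API.
    if prompt.startswith("Decompose"):
        return "What is X?\nHow does X relate to Y?"
    return "Synthesized answer based on the provided context."


def web_search(query: str) -> list[str]:
    # Hypothetical placeholder: a real implementation would call a search API.
    return [f"result snippet for: {query}"]


def answer(question: str) -> str:
    # Step 1: have an LLM break the question into sub-questions.
    decomposition = call_llm(f"Decompose into sub-questions:\n{question}")
    sub_questions = [q.strip() for q in decomposition.splitlines() if q.strip()]

    # Step 2: execute a search per sub-question and collect the results.
    context_chunks = []
    for sq in sub_questions:
        for snippet in web_search(sq):
            context_chunks.append(f"Sub-question: {sq}\n{snippet}")

    # Step 3: send all gathered context to a final LLM call for the answer.
    context = "\n\n".join(context_chunks)
    return call_llm(f"Context:\n{context}\n\nAnswer the question: {question}")
```

With real LLM and search backends plugged in, each sub-question gets its own retrieval pass before the final synthesis step sees the combined context.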