Implement a retrieval technique similar to the one used in Perplexity Pro Search: have an LLM break the user's question down into multiple sub-questions, generate and execute search queries for each one, then send all of the retrieved context to a final LLM to generate the response to the user.
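For reference, a minimal sketch of that pipeline. The `llm` and `web_search` functions here are hypothetical placeholders (canned responses so the example runs standalone); in a real implementation they would call an actual LLM API and a search provider. Only the `answer` function carries the actual decompose-search-synthesize flow.

```python
def llm(prompt: str) -> str:
    # Placeholder LLM (hypothetical): echoes a canned decomposition or a
    # canned answer so the pipeline runs without external services.
    if prompt.startswith("Decompose"):
        return "What is X?\nHow does X relate to Y?"
    return "Synthesized answer based on: " + prompt[:60]

def web_search(query: str) -> list[str]:
    # Placeholder search (hypothetical): returns fake snippets for a query.
    return [f"snippet about '{query}' #{i}" for i in range(2)]

def answer(question: str) -> str:
    # 1. Ask the LLM to break the question into sub-questions.
    sub_questions = llm(f"Decompose into sub-questions:\n{question}").splitlines()

    # 2. Generate and run a search for each sub-question; collect snippets
    #    tagged with the sub-question they came from.
    context = []
    for sq in sub_questions:
        for snippet in web_search(sq):
            context.append(f"[{sq}] {snippet}")

    # 3. Send all accumulated context plus the original question to the
    #    final LLM to synthesize a grounded response.
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)

print(answer("How does X affect Y?"))
```

The sub-question searches are independent, so in practice they could be issued concurrently before the final synthesis call.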