The application worked very well at first.
Over time it should have become better and smarter, but for some reason it has only been taught to be more cunning.
Slow requests work very well on the free plan, but once the Pro plan's fast requests are used up, for some reason slow requests do not respond at all.
It is constantly pushing you to buy, and why did you remove the countdown timer for slow requests?
If requests are limited, why does it corrupt code it had written properly? Is it to make us send unnecessary requests?
claude-3-5-sonnet
It used to perform like a superior intelligence; now it produces nonsense.
The most interesting part is that the newer version is only good at design and terrible at writing working code. I have never seen it write code that runs properly. I generate the code with version 3.5 and use the newer version for design.
What I don’t understand is why an AI that previously gave good answers now confuses the things it already knows.
Isn’t 500 requests too few for an AI model that constantly writes complex code?
A higher-priority issue: chat history.
No chat history after switching to pro
Previously, my chats were visible in the history section, but now they are not.
With slow requests the wait times change all the time, and when a chat history is more than 2–4 days old there are crashes: the app closes on its own and starts from scratch.
Example: TIC-TAC-TOE
It builds the game: it creates the files, the structure, the logic, and sets everything up. But when an error occurs in the middle of the project, it forgets the old code structure and keeps writing new code, and if you don’t notice this, you end up in an error deadlock.
In simpler terms:
function familyName($fname, $year) {
    echo "$fname Refsnes. Born in $year <br>";
}
familyName("Hege", "1975");
Let’s say it used the above function in the project. After some time passes, when it needs that function again, it doesn’t reuse it; it creates a new one:
function Name($fname, $year) {
    echo "$fname Refsnes. Born in $year <br>";
}
Name("Hege", "1975");
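To make the point concrete, here is a minimal sketch of the intended behaviour, reusing the thread's hypothetical `familyName` example: when the function already exists in the project, later code should call it again rather than define a near-identical duplicate like `Name()`.

```php
<?php
// The function the AI defined earlier in the project
// (hypothetical example from this thread).
function familyName($fname, $year) {
    echo "$fname Refsnes. Born in $year <br>";
}

// Intended behaviour later in the session: call the existing
// function again instead of defining a duplicate with the
// same logic under a new name.
familyName("Hege", "1975");
```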
Hey, this is usually a combination of a long conversation and a lack of context!
While we try to do some magic behind the scenes so your conversation doesn’t hit a length limit where we can, the usual LLM issue of “forgetting” older messages does happen as the chat grows longer. So, with enough messages between the AI writing that function and the code that uses it, the AI will not know about it!
The best fix here would be to ensure you @ the relevant file as context for the AI when it needs to use it, as this ensures it has visibility of the code and, more specifically, the method it wrote already.
Additionally, it’s good practice to regularly start a new Composer session, both to ensure the conversation stays close to one topic where possible (mixing streams of work gets confusing for LLMs) but also so you can be sure the LLM sees all the relevant chat history.
I understand, thank you. I only use chat because I can’t get used to the Composer area, and I don’t use @ very much. What I want to emphasize is rare: when the chat history gets full, the application closes and reopens on its own. For example, with a two-week chat, the application slows down, then closes and restarts itself.