8 hours of screen recordings with GPT-5

Here are 8 hours of my screen recordings using GPT-5 within Cursor. There is just music in the background, no talking. GPT-5 is now my daily driver and I haven’t switched since. It is accurate, does what I ask, and doesn’t veer from the plan. My prompts with it are long, but I prefer making sure it knows exactly what I need.

The work in these videos takes place in a TypeScript frontend + backend monorepo, using only 1 Cursor instance. I am creating a CRM application, and the plan is that everything you can do in the UI, you will also be able to do through the chat interface like an agent.

I hope these videos help people and give them some examples of how to prompt GPT-5, as it is a bit different from the Claude models. The previous 17 parts, which use exclusively Claude models, can also be found on the channel.

Also thank you to the Cursor team for providing such an incredible experience which makes it so fun to create software! Keep going strong! :flexed_biceps::partying_face:

Frontend: React 19 SPA with Vite and MUI
Backend: Node.js Express with Zod, Prisma

P.S. I am currently unemployed, so if you are a team who use Cursor and need another engineer, please get in touch! I am UK based but I am happy to relocate or work remotely. Thank you!

Part 18
In this episode, I implement the [task] entity in my application and do some refactoring, ensuring [time] must now be linked to a task and that tasks show under projects. This contains a full implementation across backend and frontend; a sketch of the kind of schema involved follows the link.

https://www.youtube.com/watch?v=_n7ASmpVVqU
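
To give a rough idea of the entity work, here is a simplified sketch of the kind of Zod schemas this involves. The field names (projectId, taskId, etc.) are illustrative, not the exact code:

import { z } from "zod";

// Tasks now show under projects, so each task carries a project reference.
export const taskSchema = z.object({
  id: z.string().uuid(),
  projectId: z.string().uuid(),
  title: z.string().min(1),
  description: z.string().optional(),
});

// Time entries must now be linked to a task rather than floating free.
export const timeEntrySchema = z.object({
  id: z.string().uuid(),
  taskId: z.string().uuid(),
  startedAt: z.coerce.date(),
  endedAt: z.coerce.date().optional(),
});

export type Task = z.infer<typeof taskSchema>;
export type TimeEntry = z.infer<typeof timeEntrySchema>;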

Part 19
In this episode, I am implementing the new gpt-5, gpt-5-mini and gpt-5-nano models into the application. I have made the choice to support just these 3 models going forward when I do decide to release the software (either open-sourced or sold; I am not sure on this yet). A sketch of pinning the model list follows the link.

https://www.youtube.com/watch?v=L2Ys-jvtRPg
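
A simplified sketch of what pinning the app to just these three models can look like (the constant and schema names are illustrative, not the actual code):

import { z } from "zod";

// The only models the application will accept going forward.
export const SUPPORTED_MODELS = ["gpt-5", "gpt-5-mini", "gpt-5-nano"] as const;

export const modelSchema = z.enum(SUPPORTED_MODELS);
export type SupportedModel = z.infer<typeof modelSchema>;

// Example: validate a model name coming in from the frontend.
const model = modelSchema.parse("gpt-5-mini"); // throws on anything else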

Part 20
In this episode, I was refactoring how I’m using the OpenAI npm package. For a prototype MVP I am trying to complete as quickly as possible, it has too many abstractions, so the goal was to simplify (see the sketch after the link). We did some package research and also wrote documentation into md files.

https://www.youtube.com/watch?v=Em0AbdZwi4o
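
The direction of the refactor, in sketch form: call the openai package directly instead of going through layers of wrappers. This is illustrative rather than the actual code:

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// One thin helper instead of a stack of abstractions.
export async function ask(prompt: string): Promise<string> {
  const response = await client.responses.create({
    model: "gpt-5-mini",
    input: prompt,
  });
  return response.output_text;
}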

Part 21
This part was spent fixing a bug where the link to the OpenAI call wasn’t showing correctly from the user and assistant messages on the frontend. A lot of refactoring and bug fixing in this episode, which is always fun and satisfying to do!

https://www.youtube.com/watch?v=1hyO4wnATjM

Part 22
This episode was very cool. I needed to start adding GPT-5 reasoning functionality to my application, so I had Cursor go off and do some research on the [openai] npm package so it knew exactly what it needed to have in context to answer the questions.
We also fixed a security issue where secrets could be shown back in the UI, when they should only be editable there and only ever decrypted for backend purposes (a sketch of that fix follows the link).

https://www.youtube.com/watch?v=515OjKiu71Q
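
A sketch of the shape of that security fix (field names are illustrative): the API only ever returns a masked value, and decryption stays in backend-only code paths.

type StoredSecret = { id: string; name: string; encryptedValue: string };
type SecretResponse = { id: string; name: string; value: string };

// Serializer for anything sent to the UI: the secret is never decrypted
// here, only masked. Editing submits a new value; it is never read back.
export function toSecretResponse(secret: StoredSecret): SecretResponse {
  return {
    id: secret.id,
    name: secret.name,
    value: "********",
  };
}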

Part 23
This episode was also very cool. We implemented reasoning summaries all the way down to the React application (a sketch follows the link). I was very impressed with this work by GPT-5.

https://www.youtube.com/watch?v=WeizYA7_v5U
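
A sketch of what requesting reasoning summaries with the openai package looks like, with the summary text then handed to the React app to render (simplified, not the exact code):

import OpenAI from "openai";

const client = new OpenAI();

export async function getAnswerWithSummary(prompt: string) {
  const response = await client.responses.create({
    model: "gpt-5",
    input: prompt,
    reasoning: { effort: "medium", summary: "auto" },
  });
  // Reasoning items carry summary parts alongside the normal output.
  const summary = response.output.flatMap((item) =>
    item.type === "reasoning" ? item.summary.map((part) => part.text) : []
  );
  return { answer: response.output_text, summary };
}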

Part 24
More package research. We are figuring out how we pass the encrypted reasoning items back in the OpenAI chain, as we are not using stateful requests (not providing a previous response ID, because eventually we want to do a lot of context manipulation). The sketch after the link shows the pattern.

https://www.youtube.com/watch?v=FfFQ4jbYxUg
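
In sketch form, the stateless pattern looks roughly like this: ask the API to include the encrypted reasoning content, keep all output items ourselves, and replay them as input on the next turn (simplified; not the exact code):

import OpenAI from "openai";

const client = new OpenAI();

export async function statelessTurn(
  history: OpenAI.Responses.ResponseInputItem[],
  userText: string
) {
  const input = [...history, { role: "user" as const, content: userText }];
  const response = await client.responses.create({
    model: "gpt-5",
    store: false, // stateless: nothing is kept server-side between calls
    include: ["reasoning.encrypted_content"],
    input,
  });
  // The output contains reasoning items with encrypted_content, which must be
  // replayed verbatim on the next request to keep the reasoning chain intact.
  return [...input, ...(response.output as OpenAI.Responses.ResponseInputItem[])];
}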

Part 25
More work on passing the reasoning tokens back to OpenAI when we make the API call. This was completed in this episode.

https://www.youtube.com/watch?v=f1c4uCFPANk

Part 26
In this episode, we added the ability for a chat to be a “project chat”. We added a nullable project_id column to the chat table. If a chat has a project_id, it is scoped to all related items from that project, such as tasks, clients, contracts, notes and tags, which will be available in the context ready for when we add tools, or if the user simply wants to have a conversation to see where things are (an illustrative query follows the link).

https://www.youtube.com/watch?v=1D5tUPXH1Vs
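
An illustrative Prisma query for that scoping (the model and relation names below are guesses based on the description, not the real schema):

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function getProjectChatContext(chatId: string) {
  const chat = await prisma.chat.findUniqueOrThrow({
    where: { id: chatId },
    include: {
      project: {
        include: { tasks: true, clients: true, contracts: true, notes: true, tags: true },
      },
    },
  });
  // A chat without a project_id has project: null, so no scoping applies.
  return chat.project;
}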

Part 27
Finally getting where I want to be! We are now working on injecting project-based context into the project chats, and it seems to be working very well so far. I passed Cursor details from the GPT-5 prompting guide and we made some adjustments (a sketch of the injection follows the link).

https://www.youtube.com/watch?v=AP48wrPWIGI
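
One simple way to inject that context, in sketch form: serialize the scoped project data into a system-style message placed ahead of the conversation. The shape below is illustrative:

type ProjectContext = {
  name: string;
  tasks: { title: string; status: string }[];
  notes: { body: string }[];
};

// Builds the context block that gets prepended to the chat's message list.
export function buildProjectContextPrompt(ctx: ProjectContext): string {
  return [
    `You are assisting inside the project "${ctx.name}".`,
    "Tasks:",
    ...ctx.tasks.map((t) => `- [${t.status}] ${t.title}`),
    "Notes:",
    ...ctx.notes.map((n) => `- ${n.body}`),
  ].join("\n");
}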

Here are the package.json files for both frontend and backend so you can get a feel for the technology I’m working with.
Here is the frontend package.json:

{
  "name": "frontend",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "tsc -b && vite build",
    "lint": "eslint .",
    "preview": "vite preview"
  },
  "dependencies": {
    "@emotion/react": "^11.14.0",
    "@emotion/styled": "^11.14.0",
    "@fontsource/roboto": "^5.2.6",
    "@mui/icons-material": "^7.1.1",
    "@mui/lab": "^7.0.0-beta.13",
    "@mui/material": "^7.1.1",
    "@mui/x-data-grid": "^8.5.3",
    "@mui/x-date-pickers": "^8.9.2",
    "@types/react-router-dom": "^5.3.3",
    "@types/react-syntax-highlighter": "^15.5.13",
    "dayjs": "^1.11.13",
    "prism-react-renderer": "^2.4.1",
    "react": "^19.1.0",
    "react-dom": "^19.1.0",
    "react-markdown": "^10.1.0",
    "react-router-dom": "^7.6.2",
    "react-syntax-highlighter": "^15.6.1",
    "rehype-highlight": "^7.0.2",
    "remark-gfm": "^4.0.1",
    "socket.io-client": "^4.8.1"
  },
  "devDependencies": {
    "@eslint/js": "^9.25.0",
    "@types/react": "^19.1.2",
    "@types/react-dom": "^19.1.2",
    "@vitejs/plugin-react": "^4.4.1",
    "eslint": "^9.25.0",
    "eslint-plugin-react-hooks": "^5.2.0",
    "eslint-plugin-react-refresh": "^0.4.19",
    "globals": "^16.0.0",
    "typescript": "~5.8.3",
    "typescript-eslint": "^8.30.1",
    "vite": "^6.3.5"
  }
}

Here is the backend package.json:

{
  "name": "backend",
  "version": "1.0.0",
  "description": "TypeScript Express API Server",
  "license": "ISC",
  "author": "",
  "type": "commonjs",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx watch src/index.ts",
    "dev:nodemon": "nodemon --exec tsx src/index.ts",
    "clean": "rm -rf dist",
    "type-check": "tsc --noEmit",
    "lint": "echo 'Linting not configured yet'",
    "test": "echo \"Error: no test specified\" && exit 1",
    "db:pull": "prisma db pull",
    "db:generate": "prisma generate",
    "db:studio": "prisma studio"
  },
  "keywords": [
    "express",
    "typescript",
    "api",
    "backend"
  ],
  "dependencies": {
    "@prisma/client": "^6.11.0",
    "@types/cookie-parser": "^1.4.9",
    "@types/jsonwebtoken": "^9.0.10",
    "argon2": "^0.43.0",
    "cookie-parser": "^1.4.7",
    "cors": "^2.8.5",
    "dotenv": "^17.0.1",
    "express": "^5.1.0",
    "helmet": "^8.1.0",
    "jsonwebtoken": "^9.0.2",
    "openai": "^5.12.2",
    "prisma": "^6.11.0",
    "socket.io": "^4.8.1",
    "zod": "^3.25.67"
  },
  "devDependencies": {
    "@types/cors": "^2.8.19",
    "@types/express": "^5.0.3",
    "@types/node": "^24.0.10",
    "concurrently": "^9.2.0",
    "nodemon": "^3.1.10",
    "ts-node": "^10.9.2",
    "tsx": "^4.20.3",
    "typescript": "^5.8.3"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}

Amazing, you’re very talented.
Thank you for the wonderful training material.


I’m trying to understand from your videos how you decide when to use which model, but I can’t figure it out.
You use GPT-5 a lot, the weaker and of course cheaper one.
But from the tests I did, even with external tools, although you seem to save money, in the long run you actually lose it,
because you get code that’s lower-quality and less polished.

It would be nice if you could write a little about how you choose which model to use.
What did you learn from this?
For me, written material like that would help more than just a video where I watch you code, even though that’s nice, and the music is very beautiful.

I also notice that you spend time finding the components you want the model to work on.
I never bother with that; I just let it find the relevant files by itself, and it does a pretty good job.

And I use the expensive model.
You use a model that’s either free or almost free.
Why do you go to all that trouble?

Sorry if these questions are off-topic; I’m just very curious.
I hope you find a really good job soon.
Whoever hires you will surely gain a very successful employee.


Thank you for taking the time to watch, I appreciate it :smiley:

For this specific set of videos in the series, I selected GPT-5 on purpose, to test run it and showcase its strengths and weaknesses. In some of the videos I was using [gpt-5-fast], and in the latest videos I switched to [gpt-5-high-fast]. For me personally, as the codebase has significant patterns already in place, GPT-5 has produced code I am more than happy with; it is quite hard to veer from the path already set. Sometimes I will ask it to go back and refactor once we have implemented the main business logic, but most of the time I am happy with it.

In regards to passing context and files, I like to give the model as much of a chance as possible, and when I provide as much as I know I can, I always get better results. But I do encourage it to go off and use the reading/searching tools to find more based on what I’ve provided or what it deems necessary. I would rather spend the extra time giving the context and have a higher chance of success.

Previously, I used Claude Sonnet 4, mainly in non-MAX mode. This got me very far, but I started to notice it would not follow the specification I laid out 100%, and would start to add more features or veer off path. With GPT-5 models, I haven’t noticed this once - it is very good at following instructions.

GPT-5 also feels different in a good way that I can’t put my finger on - it has a different personality :smiley:

And nope, nothing is off-topic, feel free to ask anything, I am more than happy to answer!


What you’re saying is really fascinating, because I came to the same conclusions.
GPT-5 is for programmers!
Someone who knows what they want and doesn’t want the model to touch code they didn’t ask about.

Sonnet is for someone who doesn’t understand much about it and wants the model to also handle all the things they might forget to ask for.
For me, as a programmer, it’s sometimes frustrating when it makes a lot of changes I never asked for.

Do you know that officially FAST is the same model, just more expensive?

That’s interesting too. I always weigh whether to give it everything up front (which raises the cost for me) or to let it search only for what it feels is “necessary.”

Last question:
What is actually your conclusion about working with TS and React?
And what are the differences between the different versions of GPT-5?


I like to use the fast model. I don’t think I will hit the limit of my plan based on projected usage, so I am happy to pay more for the extra speed.

Between [gpt-5-fast] and [gpt-5-high-fast] I honestly haven’t noticed any difference at all. The work I’m doing isn’t particularly hard for Cursor to complete. I know I’m in good hands with these models.

I think TypeScript works great with Cursor. The LSP and linter integrate nicely for picking up errors, and there are patterns everywhere for the LLM to follow.

I do prefer Angular over React, and frontend isn’t my strong area, but I chose React because there is so much more training data for it, and I find that with an opinionated UI library like Material UI, the code is always spot on. I don’t think I’ve manually written a line of frontend code on this project, which is wild to say! That is the benefit of working in a full TypeScript monorepo: I can give context from the backend schema and the frontend follows very nicely.

Here is a post from a while back explaining a lot more, which you may find useful:


Thank you for the detailed answer. I understand you’re on the Ultra plan, fascinating!
I’ll take a look at the second post.
