Fugly code block

After 5 months it’s as clunky as ever.

That is ugly.

Why not concatenate the code blocks?

Could you do me a favor? The next time you see this, copy the full chat response (there is a copy icon you can click at the bottom right of the response) and paste it here.

k, system_prompt, user_prompt, max_tokens=2048, temperature=0.0)

            if translated_chunk is None:
                raise RuntimeError("Translation failed. Terminating program.")
            translated_chunk_path = os.path.join(translated_chunk_output_dir, f"{base_filename}_translated_part{i + 1}.txt")
            with open(translated_chunk_path, 'w', encoding='utf-8') as file:
                file.write(translated_chunk)
            final_translated_content += translated_chunk + '\n\n'
        write_translated_chunks(final_translated_content.split('\n\n'), output_file)
        print(f"Chunk directory: {chunk_output_dir}")
        print(f"Translated chunk directory: {translated_chunk_output_dir}")
        print(f"Final translated file: {output_file}")
    except Exception as e:
        print(f"An error occurred during processing: {e}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Translate a .txt file from Japanese to English using OpenAI API")
    parser.add_argument('-i', '--input_file', type=str, required=True, help="Path to the input .txt file")
    parser.add_argument('-o', '--output_dir', type=str, required=True, help="Directory to save the intermediate and translated .txt files")
    parser.add_argument('-p', '--prompt_file', type=str, required=True, help="Path to the prompt file")
    parser.add_argument('-f', '--output_file', type=str, required=True, help="Path to the final translated .txt file")
    parser.add_argument('-t', '--token_limit', type=int, default=800, help="Token limit for each chunk")
    args = parser.parse_args()

    main(args.input_file, args.output_dir, args.prompt_file, args.output_file, args.token_limit)

Summary of Changes

  • New txt_splitter.py: Created to handle reading, splitting, and writing .txt files based on a token limit.
  • Updated openai_translator.py: Modified to use txt_splitter.py for handling .txt files, including splitting, translating, and writing the translated chunks.

With these changes, the script should be able to read a .txt file, split it into chunks, translate each chunk, and write the translated chunks back to a single .txt file.
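The txt_splitter.py module itself is not shown in the excerpt above. As a rough illustration only, a splitter along the lines described could look like the sketch below; the function names, the paragraph-based packing, and the whitespace word count (a crude stand-in for a real tokenizer) are all assumptions, not the actual module:

```python
# Hypothetical sketch of txt_splitter.py: pack paragraphs into chunks
# whose approximate token count stays at or under a limit.
def split_text(text, token_limit=800):
    """Split `text` on blank lines, then greedily pack paragraphs into
    chunks of at most roughly `token_limit` tokens (whitespace words)."""
    chunks, current, current_tokens = [], [], 0
    for paragraph in text.split('\n\n'):
        tokens = len(paragraph.split())  # crude stand-in for a real tokenizer
        if current and current_tokens + tokens > token_limit:
            # Current chunk is full; flush it and start a new one.
            chunks.append('\n\n'.join(current))
            current, current_tokens = [], 0
        current.append(paragraph)
        current_tokens += tokens
    if current:
        chunks.append('\n\n'.join(current))
    return chunks

def write_translated_chunks(chunks, output_file):
    """Join translated chunks with blank lines and write the final file."""
    with open(output_file, 'w', encoding='utf-8') as f:
        f.write('\n\n'.join(chunks))
```

For accurate counts against the OpenAI API's limits, the word count would need to be replaced with a real tokenizer such as tiktoken.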

That is actually reversed in the GUI: the backticks surround the comment, and the code is rendered as plain text.

Also, copying is going to take a lot of time, since higher up each line of code that appears as text has its own copy icon next to it. That paste came from clicking the very bottom-right icon, which I thought would copy everything, but it only got that much.

Do you see this icon in the bottom right of chat? It should let you copy the whole thing.