How to chunk large input without losing the context – API


Hello community,

I would like to send a long e-mail to the OpenAI API so that it can be improved. Currently I am splitting the text into chunks so that I don't exceed the token limit and get better performance.

My prompt looks like this (modified):

You are a Writing Assistant helping me to rewrite an e-mail. Remove spelling mistakes, improve grammar and simplify the text.

The e-mail:
Dear [Recipient's Name],

I hope this email finds you in good health and high spirits.
I wanted to take a moment to reach out and express my warmest regards to you. 
Recently, I've been thinking about our previous conversations and the valuable insights you shared. 
They have been immensely helpful, and I'm grateful for your guidance.

I'm looking forward to catching up soon and discussing our upcoming plans.
Until then, take care and stay safe.

Warm regards,
[Your Name]

After my chunking process, I end up with two prompts, e.g.:

You are a Writing Assistant helping me to rewrite an e-mail. Remove spelling mistakes, improve grammar and simplify the text.

The e-mail:
Dear [Recipient's Name],

I hope this email finds you in good health and high spirits. 
I wanted to take a moment to reach out and express my warmest regards to you. 
Recently, I've been thinking about our previous conversations and the valuable insights you shared. 
They have been immensely helpful, and I'm grateful for your guidance.
You are a Writing Assistant helping me to rewrite an e-mail. Remove spelling mistakes, improve grammar and simplify the text.

The e-mail:
I'm looking forward to catching up soon and discussing our upcoming plans.
Until then, take care and stay safe.

Warm regards,
[Your Name]

In both responses a new greeting and a new salutation are added, so simply combining the two results won't work.

I know the models are stateless, but is there a way not to lose the previous information/context?
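One common workaround is to tell the model explicitly that each chunk is only a fragment of a longer e-mail, and to pass the tail end of the previous chunk along as read-only context. Below is a minimal sketch of how the prompts could be built; all names (`build_chunk_prompts`, `INSTRUCTION`, the 500-character limit) are illustrative assumptions, and the actual API call is left out:

```python
# Sketch: build one prompt per chunk that (a) tells the model the chunk is
# a fragment of a longer e-mail, (b) forbids adding a greeting/sign-off,
# and (c) carries the tail of the previous chunk as read-only context.

INSTRUCTION = (
    "You are a Writing Assistant helping me to rewrite an e-mail. "
    "Remove spelling mistakes, improve grammar and simplify the text."
)

def split_paragraphs(text, max_chars=500):
    """Group paragraphs into chunks of at most max_chars characters.

    Splitting on paragraph boundaries avoids cutting sentences in half.
    (A single paragraph longer than max_chars still becomes one chunk.)
    """
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def build_chunk_prompts(text, max_chars=500):
    """Return one prompt string per chunk, each carrying fragment context."""
    chunks = split_paragraphs(text, max_chars)
    prompts = []
    for i, chunk in enumerate(chunks):
        parts = [
            INSTRUCTION,
            f"This is part {i + 1} of {len(chunks)} of one e-mail. "
            "Rewrite only this part. Do not add a greeting or sign-off "
            "unless this part already contains one.",
        ]
        if i > 0:
            # Last ~200 characters of the previous chunk, for context only.
            parts.append("Context (do not rewrite):\n..." + chunks[i - 1][-200:])
        parts.append("The e-mail part:\n" + chunk)
        prompts.append("\n\n".join(parts))
    return prompts
```

Each prompt would then be sent as its own request, and the rewritten parts concatenated in order. Because every prompt states which part it is and forbids new greetings/sign-offs, the model is much less likely to wrap each fragment as a complete e-mail.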

Thank you!
