Error: Sorry, I hit a snag. Please try again shortly or modify your prompt

I can’t use the Gemini prototyper; no matter what I give it, this is the consistent response I get…

[GoogleGenerativeAI Error]: Error fetching from https://monospace-pa.googleapis.com/v1/models/gemini-2.5-pro:streamGenerateContent?alt=sse: [400 Bad Request] The input token count (1049725) exceeds the maximum number of tokens allowed (1048576).

3 Likes

This error means you’re sending too much text (1,049,725 tokens) to the Gemini 2.5 Pro model, which has a hard limit of 1,048,576 tokens per request. Tokens are chunks of text (words, punctuation, or subwords) that the AI processes. To fix it, shorten your input — remove unnecessary details, split long prompts into smaller parts, or summarize content — so it stays under the token limit.
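As a rough pre-check, you can estimate whether text will fit before sending it. This is a sketch using the common ~4-characters-per-token heuristic for English text, not the real tokenizer (the API’s `countTokens` method gives exact counts):

```javascript
// Heuristic token estimate -- a sketch, not the official tokenizer.
const GEMINI_25_PRO_LIMIT = 1048576; // max input tokens per request, per the error above

function estimateTokens(text) {
  // Rough rule of thumb: one token per ~4 characters of English text.
  return Math.ceil(text.length / 4);
}

function fitsInContext(text, limit = GEMINI_25_PRO_LIMIT) {
  return estimateTokens(text) <= limit;
}

// Example: a 5-million-character project dump would be rejected.
console.log(fitsInContext("a".repeat(5000000))); // false
```

If the estimate is anywhere near the limit, trim the input before submitting, since the heuristic can be off in either direction.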

1 Like

Even a one-word prompt shows this error. The prototyper never executes anything.

Have a look at the file(s) you’re working on and their sizes (line and character counts), and consider whether a refactor is in order. I had a JS file that was getting pretty big, and single-line prompts against that file bogged it down; I would get errors and sometimes no response at all.
I used Gemini directly, outside of FBS, and had it refactor my single JS file into many smaller, modular files, linking everything back together. I copy-pasted the result into FBS and used the prompt Gemini wrote for me to tell FBS that I had refactored my initial file and how the new setup was organized.

Smooth sailing for me since then. YMMV

Hi @Banda, I’ve just posted something that might help here. Please give it a try and let me know if it helps.

1 Like

This didn’t work; it produced TypeScript warnings. Gemini in the IDX environment is okay, but the issue is that Gemini in the prototyper environment is not executing any prompt it’s given.

In an extreme case, while looking for a solution online, I ended up with a method that refreshed the Studio AI’s memory, forcing it to re-prototype a production-grade app. This is very disappointing; I was forced to lose months of work and logic.

After applying the solution above, I refreshed and started the VM, then gave the prompt below to see if it’s working:

“Internal Diagnostic: List the top 5 most complex files in my project by line count. For the longest file, summarize the logic between lines 400 and 550. If you can see this, confirm your current maxOutputTokens setting.”

Gemini prototyper response

“Sorry, I hit a snag. Please try again shortly or modify your prompt.

[GoogleGenerativeAI Error]: Error fetching from https://monospace-pa.googleapis.com/v1/models/gemini-2.5-pro:streamGenerateContent?alt=sse: [400 Bad Request] The input token count (1187987) exceeds the maximum number of tokens allowed (1048576)”.

Please share any alternative solutions…

When you say “This didn’t work, it had typescript warnings,” do you mean the settings.json file itself was showing the TypeScript warnings? It’s possible you were missing a comma or had some other small syntax error, which would have prevented the setting from being applied and therefore kept it from working.

Take a closer look at the settings.json file for accuracy. If the file already existed you need to merge in the new setting. For example…

{
    "editor.renderWhitespace": "all",
    "editor.fontSize": 14,
    "IDX.aI.actionsInChatMaxOutputTokens": 32000
}

In the above example, there is a comma after the first two settings, but not after the last one. If you added this new setting, perhaps you missed adding a comma to the setting that was previously last, since it wouldn’t have had one before?

Or, perhaps you added the setting this way? This won’t work, and will also give a syntax error.

// This is invalid, don't use
{
    "editor.renderWhitespace": "all",
    "editor.fontSize": 14
},
{
  "IDX.aI.actionsInChatMaxOutputTokens": 32000
}

Please let me know whether the TypeScript error you mentioned was related to the settings.json file itself or to something else.

3 Likes

I assume you wrote code yourself, or cloned code file(s), exceeding Gemini’s maximum token size, without the help of the prototyper. That’s why Gemini is treating those files as part of your prompt and producing this error.

1 Like