I'm getting a snag error in the Prototyper (the previous one was about context size, and the solution was described elsewhere).
Now I get:
[GoogleGenerativeAI Error]: Error fetching from https://monospace-pa.googleapis.com/v1/models/gemini-2.5-pro-preview-03-25:streamGenerateContent?alt=sse: [400 Bad Request] Request contains an invalid argument.
I tried several things (including deleting the thread), a reset, /clear, etc.
Nothing helps…
Any clue? The Prototyper is useless now and I cannot continue my project.
Thanks
Hi @Edwin_Dingjan - this is a known issue. We are looking into this currently. Thanks for bearing with us as we work through this.
Hi Kirupa, any progress? I tested it this morning and still get the same error. Is there anything I can do?
Same issue here. I tried the solutions proposed on this forum, but they didn't work.
My project has stopped now because it doesn't do anything. Who can help?
Same here… almost a week with a stopped project.
Me too. My project has been on hold for almost a week.
Hi Kirupa,
Any news? As I only use the Prototyper, my whole project is stopped now. What can I do?
Could you, or would you like to, use my workspace for debugging the error?
Same problem here. Any news?
Same problem. Project stuck now for a week. No updates from support.
It seems like the Preview only has a limited number of AI tokens that do not reset. Once they're exhausted you can't use the Prototyper AI. I'm not sure if it's per project or in total. Can anyone confirm?
Heads-up: After pruning all oversized context files and attempting to restart the workspace, I now see
“Setting up workspace. Whoops… we are experiencing increased load and are spinning up a new VM for you. This may take several minutes. Check back soon.”
So the token-limit issue is addressed, but Firebase Studio currently can’t allocate a VM. I’m still working on it; if I manage to bring the workspace back online, I’ll report the details in the forum. 
What has been done so far
- Initial error
The Prototyper kept throwing
400 Bad Request – input token count (≈2,221,755) exceeds max (1,048,575).
That meant the combined size of every JSON context file in ~/.idx/ai/ was larger than the 1-MiB window of the default model (gemini-2.5-pro-preview).
- Size audit
Using du -b ~/.idx/ai/*.json ~/.idx/ai/threads/*.json, I found one thread file at 1.7 MB—big enough to exceed the limit by itself.
- Pruning the culprit
  - Backed it up to ~/studio/context_backup/.
  - Used jq 'walk(if type=="array" and length>30 then .[-30:] else . end)' to keep only the last 30–50 messages.
  - Moved any .bak files out of ~/.idx/ai/threads/.
After pruning, every JSON file was under 120 kB.
- Verification
Re-running the du -b … | sort -nr check confirmed no file exceeded ~120 kB, comfortably beneath the 1-MiB cap.
- Restart attempt
Closing/re-opening the Prototyper triggered a full workspace restart, and that’s when the high-load VM message appeared, blocking immediate access.
- Fallback/next steps
- If the VM starts, I'll test the Prototyper with a short prompt (ping).
- If the issue persists, I can use ⋮ → Reset workspace to wipe all hidden context files in one shot.
- Long-term, switching the blueprint to gemini-1.5-pro (2 M-token context) would provide more headroom—once Firebase Studio exposes that model.
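The size audit and verification steps above can be sketched as a small script. This is a minimal sketch, not my exact session: the scratch directory and file names below are made-up stand-ins for ~/.idx/ai/, and the 1 MiB byte threshold is only a rough proxy for the model's 1,048,575-token context cap (tokens and bytes are not the same thing).

```shell
#!/bin/sh
# Sketch of the `du -b … | sort -nr` audit from the steps above.
# Assumption: a temp directory stands in for ~/.idx/ai/; file sizes
# are fabricated (one oversized "thread" like the 1.7 MB culprit).
set -eu
LIMIT=1048576                         # 1 MiB, in bytes
DIR=$(mktemp -d)

# Simulate one oversized thread file and two small context files.
head -c 1700000 /dev/zero > "$DIR/thread-big.json"
head -c 50000   /dev/zero > "$DIR/settings.json"
head -c 20000   /dev/zero > "$DIR/thread-small.json"

# List the JSON files largest-first and flag anything over the limit.
du -b "$DIR"/*.json | sort -nr | while read -r size path; do
  if [ "$size" -gt "$LIMIT" ]; then
    echo "OVER  $size  $path"
  else
    echo "ok    $size  $path"
  fi
done
```

On a real workspace the same pipeline is just `du -b ~/.idx/ai/*.json ~/.idx/ai/threads/*.json | sort -nr`; any file flagged OVER is a candidate for the jq pruning step.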
I’ll keep trying to regain access and will post an update once I have a definitive fix.
The snag error persists… =/
Grrr, me too, although the error message changed.