Firebase Studio: token limit exceeded in Prototyper

Is anyone else seeing error messages like the following in the Prototyper:

[GoogleGenerativeAI Error]: Error fetching from https://monospace-pa.googleapis.com/v1/models/gemini-2.5-pro-preview-03-25:streamGenerateContent?alt=sse: [400 Bad Request] The input token count (1049140) exceeds the maximum number of tokens allowed (1048575).

I’ve searched my code for any Gemini usage that calls streamGenerateContent (there is none), opened a support ticket with Google (I’m on a Blaze plan), reset my workspace a few times now, and used Gemini from the “Code” view to try to help me solve this. Here is where I’m at today (this has been going on for a few days now):

Since a workspace reset didn’t fix the Prototyper’s behavior of using an old model and endpoint, this might be a bug or a persistent caching issue within the Firebase Studio Prototyper environment itself. You may need to report this through Firebase support channels if the Prototyper continues to ignore your project’s actual AI model configuration for its UI-driven AI interactions.

For now, the evidence points to the Prototyper UI having its own distinct behavior for AI calls that is not currently aligning with your project’s explicit configuration, and those Prototyper-initiated calls are the ones hitting the token limit with the old model.
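
In case it helps anyone compare numbers: a prompt’s token count can be checked outside the Prototyper with the countTokens method in the @google/generative-ai SDK (the same library named in the error). This is only a minimal sketch; the environment variable and model name below are placeholders, not anything from my actual project:

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Placeholder: assumes a Gemini API key is available in the environment.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Illustrative model name; substitute whichever model your project targets.
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

async function checkPromptSize(prompt: string): Promise<number> {
  // countTokens reports how many input tokens the prompt would consume,
  // without actually running generation.
  const { totalTokens } = await model.countTokens(prompt);
  console.log(`Prompt is ${totalTokens} tokens`);
  return totalTokens;
}

checkPromptSize("Hello, roughly how many tokens is this prompt?").catch(console.error);
```

Nothing I can send this way comes anywhere close to the ~1M-token input that shows up in the Prototyper’s error, which makes it all the more puzzling.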

3 Likes

Did you solve yours? I have the same error today too

[GoogleGenerativeAI Error]: Error fetching from https://monospace-pa.googleapis.com/v1/models/gemini-2.5-pro-preview-03-25:streamGenerateContent?alt=sse: [400 Bad Request] The input token count (1049462) exceeds the maximum number of tokens allowed (1048576).

1 Like

I haven’t, and I’m still waiting on support. For some reason, it’s sending all my code to the model. I had some luck using an .aiexclude file at the root of my project, but the token error always comes back. My app isn’t using any AI models yet, and when I do get the Prototyper working briefly, I ask it how many tokens my project is using and it says zero, because my code isn’t referencing any models, so I’m stumped. I’ll keep digging today, since my project is mostly halted because of this.
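
In case anyone else wants to try the same workaround: as far as I can tell, .aiexclude takes gitignore-style patterns, and the file I dropped at the project root looked roughly like this (the paths are just examples from my setup):

```
# .aiexclude at the project root -- gitignore-style patterns for files
# that should be kept out of the AI context
node_modules/
dist/
.next/
*.log
# large generated assets the model never needs to see
public/assets/
```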

2 Likes

Found this about long context:

Also, I’m now seeing “Gemini API” charges starting to tick up on my cloud billing.

1 Like

So does it mean that if we’re building full-scale apps we might be stuck? I’ve noticed that if you add a new branch in the Code view, the Prototyper starts working again.

1 Like

@Victoren_Studios - I sure hope not. When I got my Prototyper working briefly, I asked it the following:

How many tokens are currently being used on this project?

Gemini

That’s a good question to keep in mind, especially when working with generative AI models!

Based on the current state of the project as we’ve discussed, and after we removed Genkit and any AI-related functionality, the project is not currently making any calls to generative AI models like Gemini.

Therefore, the token usage for generative AI services for this project should be zero.

If you’ve reintroduced any AI features or Genkit flows since our last interaction, please let me know which specific flow or AI call you’re interested in, and we can then look at how to estimate its potential token usage!
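
For context on what the project’s explicit AI configuration looked like: while Genkit was still in the project, the model was pinned in code along these lines. This is only a rough sketch with an illustrative model and flow name, assuming a recent Genkit release, not my actual code:

```ts
import { genkit, z } from "genkit";
import { googleAI, gemini15Flash } from "@genkit-ai/googleai";

// The model the project explicitly asks for is declared here, in code --
// which is why it is surprising that the Prototyper calls a different one.
const ai = genkit({
  plugins: [googleAI()],
  model: gemini15Flash, // illustrative choice
});

// Minimal example flow; "summarizeFlow" is just a placeholder name.
export const summarizeFlow = ai.defineFlow(
  {
    name: "summarizeFlow",
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (input) => {
    const result = await ai.generate(`Summarize this in one sentence: ${input}`);
    return result.text;
  }
);
```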


As for the branch workaround, I’m going to give it a try and I’ll report back whether or not it works for me.

2 Likes

I’m also facing the same problem and have requested help through support. It seems the chat is counting the entire conversation as if it were one prompt, including the responses and code changes it produces itself in the Prototyper. That adds up with all the redundant messages, coding errors, requests for new fixes, and so on. Gemini uses the whole chat as context and counts all of it toward the token total. From my billing I can’t yet tell whether there will be charges for the overage, since the context limit they put on the chat was only 1048576 tokens.
I’ve requested a chat management feature that would allow deleting/editing messages.
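
That would also match how the chat SDK behaves in general: each turn is appended to the history, and the whole history is sent back as input on every new message, so the input token count can only grow over a long session. A rough sketch of how to observe that (API key, model, and messages are placeholders):

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!); // placeholder key
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" }); // illustrative model

async function showHistoryGrowth(): Promise<void> {
  const chat = model.startChat();

  // Every turn (user message plus model reply) is appended to the history...
  await chat.sendMessage("Build me a to-do app.");
  await chat.sendMessage("Now add authentication.");

  // ...and the accumulated history is what gets sent as input on the next
  // turn, so its token count keeps growing over a long session.
  const history = await chat.getHistory();
  const { totalTokens } = await model.countTokens({ contents: history });
  console.log(`Accumulated chat history is now ${totalTokens} tokens`);
}

showHistoryGrowth().catch(console.error);
```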

1 Like

Interesting, and a good observation to note. I tried switching to a new branch, but I’m still facing the same token error.

It was working for me just now, and now it isn’t. I guess we’ll just have to wait for the team to fix it or give us an answer.

1 Like

Hopefully, with more of us getting really deep into the Prototyper, additional support cases will help raise the severity and get a resolution very soon. For now I’m stuck, except for working in Code View and using the built-in Gemini model to get some things done.

2 Likes

Totally agree! I’m in the same boat for now, relying heavily on Code View and the built-in Gemini model to keep things moving. But Code View doesn’t have undo/restore like the Prototyper, right? That makes it a bit riskier to experiment freely. Hopefully, with enough traction, we’ll see improvements soon.

2 Likes

I’d say you can back up the project, or use branching if you’re connected to an external repo like GitHub, so you can revert if needed.

2 Likes

My ticket with Google Support got raised to Engineering Support. Not sure how long it will take them but I’m hoping very soon.

2 Likes

This is something we are looking into. Thanks for your patience as we dig into this further.

Just to help me track from a UX point of view, can you please share a screenshot of what this error looks like inside the prototyper?

3 Likes

@Kirupa - Here is a screenshot:


I haven’t been able to get past it. When I got it to work briefly, I did ask it how many tokens my project was leveraging and got the response I shared earlier in this thread.

Did you solve yours? I’ve had the same error for two days and it’s still not resolved. I also reset my workspace, but that didn’t fix it.

1 Like

The same thing happens to me.

2 Likes

Haven’t yet. Been fighting this for 4 days now. :slightly_frowning_face:

1 Like

It’s quite frustrating that a solution hasn’t been provided yet, and there also doesn’t appear to be any option to download all the associated code. If there were a way to export it, we could consider migrating to another coding assistant, such as GitHub Copilot in VS Code or similar tools.

I’m just waiting for the fix. In the meantime, I’m using another AI tool to continue developing the project, but still using the Firebase ecosystem to publish and host the app.