I’ve searched my code for any Gemini usage via streamGenerateContent (there is none), opened a support ticket with Google (I’m on the Blaze plan), reset my workspace a few times now, and used Gemini from the Code view to try to help me solve this. Here is where I’m at today (this has been going on for a few days now):
1. Since a workspace reset didn’t fix the Prototyper’s behavior of using an old model and endpoint, this might be a bug or a persistent caching issue within the Firebase Studio Prototyper environment itself. You may need to report this through Firebase support channels if the Prototyper continues to ignore your project’s actual AI model configuration for its UI-driven AI interactions.
For now, the evidence points to the Prototyper UI having its own distinct behavior for AI calls that is not currently aligning with your project’s explicit configuration, and those Prototyper-initiated calls are the ones hitting the token limit with the old model.
I haven’t done that yet, and I’m still waiting on support. For some reason, it’s sending all my code to the model. I had some luck using an .aiexclude file at the root of my project, but the token error always comes back. My app isn’t using any AI models yet, and when I do get the Prototyper working briefly, I ask it how many tokens my project is using and it says zero, because my code isn’t referencing any models, so I’m stumped. I’ll keep digging today, since my project is mostly halted because of this.
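For anyone trying the same workaround, this is roughly what I put in the .aiexclude at the project root. It’s only a sketch: it assumes the file accepts .gitignore-style patterns (as the Firebase Studio docs describe), and the paths are examples from my own layout, so adjust them for your project.

```
# .aiexclude at the project root: keep bulky, non-essential files out of
# the context that gets sent along with AI requests.
node_modules/
.next/
dist/
build/
package-lock.json
*.log
public/assets/
```

Even with this in place the token error eventually comes back for me, so it only buys time.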
So does that mean that if we’re building full-scale apps we might be stuck? I’ve noticed that if you add a new branch in the Code view, the Prototyper starts working again.
How many tokens are currently being used on this project?
Gemini
That’s a good question to keep in mind, especially when working with generative AI models!
Based on the current state of the project as we’ve discussed, and after we removed Genkit and any AI-related functionality, the project is not currently making any calls to generative AI models like Gemini.
Therefore, the token usage for generative AI services for this project should be zero.
If you’ve reintroduced any AI features or Genkit flows since our last interaction, please let me know which specific flow or AI call you’re interested in, and we can then look at how to estimate its potential token usage!
As for the branch trick, I’m going to give it a try and report back on whether it works for me.
I’m facing the same problem. I asked for help through support. It seems the chat is counting the entire context of the conversation as if it were a prompt, including the responses and code changes that it produces itself in the Prototyper. That adds up with all the redundant messages, coding errors, requests for new fixes, and so on. Gemini uses the whole chat as context and counts every word as tokens. From my billing I haven’t been able to tell whether there will be charges for the overage, since the context limit they set for the chat is only 1,048,576 tokens.
I’ve requested a chat-management feature that would make it possible to delete/edit messages.
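For what it’s worth, while waiting on support, one way to sanity-check how quickly an accumulated chat transcript approaches that 1,048,576-token window is to count tokens directly against the Gemini API. The sketch below uses the @google/generative-ai Node SDK and assumes you have a GEMINI_API_KEY set and access to a model like gemini-1.5-flash (both are assumptions on my part); it only estimates what a blob of text would cost, it can’t show what the Prototyper actually sends.

```ts
// countTokens.ts: rough estimate of how many tokens a chat transcript
// (or a pile of source files) would consume in a Gemini request.
// Assumes GEMINI_API_KEY is set; the model name is an example.
import { readFileSync } from "node:fs";
import { GoogleGenerativeAI } from "@google/generative-ai";

async function main() {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

  // Paste an exported chat transcript (or concatenated sources) into
  // chat-dump.txt to see how much of the context window it eats.
  const text = readFileSync("chat-dump.txt", "utf8");

  const { totalTokens } = await model.countTokens(text);
  console.log(`~${totalTokens} tokens of a 1,048,576-token context window`);
}

main().catch(console.error);
```

If a single pasted transcript already lands in the hundreds of thousands of tokens, that would line up with the chat history itself, not the app code, being what blows past the limit.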
Hopefully, with more people getting really deep into the Prototyper like we are, more support cases will help raise the severity and get a resolution soon. Right now I’m stuck, apart from using Code View and the built-in Gemini model to get some work done.
Totally agree! I’m in the same boat for now, relying heavily on Code View and the built-in Gemini model to keep things moving. But Code View doesn’t have undo/restore like the Prototyper, right? That makes it a bit riskier to experiment freely. Hopefully, with enough traction, we’ll see improvements soon.
It’s quite frustrating that a solution hasn’t been provided yet, and there also doesn’t appear to be any option to download all the associated code. If there were a way to export it, we could consider migrating to another coding assistant, such as GitHub Copilot in VS Code or similar tools.
I’m just waiting for the fix. In the meantime, I’m using another AI tool to continue developing the project, but still using the Firebase ecosystem to publish and host the app.