Did the prototyper become less performant?

Since Gemini 3 Pro Preview was released, the prototyper has never been this clumsy. It fails to understand requests, fails to understand the existing app, keeps introducing errors, and it broke a perfectly working app.

I have seen this with both the embedded model and Gemini 3 Pro Preview (via my own API key).

Anyone else noticed this?

The prototyper is messed up, to be honest. It introduces errors that it then has to fix itself.

Now it gets stuck on a simple question, without even writing code.

Their last update messed things up big time.

How long is your chat history with the AI code assist?

You are right that the system seems to have become more unstable; that is likely due to its recent releases and updates.

You can always check the version by clicking Extensions, then clicking the Gemini CLI Companion; it will open the extension's details page in your workspace, including the installed CLI version.

You can also report issues to the repository here: google-gemini/gemini-cli: An open-source AI agent that brings the power of Gemini directly into your terminal.

That is also a good place to see known issues and the roadmap for the CLI companion.

But the long and short of it is: try to keep your chat history minimal. Don't attempt many large changes in one chat; create a new chat for each feature, and even create a blueprint for the AI to follow so it doesn't get lost if the chat does run long because of the amount of work on that feature. It happens; you just have to keep the AI under control.
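To make the blueprint idea concrete, here is a rough sketch of what one of mine might look like. The file name, sections, and task wording are just my own convention, not anything official; the point is to give the AI a small, fixed reference it can re-read instead of relying on a long chat history.

```markdown
# Blueprint: Feature — user profile page

## Scope (do NOT touch anything outside this)
- src/pages/profile.tsx
- src/components/AvatarUpload.tsx

## Constraints
- Keep the existing routing and auth logic as-is.
- No new dependencies without asking first.

## Tasks (one chat per task if it gets long)
1. Render current user data (name, email, avatar).
2. Add avatar upload with a 2 MB size limit.
3. Show a success/error toast after saving.

## Done when
- All three tasks pass a manual check in the preview.
```

Then each new chat starts with something like "Follow Blueprint: user profile page, task 2 only," which keeps the context short and the AI on a leash.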

I tell my friends the AI is like working with the world's smartest dumbest person. It knows what to do, but you have to watch it carefully and hold its hand to get what you need done. If you let it do whatever it wants without guidance, it would be like handing a child a lit candle in a barn full of wool and then leaving that child unsupervised… I am willing to bet that kid will burn down the barn. You have to watch it, guide it, and be very clear and detailed with your prompts.

Here are a few good links to help with prompts as well.

Thank you, I didn’t know about this extension. Does it really improve the existing chat assistant? I installed it and see no difference. I also see no difference between using Gemini 3 Pro Preview (API) vs. the default assistant.

That extension is what you should already have had installed and been using. The built-in model is indeed Gemini 2.5 Pro; Gemini 3.0 costs extra and requires your own API key.

I was showing you where to find it so you can report issues and keep up with its development, since it is in preview and is by no means 100% there, or even close to it. You can also submit bug reports and so on to help it improve.

I would stay away from using Gemini 3 for the time being. It's not the performance; it's that the cost can grow VERY fast, whereas the built-in model is part of using Firebase, and the trade-offs are negligible for most app development.

That said, I also shared the prompting advice pages to help you. I don't know what you are asking the code assist to do or how instructive you are being. You have to be very descriptive and very detailed about what you want. Knowing your code and its basic ins and outs will help you explain to the bot what you are trying to do.

I know that for me personally it does go off the rails every now and then, but not at the level you are describing.

Can you share some screenshots of the prompts you used when it started making mistakes?