I have tested Gemini 2.5 Pro Experimental in my coding tasks, and I’ve been so impressed with it that I decided to give an IDE built around Gemini a try.
Upon checking the AI configuration, I couldn’t find any way to tell whether the built-in AI model is Gemini 2.5 Pro Experimental (or based on it), or a previous Gemini model.
Gemini 2.5 Pro Experimental is in a league of its own in my experience, and I’m not considering using a previous model.
If I understand correctly, I can use the Gemini API with my own key and manually specify a model by name, but I’m not sure whether I need to do that or whether the built-in model is already 2.5 Pro.
@Dragora Thanks for reaching out, and we’re thrilled to hear you’ve been impressed with Gemini 2.5 Pro Experimental! We understand why you’d want to ensure you’re using that specific model within the IDE.
To clarify the AI configuration:
Built-in Model: The built-in model option utilizes a variant from the Gemini 2.x family. At any given time, depending on various factors, this could be Gemini 2.0 Flash, Gemini 2.0 Pro, or potentially even a version based on 2.5 Pro. However, it’s not guaranteed to be the Gemini 2.5 Pro Experimental model.
Using Your Own API Key (Recommended for 2.5 Pro Experimental): You are correct! To guarantee you are using Gemini 2.5 Pro Experimental (or any specific Gemini model available via the API), you should use your own Gemini API key.
You can easily set this up in the IDE’s Settings by entering your API key and specifying the desired model name (e.g., gemini-2.5-pro-exp-03-25). Check out https://x.com/cpsloal/status/1905151825847910416.
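If you’d like to confirm that your key and model name work before pointing the IDE at them, a quick standalone call to the Gemini API is an easy check. The sketch below uses the google-generativeai Python SDK outside the IDE (it is not how the IDE itself connects); the GEMINI_API_KEY environment variable and the model name are example values you would swap for your own:

```python
import os
import google.generativeai as genai

# Assumes your key is exported as GEMINI_API_KEY; replace with however you store it.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Same model name you would enter in the IDE's Settings (example value).
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# A trivial prompt just to confirm the key and model name are accepted.
response = model.generate_content("Reply with the single word: ok")
print(response.text)
```

If this prints a response without errors, the same key and model name should also work in the IDE’s BYOK settings.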
Please be aware that this “Bring Your Own Key” (BYOK) functionality, especially for accessing experimental models like 2.5 Pro, is itself experimental within the IDE. Your mileage may vary, and you might encounter different behaviors or occasional issues compared to the built-in option.
We highly value feedback on this experimental feature. If you choose to use your own key for Gemini 2.5 Pro Experimental and run into any bugs or unexpected behavior, please don’t hesitate to let us know. We are actively working on improving this integration and will address reported issues as quickly as possible.
I hope this clarifies the options available! Let us know if you have any further questions.
So to clarify: within a single chat, could the model potentially flip back and forth, or would it stick with one model throughout?
That’s correct; users are able to switch between models within a single chat.
Please note that this feature is very early and experimental, so your mileage may vary. Please let us know if you encounter issues or unexpected behaviors, and we will attempt to address them.
Correct. Since this is BYOK (bring your own key), the rate limits from AI Studio / the Gemini API apply. If you have a paid tier of the Gemini API, then the higher limits of a paid account apply as well.
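For reference, when those limits are hit the API returns a quota error (HTTP 429) rather than silently falling back to another model. A rough sketch of handling that in your own code with the google-generativeai SDK (the ResourceExhausted exception class and retry delays here are assumptions about how the SDK surfaces quota errors, not IDE behavior):

```python
import os
import time
import google.generativeai as genai
from google.api_core import exceptions as gexc

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # same BYOK key
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")  # example model name

def generate_with_retry(prompt: str, attempts: int = 3) -> str:
    """Retry on quota errors, which this SDK raises as ResourceExhausted (429)."""
    for attempt in range(attempts):
        try:
            return model.generate_content(prompt).text
        except gexc.ResourceExhausted:
            # Free-tier AI Studio limits hit; back off briefly before retrying.
            time.sleep(10 * (attempt + 1))
    raise RuntimeError("Still rate-limited after retries; consider a paid-tier key.")
```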