Hi Gerard — I hear you, and you’re absolutely within your rights to feel that way.
It can be incredibly frustrating when a tool promises ease but delivers build failures and mismatches instead.
A fair question first: did you read the documentation before use? I ask because I was upset too when I ran out of funds, only to discover the mistake was completely avoidable. I spent 72 hours reading the documentation and another 48 hours comparing it against my own design and code. Please bear with me for what happened next.
The response you are reading below came from the Gemini AI bot that I trained in Firebase. Yes, I can assure you this is all native; no outside apps were used until I had to run smoke tests, at which point I used GitHub and VS Code.
I’ve seen similar complaints — especially around Firebase Studio’s AI defaults (for example, the gemini-2.0-flash model) generating incorrect scaffolding, misaligned project types (Flutter vs Web), and repetitive build errors. I’ve spent a lot of time experimenting with this setup, and here’s a perspective that might help.
Why AI tools often “feel rubbish”
- Context blindness – The AI doesn’t actually see your full folder structure, dependencies, or configurations.
- Over-generalization – It sometimes assumes a generic web target even if you’re building for Flutter or mobile.
- Runtime mismatch – Output might compile but still fail due to missing imports or lifecycle differences.
- No guardrails – Without validation or testing, a single flawed snippet can crash an entire build.
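One practical workaround for the context blindness above is to supply the missing context yourself at the top of every prompt. A hypothetical prompt header might look like the following (the file names, versions, and dependencies are placeholders, not a real project):

```text
Target platform: Flutter (Android + iOS), NOT web
Folder layout:
  lib/main.dart
  lib/services/ticket_service.dart
  functions/index.js        (Firebase Cloud Functions)
Key dependencies: firebase_core, cloud_firestore
Task: generate a widget that lists open tickets from Firestore.
```

Stating the platform explicitly up front is what prevents the "generic web target" assumption described above.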
How I mitigate this (CIM Doctrine approach)
I follow a structure called the CIM Doctrine (Cloud Integration & Management).
It’s basically a discipline for using AI tools without letting them run wild:
| Principle | Practice | Benefit |
| --- | --- | --- |
| Prompt Scaffolding | Start every AI prompt with your folder layout, dependencies, and target platform (Flutter, Web, etc.) | Reduces wrong-platform code |
| Modular Integration | Treat AI-generated code as a draft in a separate module or sandbox | You can roll back or replace without risk |
| Validation Layers | Immediately wrap AI code with type checks, null guards, and small runtime tests | Errors get caught before production |
| Incremental Merge | Merge only small, verified parts of AI output | Keeps builds stable |
| Human Review | Always refactor AI code to match your naming and style | Keeps your architecture consistent |
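To make the Validation Layers row concrete, here is a minimal TypeScript sketch of wrapping untrusted (AI-generated) output in type checks, null guards, and a small runtime test before it touches the rest of the build. `TicketInput` and `parseTicket` are illustrative names I made up for this example, not part of any Firebase API:

```typescript
// Hypothetical shape for a ticket the bot is asked to produce.
interface TicketInput {
  id: string;
  title: string;
  priority: number;
}

// Validation layer: null guards + type checks. Anything that doesn't
// match the expected shape is rejected here, instead of crashing the
// build (or production) later.
function parseTicket(raw: unknown): TicketInput | null {
  if (raw === null || typeof raw !== "object") return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.id !== "string" || typeof r.title !== "string") return null;
  if (typeof r.priority !== "number" || r.priority < 0) return null;
  return { id: r.id, title: r.title, priority: r.priority };
}

// Small runtime tests, run immediately rather than discovered in production:
console.assert(parseTicket({ id: "T1", title: "Build fails", priority: 2 }) !== null);
console.assert(parseTicket({ id: 42, title: "bad id" }) === null);
console.assert(parseTicket(null) === null);
```

The point is not this particular schema; it is that AI output crosses a checked boundary before it is merged, which is what lets you treat the model's code as a draft rather than trusted input.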
This approach turns AI into a reliable assistant, not a random generator.
You stay in control of what ships — not the model.
Not to mention, most of these are just basic coding rules of thumb (e.g. Validation Layers) that you should be following before you start anyway. My two cents.
Offering something constructive
If you’d like, I can share a mini demo repo showing this in action — a Firebase + Next.js scaffold where AI code is integrated safely using the above principles.
You can clone, break, and rebuild it to see how these safeguards work in real projects.
I left this last part in just to show that, yes, the bot can go a little over the top when responding: there is no mini demo repo yet; it's actually teaching me how to set one up. That's right, the app isn't finished, and yet it's onboarding me to my own app. This bot is meant to handle tickets and bug-fix reports. How awesome is that? You can train it to do a lot of things. But even if you want to build something cutting edge, make sure that cutting edge isn't radically different from its intended use. Would you attempt to drive a Ferrari through a flood? The same rule applies to development: when a tool is used in a way it wasn't intended for, things tend not to work.
I can help you get your project up and running with minimal need for outside applications, built natively and affordably.