Where Is Gemma 4? The Community Is Getting Impatient
Google hasn't said a word about Gemma 4, and the open-source AI community is growing restless. Prediction markets are open, Reddit is debating, and competitors aren't waiting.

How long are we supposed to wait? That question is now a recurring theme across r/LocalLLaMA, where a post titled "Gemma 4" pulled 475 upvotes and 113 comments in two days. Google has said nothing about the next generation of its open-weight model family, and the silence is starting to feel deliberate.
What's Going On
Gemma 3 landed in 2025 in four sizes — 1B, 4B, 12B, and 27B — plus the Gemma 3n variants built for on-device inference. The lineup was well received, particularly the 27B instruct model, which punched above its weight on benchmarks and became a staple of the local LLM scene.
But the community expected a follow-up by now. A separate Reddit thread titled "HOT TAKE: GEMMA 4 IS PROBABLY DEAD" captures the growing pessimism, and Manifold Markets is running an active prediction market on the release date, with bettors split on whether 2026 will bring a release at all.
The frustration has a specific shape: the 70B gap. Gemma 3 tops out at 27B parameters. Users want something between that and frontier-class models — a 70B-range option that runs on mid-tier hardware and can compete with the dense models from Meta and Mistral. The jump from 27B to the rumored 120B-class model Google is reportedly working on would skip that sweet spot entirely.
Why It Matters
Google appears to be channeling its resources elsewhere. The company's recent focus has been on Gemini 4 for its cloud products and on inference optimization through techniques like TurboQuant. Neither of those efforts produces a new open-weight model for the community to run locally.
Meanwhile, competitors are filling the vacuum. Qwen 3.5 now ships in 27B, 35B, 122B, and 397B variants — covering exactly the size range Gemma users are asking for. Alibaba's open-source push has made Qwen the default recommendation in many local AI communities. Meta's Llama and Mistral's models round out an increasingly crowded field where Google's open-weight presence is fading.
The risk for Google is straightforward: developer mindshare. Every month without a Gemma update is a month where fine-tuners, toolchain developers, and hobbyists build their workflows around something else. Switching costs in this ecosystem are low, but habits are sticky.
What's Next
Until Google breaks its silence, the community is left reading tea leaves. The optimistic case: Gemma 4 is coming, and Google is holding it for a proper launch with competitive benchmarks. The pessimistic case: Gemma has been deprioritized in favor of proprietary Gemini development, and open-weight releases will slow to a trickle. Either way, the bets are placed, and the odds say nobody really knows.