Using OpenWebUI (Gemma 3 1B model) will not search web

I have tried and tried: I go to the administrative settings and enable web search to get current answers to questions, but I get an error and it defaults back to answers from the model's training. I have put in a Google API key and a search engine ID, and set my search provider to Google (I think I did all this correctly). I don't know what I'm lacking or what I'm doing wrong. I have watched every YouTube video out there. Help!!


There seems to be a failure mode specific to OpenWebUI + Gemma 3 1B.


You are probably not missing one tiny checkbox. The more likely explanation is that OpenWebUI web search is failing in the middle of its pipeline, and Gemma 3 1B makes that much more likely. OpenWebUI’s current docs split web search into multiple layers: provider setup, query generation or tool use, model capability flags, content fetching, and final answer generation. They also say small local models often struggle with Native mode tool use, while Google’s own Gemma docs recommend Gemma 3 12B or 27B for function-calling performance, not 1B. (Open WebUI)

What is probably happening in your case

When OpenWebUI cannot complete the search path cleanly, it often falls back to answering from the model’s built-in knowledge. That matches what you described. There are multiple similar OpenWebUI reports with symptoms like “No search query generated”, query-generation failures, or Google PSE errors that end with stale or generic answers instead of live results. (GitHub)

For Gemma 3 1B specifically, there are two different failure modes depending on how you configured OpenWebUI:

1. If you are using Default mode

OpenWebUI says the TASK_MODEL is used for tasks such as web search query generation. By default, that often ends up being the same model you are chatting with unless you explicitly set something stronger. So if you are chatting with Gemma 3 1B, OpenWebUI may also be asking that 1B model to generate the search query that drives the whole web search flow. That is a weak point. (Open WebUI)

2. If you are using Native mode

OpenWebUI injects search_web and fetch_url tools into the model, but its own docs warn that small local models under 30B often struggle with Native mode. The stated failure modes include malformed tool output, poor tool choices, and unreliable multi-step behavior. OpenWebUI explicitly says Default mode is usually more reliable for small local models. (Open WebUI)

That is why your setup can look correct in the admin panel and still behave like “search does not work.”

Why Google PSE may also be part of the problem

Your Google setup may still be wrong even if it looks right.

Google’s official docs say the Custom Search JSON API requires:

  • a configured Programmable Search Engine
  • an API key
  • the engine ID cx, which is found in the engine’s Basic section. (Google for Developers)

Google also says:

  • the API is closed to new customers
  • existing customers have until January 1, 2027 to transition
  • there are 100 free queries/day before paid usage. (Google for Developers)

So Google PSE is still usable for eligible existing customers, but in 2026 it is not the most forgiving or future-proof provider.

There are also direct OpenWebUI bug reports showing Google PSE returning 400 INVALID_ARGUMENT from the Google endpoint even when users thought they had configured it correctly. That is a real, documented failure pattern. (GitHub)

A common Google-specific mistake

The API key may be restricted for the wrong kind of client.

Google's API key docs say restrictions can be set by application (for example, HTTP referrers for browser keys, or IP addresses for server keys) and by API.

OpenWebUI calls Google PSE from the backend, not from your browser tab. So a key restricted like a browser key, for example one locked to HTTP referrers, can fail when OpenWebUI uses it server-side. That is one of the first Google-side things I would suspect. (Google Cloud Documentation)

Another common mistake: the engine itself is not set up the way you expect

Google’s help docs say:

  • you can still use a Programmable Search Engine configured to search the entire web
  • but no new creation is supported
  • and even then, results can differ from normal Google Search because PSE emphasizes your configured sites and may use only a subset of Google Web Search results. (Google Help)

So even a “working” Google PSE setup does not behave exactly like google.com.

Also, Google’s help says the “Search the entire web” setting is managed in the control panel under the engine’s settings, and once toggled off on an existing engine it cannot be toggled back on. (Google Help)

OpenWebUI setting traps that matter a lot

In current OpenWebUI builds, Native mode requires more than just “Web Search enabled.”

The docs say that in Native mode you need:

  1. global web search configured
  2. the model’s Web Search capability enabled
  3. Web Search enabled under the model’s Default Features or toggled on in the chat
  4. Native function calling enabled for that model. (Open WebUI)

If any of that is missing, the search_web and fetch_url tools are not injected, and the model answers from its own weights instead. (Open WebUI)

There is also a recent OpenWebUI issue reporting that web search behavior breaks depending on the interaction between Web Search and Builtin Tools. In that report, search worked only when Builtin Tools was enabled, and failed when Builtin Tools was off even if Web Search was on. (GitHub)

So yes, the current settings model is confusing enough that you can do a lot “right” and still not actually have working search.

One more hidden problem: PersistentConfig

OpenWebUI marks many of these settings as PersistentConfig, including:

  • TASK_MODEL
  • GOOGLE_PSE_API_KEY
  • GOOGLE_PSE_ENGINE_ID
  • web-search-related settings. (Open WebUI)

That means if you changed values in the UI before, the database-stored values can override what you later put in .env or Docker env vars. So restarting the container does not always mean your latest env values are the ones actually being used. (Open WebUI)
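As a sketch of one way around this: OpenWebUI documents an ENABLE_PERSISTENT_CONFIG variable, and setting it to False makes environment values win over database-stored ones on every start. The port mapping, volume name, and image tag below are the usual defaults from the install docs; adjust them for your setup.

```shell
# Sketch: start OpenWebUI so that env vars override PersistentConfig
# values stored in the database. Replace your_key / your_cx with real values.
docker run -d -p 3000:8080 \
  -e ENABLE_PERSISTENT_CONFIG=False \
  -e ENABLE_WEB_SEARCH=True \
  -e WEB_SEARCH_ENGINE=google_pse \
  -e GOOGLE_PSE_API_KEY=your_key \
  -e GOOGLE_PSE_ENGINE_ID=your_cx \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```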

Another reason it can look broken even when search technically runs

OpenWebUI’s troubleshooting docs say web search can fail after the provider returns results. Common reasons are:

  • proxy-related fetch failures unless Trust Proxy Environment is enabled
  • empty fetched content
  • too-small context window
  • poor loader choice. (Open WebUI)

The docs specifically recommend increasing context length to 16384+ for web content because pages often contain 4,000–8,000+ tokens. Gemma 3 1B only has a 32K context window, while larger Gemma 3 sizes have 128K. (Open WebUI)

So even if search partly works, a 1B model can still end up acting stale because it does not use the fetched material well.

What I think is most likely, in order

Most likely

Gemma 3 1B is the wrong model for this job.
If you are in Default mode, it is probably weak at query generation. If you are in Native mode, it is probably weak at tool use. Both are documented risk areas. (Open WebUI)

Second most likely

You enabled web search globally but not fully at the model/chat level.
That includes Native mode, Web Search capability, Default Features, chat toggle, and possibly Builtin Tools interactions. (Open WebUI)

Third most likely

Your Google PSE config is incomplete or rejected server-side.
That could be the wrong cx, a browser-restricted API key, the wrong engine type, or the documented Google PSE INVALID_ARGUMENT issue seen in OpenWebUI. (Google for Developers)

Fourth

Search returns something, but fetching/extraction/context makes the final answer useless.
That usually shows up as stale or generic responses even though “search” seems to happen. (Open WebUI)

What to check first

Do these in this order.

1. Stop testing with Gemma 3 1B as the search brain

For diagnosis, switch to:

  • Default mode first, not Native mode
  • and a stronger model for search-related tasks if possible. OpenWebUI and Google’s own Gemma docs both point away from 1B for tool-heavy workflows. (Open WebUI)

2. Verify Google PSE outside OpenWebUI

Test the Google API directly. Google’s REST docs say each request needs key, cx, and q. (Google for Developers)

curl "https://www.googleapis.com/customsearch/v1?key=YOUR_API_KEY&cx=YOUR_ENGINE_ID&q=latest+OpenWebUI"

Interpretation:

  • JSON with items → Google side basically works
  • 400/403 → Google side is wrong or restricted
  • no useful results → engine setup may be wrong or not configured how you think. (Google for Developers)
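To make that interpretation concrete, here is a small sketch that branches on the response shape. It assumes only curl and grep; YOUR_API_KEY and YOUR_ENGINE_ID are placeholders for your real values.

```shell
# Quick test of the Google side, independent of OpenWebUI.
resp=$(curl -s "https://www.googleapis.com/customsearch/v1?key=YOUR_API_KEY&cx=YOUR_ENGINE_ID&q=latest+OpenWebUI")

if echo "$resp" | grep -q '"error"'; then
  # Error responses carry an "error" object with a code and message,
  # e.g. 400 INVALID_ARGUMENT or 403 for key restrictions/quota.
  echo "Google rejected the request: $resp"
elif echo "$resp" | grep -q '"items"'; then
  # A healthy response carries an "items" array of search results.
  echo "Google side basically works."
else
  echo "Request succeeded but returned no items: check the engine setup."
fi
```

If this fails outside OpenWebUI, no amount of OpenWebUI configuration will fix it; sort out the Google side first.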

3. Confirm your cx

Google’s official help says the Search Engine ID is in the engine’s Basic section. (Google Help)

4. Confirm whether the engine is really set to search the web

Google’s help says PSE can search the whole web, but it behaves differently from Google Search and can still emphasize your sites or use only a subset of the corpus. (Google Help)

5. Check the API key restriction type

If the key is restricted, it likely needs to be valid for server/backend use, not only browser referrers. (Google Cloud Documentation)

6. Re-check the OpenWebUI model settings

For your model, verify:

  • Web Search capability
  • Web Search in Default Features
  • Function Calling mode
  • chat-level Web Search toggle
  • Builtin Tools behavior if you are in Native mode. (Open WebUI)

7. Check logs

OpenWebUI’s troubleshooting page says to check logs, and recent issues show the exact kinds of messages to look for:

  • No search query generated
  • NoneType errors during search generation
  • Google INVALID_ARGUMENT
  • timeout or empty-content fetch errors. (Open WebUI)
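If OpenWebUI runs in Docker, one quick way to scan for those exact messages is a grep over the container logs. The container name open-webui is an assumption; substitute whatever yours is called.

```shell
# Sketch: pull the known web-search failure signatures out of the logs.
docker logs open-webui 2>&1 | grep -iE \
  'no search query generated|NoneType|INVALID_ARGUMENT|timeout|empty'
```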

8. If search succeeds but answers stay stale

Check:

  • context length set to 16384+
  • proxy trust
  • loader engine. (Open WebUI)
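If the model runs under Ollama, one way to raise the context length is a custom Modelfile; this is a sketch, and the base model tag gemma3:1b and the new name gemma3-1b-16k are assumptions for illustration.

```shell
# Sketch: build a Gemma 3 1B variant with a 16K context window,
# matching the 16384+ recommendation for web content.
cat > Modelfile <<'EOF'
FROM gemma3:1b
PARAMETER num_ctx 16384
EOF
ollama create gemma3-1b-16k -f Modelfile
```

Then select the new model in OpenWebUI, or set the context length in the model's advanced parameters instead.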

The blunt conclusion

Your setup is probably failing because Gemma 3 1B is too weak for OpenWebUI’s search orchestration path, and Google PSE is a brittle provider choice with several configuration traps. The result is exactly what you described: an error appears, then the model falls back to old built-in knowledge instead of giving a current answer. (Open WebUI)

The fastest way out is usually:

  1. test Google directly with curl
  2. switch OpenWebUI to Default mode
  3. stop using Gemma 3 1B for query/tool orchestration
  4. then verify model capability and chat toggles. (Google for Developers)

OMG… thank you for this in-depth answer. The PC I am using is not capable of much more than that size of model. I am a YouTube-certified old man, wishing for younger days. I am envious of all these younger people. My finances also rule out (for the near future) expanding to a better setup. I will eventually. I am learning sooooo much.


I see. I'm an old guy too. 😀

When it comes to small models around 1B, besides Gemma 3, the Liquid LFM and Qwen 3/3.5 families are also excellent options. The Liquid models seem to have a particularly good reputation because they're fine-tuned specifically for edge devices.
https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct
https://huggingface.co/collections/LiquidAI/lfm25