This error is not “mysterious.” It is a strict format check.
Your app (Google AI Edge Gallery → Mobile Actions) is trying to initialize a LiteRT-LM text-generation engine. That engine must know which tensor in the model output is the logits tensor (the per-token vocabulary scores used to pick the next token). Your model bundle does not expose that output in the way LiteRT-LM expects, so engine creation aborts:

```
model_signatures.output_logits.empty()
Output logits not found
FAILED_PRECONDITION
```
What “logits not found” means in this stack
Background
For autoregressive LLM decoding, the runtime repeats a loop:
- Feed token ids (and KV-cache state).
- Run the model.
- Read logits output.
- Sample the next token.
- Repeat.
If step 3 is impossible (no logits output in the model signatures), decoding cannot begin, so the runtime fails immediately. That is exactly what your message indicates.
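For intuition only, here is a minimal, self-contained sketch of that loop in Python (assumptions: NumPy is available, the vocabulary is a toy one, and `run_model` is a stub standing in for the real TFLite/LiteRT-LM call). Step 3 is exactly the point where the runtime needs to know which output tensor holds the logits.

```python
import numpy as np

VOCAB_SIZE = 32   # toy vocabulary size, for illustration only
EOS_ID = 0        # toy end-of-sequence token id

def run_model(token_ids):
    """Stub standing in for the model call; returns fake per-token vocabulary scores."""
    rng = np.random.default_rng(seed=len(token_ids))
    return rng.normal(size=VOCAB_SIZE)          # this is the "logits" output

def decode(prompt_ids, max_new_tokens=16):
    tokens = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = run_model(tokens)               # steps 1-3: feed tokens, run, read logits
        next_id = int(np.argmax(logits))         # step 4: sample the next token (greedy here)
        tokens.append(next_id)                   # step 5: repeat with the new token
        if next_id == EOS_ID:
            break
    return tokens

print(decode([5, 7, 11]))
```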
The most likely causes in your specific FunctionGemma Mobile Actions setup
Cause A (most common): you loaded the wrong file, or the wrong artifact type
The Mobile Actions guide is explicit: after fine-tuning, you convert and quantize to .litertlm, then in AI Edge Gallery Mobile Actions you choose Load Model and pick that .litertlm. (Google AI for Developers)
The Gallery wiki also frames “local model import” around .litertlm files. (GitHub)
So if you selected something that is not the final .litertlm (examples: a .tflite, .task, metadata file, tokenizer file, partial download), LiteRT-LM may load “something,” but it will not find the required output_logits signature.
Cause B: your .litertlm exists, but the metadata does not map “output logits” correctly
A .litertlm is not just a model. It is TFLite model + metadata that tells LiteRT-LM which input is tokens and which output is logits.
A concrete example from a Hugging Face model card shows the builder TOML explicitly sets:
```toml
[model.start_tokens]
model_input_name = "input_ids"

[model.output_logits]
model_output_name = "Identity"
```
and notes that .litertlm “bundles the TFLite model with metadata required by the LiteRT-LM runtime.” (Hugging Face)
If your bundle’s model_output_name is missing or wrong (the logits tensor might not be named "Identity" in your exported .tflite), the runtime sees “no output logits” and throws your exact error.
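If you built the `.tflite` yourself, you can check what the runtime actually sees by listing the model's output tensors and signatures with the TensorFlow Lite Python interpreter. This is only a diagnostic sketch, not part of the official pipeline; it assumes TensorFlow is installed and uses a hypothetical path `decoder.tflite` for your exported decode model.

```python
import tensorflow as tf

# Hypothetical path to your exported decode model; adjust to your artifact.
interpreter = tf.lite.Interpreter(model_path="decoder.tflite")

# Plain output tensors (this is where a name like "Identity" would show up).
for detail in interpreter.get_output_details():
    print("output tensor:", detail["name"], detail["shape"])

# Named signatures, if the model was exported with them.
print("signatures:", interpreter.get_signature_list())
```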
Cause C: you used the MediaPipe “.task” conversion pipeline, then tried to run it in the Mobile Actions LiteRT-LM path
Google has two related packaging targets:
- MediaPipe LLM Inference API uses a Task Bundle (`.task`) that packages model + tokenizer + metadata. (Google AI for Developers)
- LiteRT-LM / Gallery Mobile Actions is commonly driven by `.litertlm` bundles. (Google AI for Developers)
The LLM Inference docs explicitly discuss using pre-converted .litertlm models, and also building .task/.litertlm bundles from .tflite + tokenizer. (Google AI for Developers)
If you built a .task and pointed the Mobile Actions “Load Model” flow at it (or you built a .litertlm meant for a different runtime path), signature expectations can mismatch and you land in “logits output not found.”
What I would do next on your device (high-signal, low-effort)
1) Verify you are selecting the final .litertlm
- In Mobile Actions, “Load Model” should be picking the `.litertlm` produced by the official notebook flow. (Google AI for Developers)
- If you are importing locally, Gallery’s wiki shows pushing exactly `model.litertlm` into `/sdcard/Download/` and selecting that file. (GitHub)
If the filename you selected does not end in .litertlm, assume this is the whole problem.
2) Sanity-check with a known-good .litertlm model
The LLM Inference docs point to “LiteRT Community” pre-converted models on Hugging Face and state some are available as .litertlm and ready to use. (Google AI for Developers)
If a known-good .litertlm runs, your app and phone are fine. Your exported bundle is wrong.
3) If you built the .litertlm yourself, rebuild with a correct output_logits mapping
Use the same pattern as the TOML example:
- ensure `[model.output_logits]` exists
- set `model_output_name` to the real output tensor name in your `.tflite` (often, but not always, `"Identity"`) (Hugging Face)
If the tensor name is wrong, LiteRT-LM will not “guess” it.
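A quick way to confirm the mapping before rebuilding is to check that the name you plan to put in `model_output_name` actually exists among the `.tflite` outputs. A minimal sketch, again assuming TensorFlow and the hypothetical path `decoder.tflite`:

```python
import tensorflow as tf

CANDIDATE = "Identity"   # the value you plan to use for model_output_name

interpreter = tf.lite.Interpreter(model_path="decoder.tflite")
output_names = [d["name"] for d in interpreter.get_output_details()]

if CANDIDATE in output_names:
    # Print the stanza to paste into the builder TOML.
    print(f'[model.output_logits]\nmodel_output_name = "{CANDIDATE}"')
else:
    print(f"{CANDIDATE!r} is not an output tensor; choose one of: {output_names}")
```

If the check fails, pick the real logits output from the printed names, put it in the TOML, and rebuild the bundle.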
Practical pitfalls that produce exactly your error
- Choosing the wrong file from Drive (metadata sidecar or intermediate artifact instead of the final `.litertlm`). (Google AI for Developers)
- Using `.tflite` directly without bundling metadata. LiteRT-LM expects `.litertlm` as a bundle format. (GitHub)
- Wrong logits tensor name in the bundle metadata (`output_logits` points to a tensor name that does not exist in the exported `.tflite`). (Hugging Face)
- Mixing `.task` and `.litertlm` pipelines across different app features. `.task` is a MediaPipe Task Bundle concept. (Google AI for Developers)
Good references (worth bookmarking)
- FunctionGemma Mobile Actions end-to-end guide (fine-tune → convert to `.litertlm` → load in Gallery) (Google AI for Developers)
- Gallery wiki: importing local `.litertlm` files (ADB push path and flow) (GitHub)
- LiteRT-LM overview repo (what LiteRT-LM is in the stack) (GitHub)
- LLM Inference “Models + conversion” overview (LiteRT Community models, `.tflite` → `.task`/`.litertlm`) (Google AI for Developers)
- Gemma conversion guide: Hugging Face safetensors → MediaPipe Task (`.task` bundles model + tokenizer + metadata) (Google AI for Developers)
- Example `.litertlm` builder TOML showing an explicit `[model.output_logits]` mapping (Hugging Face)
If you want a precise diagnosis in one shot
Tell me the exact filename and extension you selected in Mobile Actions (for example `something.litertlm` vs `something.task` vs `something.tflite`) and whether it came from the official Mobile Actions notebook or a custom conversion. With that, the root cause can usually be identified immediately.
Summary
- Your runtime cannot find the model’s logits output signature, so it refuses to create the generation engine.
- Most likely: you loaded the wrong artifact, or your `.litertlm` metadata does not map `output_logits` to the real TFLite output tensor.
- Fix path: load the correct final `.litertlm`, or rebuild the bundle so `[model.output_logits]` points to the correct output tensor name.
