Some details, screenshots, or options shown on this page may change before the official release.
During embeddings generation, the plugin displays log messages to help you understand the current status of the process.
These messages can refer to manual generation, background processing, batch progress, partial indexing, errors, or cleanup operations.
Some messages are especially useful when using Ollama, because they can indicate whether the selected model, endpoint, or runtime profile is suitable for the current machine.
Some log messages include placeholders such as {total}, {processed} or {reason}. These values are replaced at runtime with real data.
A full reference is available at the end of this page.
## Common generation messages
| Message | Meaning |
|---|---|
| Starting foreground generation. | Manual embeddings generation has started in the foreground. |
| Estimated target: {total} posts to process. | The plugin has estimated how many posts will be processed during this generation. |
| Batch: +{processed}/{batch} posts{error_suffix}. | A batch has been completed. The message shows how many posts were successfully processed out of the posts attempted in that batch. If errors occurred, an additional suffix may be displayed. |
| Batch: +{processed}/{batch} posts - #{id} {title} | A batch has been completed. The message may also include the ID and title of the processed post. |
| Posts with error: {sample}{more_suffix} | Some posts in the batch could not be processed. The message shows one or more post IDs. If there are more errors, an additional suffix may be shown. |
| Completed: {total} posts processed. | The foreground generation process has completed. |
| Generation interrupted: {message} | The generation process was interrupted. The message provides the reason when available. |
| Error during batch generation. | A generic error occurred while processing a batch. |
| Network error during batch generation. | The batch request failed, usually due to a timeout, a network issue, or an invalid server response. When possible, the plugin verifies the server-side progress before marking the generation as failed. |
| Connection interrupted. Verifying server-side progress before marking as failed. | The browser request was interrupted, but the plugin is checking whether the server continued processing and saved the embedding. |
| Connection interrupted, progress detected. Resuming from latest state. | The plugin detected that one or more embeddings were generated despite the interrupted request, and continued the batch from the updated state. |
| Connection interrupted, but the batch was completed successfully. | The request was interrupted, but the plugin later confirmed that the target batch was completed and saved correctly. |
When an interrupted request is recovered, the saved embedding can be identical to one produced by a run that completed without warnings. In that case, the warning refers to the browser/server connection during the batch request, not necessarily to the quality or completeness of the saved embedding.
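The recovery flow described above can be summarized as a small decision rule. The sketch below is illustrative only (the plugin itself runs in PHP, and the function name and counters are assumptions): it compares the server-side processed count before and after the interrupted request and returns the matching log message.

```python
# Hypothetical sketch: decide how to react to an interrupted batch request
# by comparing server-side progress counters. Names are illustrative.

def classify_interrupted_batch(before: int, after: int, batch_target: int) -> str:
    """Return the log message that applies after an interrupted request."""
    if after >= before + batch_target:
        # The server finished the whole batch despite the dropped connection.
        return "Connection interrupted, but the batch was completed successfully."
    if after > before:
        # Some embeddings were saved; resume from the updated state.
        return "Connection interrupted, progress detected. Resuming from latest state."
    # No progress detected on the server; treat the batch as failed.
    return "Network error during batch generation."
```

The key design point is that the browser error alone is not treated as authoritative: the server-side counters decide whether the batch failed, partially succeeded, or completed.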
## Partial indexing warnings
In some cases, a post may be indexed only partially.
This can happen when the selected model is too heavy for the current machine, when the configured runtime limits are too strict, or when one of the chunks cannot be processed correctly.
| Message | Meaning |
|---|---|
| Warning: {n} posts were indexed only partially. The selected model may be too heavy for current machine/runtime limits. | One or more posts were not fully processed. The selected model may require more resources than the current machine or runtime limits allow. |
| Post {id}: processed {used}/{total} chunks (model: {model}). Cause: {reason} | The specified post was only partially processed. The message shows how many chunks were completed, the total number of chunks, the model used, and the cause. |
| Additional partial posts not shown: {count}. | There are additional partially indexed posts that are not listed individually in the log. |
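Conceptually, each partial-indexing entry combines a chunk count with the `coverage_complete` metadata flag mentioned in the troubleshooting section. The helper below is a hypothetical sketch (the plugin is PHP-based; the function and dictionary keys are assumptions used only to illustrate the relationship):

```python
# Illustrative sketch: a partial-indexing log entry pairs the
# "processed {used}/{total} chunks" message with a completeness flag.

def coverage_entry(post_id: int, used: int, total: int,
                   model: str, reason: str) -> dict:
    """Build a log entry for a partially or fully processed post."""
    return {
        "message": (f"Post {post_id}: processed {used}/{total} chunks "
                    f"(model: {model}). Cause: {reason}"),
        # A post is fully covered only when every chunk was embedded.
        "coverage_complete": used >= total,
    }
```
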
## Ollama-specific causes
When using Ollama, partial indexing messages may include one of the following causes.
| Cause | Meaning | Recommended action |
|---|---|---|
| chunk cap reached (max chunks: N) | The post generated more chunks than the configured maximum allowed for a single post. | Increase Max chunks per post, increase Chunk size, or use a runtime profile with higher limits. |
| runtime budget exceeded | The maximum processing time allowed for the post was exceeded. | Increase Runtime budget per post or use a lighter model/runtime profile. |
| timeout on chunk X/Y | A request for a specific chunk exceeded the configured timeout. | Increase Request timeout, use a lighter model, or select a more conservative runtime profile. |
| HTTP <status> on chunk X/Y | Ollama returned an unexpected HTTP response while processing a chunk. | Check the Ollama endpoint, the selected model, and whether the Ollama service is reachable. |
| invalid embedding payload on chunk X/Y | The response returned by Ollama did not contain a valid embedding for the specified chunk. | Check the selected model and try a different endpoint mode: Auto, Legacy, or Modern. |
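To see how these causes relate, the loop below sketches per-post chunk processing under a chunk cap and a runtime budget, producing cause strings in the same format as the table. It is a hedged illustration, not the plugin's implementation: the limits, the `embed()` callback, and the default values are assumptions.

```python
import time

# Hypothetical sketch of a per-post chunk loop. The embed() callable stands
# in for one embedding request to Ollama; defaults are illustrative only.

def embed_post_chunks(chunks, embed, max_chunks=50, budget_seconds=120.0):
    """Process chunks until done or a limit is hit.

    Returns (used, reason): reason is None when all chunks were processed,
    otherwise a cause string matching the ones documented above.
    """
    start = time.monotonic()
    used = 0
    total = len(chunks)
    for i, chunk in enumerate(chunks, start=1):
        if used >= max_chunks:
            return used, f"chunk cap reached (max chunks: {max_chunks})"
        if time.monotonic() - start > budget_seconds:
            return used, "runtime budget exceeded"
        try:
            embed(chunk)  # one embedding request per chunk
        except TimeoutError:
            return used, f"timeout on chunk {i}/{total}"
        used += 1
    return used, None  # fully indexed
```

In this model, a lighter model or a more generous runtime profile simply makes it less likely that any of the limit checks fire before the last chunk.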
## Background generation messages
| Message | Meaning |
|---|---|
| Background started: target {total} posts. | Background embeddings generation has started with an initial target of posts to process. |
| Background completed: {processed}/{total} posts processed. | The background process has completed. The message shows how many posts were processed out of the total target. |
| Background processing stopped. | The background process has been stopped. |
| Background start blocked: {message} | The background process could not start because a requirement or configuration check failed. |
| No posts to process in background. | There are no posts available for background processing. |
| Error starting background process. | A generic error occurred while starting the background process. |
## Cleanup messages
| Message | Meaning |
|---|---|
| Embeddings cleared. | All stored embeddings have been successfully cleared. |
| Error during embeddings cleanup. | An error occurred while clearing the embeddings table. |
## Quick troubleshooting
| Symptom | Possible cause | Recommended action |
|---|---|---|
| Some posts are indexed only partially | The selected model may be too heavy, or the runtime limits may be too strict. | Use a lighter runtime profile, reduce the workload, or adjust the Custom limits. |
| runtime budget exceeded | The post took longer than the configured runtime budget. | Increase Runtime budget per post or use a lighter model. |
| timeout on chunk X/Y | Ollama did not respond within the configured request timeout. | Increase Request timeout or use a more conservative runtime profile. |
| chunk cap reached | The post generated more chunks than allowed. | Increase Max chunks per post or increase Chunk size. |
| HTTP <status> on chunk X/Y | The Ollama API returned an unexpected HTTP response. | Check that Ollama is running, the endpoint is correct, and the selected model is available. |
| invalid embedding payload | The response from Ollama did not contain a valid embedding. | Check the selected model and try changing the Ollama endpoint mode. |
| Network error during batch generation. (HTTP 500 ...) but the embedding exists in the database | The web request was interrupted after or while the server was completing the embedding generation. This can happen on slower machines or when the Ollama runtime is under load. | Check whether the log shows a recovery message. If the embedding metadata shows coverage_complete: true, the embedding was saved completely. Reduce machine load, use a lighter profile, or check server/PHP error logs if the error persists. |
| Results look inconsistent after changing model | The new model or tag may generate embeddings with a different size. | Regenerate the embeddings. |
## Placeholders used in log messages
Some log messages include placeholders. These placeholders are replaced at runtime with real values generated by the plugin.
| Placeholder | Meaning |
|---|---|
| {total} | Total number of posts involved in the current generation process. |
| {processed} | Number of posts successfully processed. |
| {batch} | Number of posts included in the current batch. |
| {error_suffix} | Additional text shown when one or more errors occur during a batch. |
| {sample} | Sample list of post IDs that could not be processed. |
| {more_suffix} | Additional text shown when there are more errors than the ones displayed in the log. |
| {id} | ID of the post referenced in the log message. |
| {title} | Title of the post referenced in the log message. |
| {message} | Additional message explaining why a process was interrupted or blocked. |
| {n} | Number of posts referenced by the log message. |
| {count} | Number of additional items not shown individually in the log. |
| {used} | Number of chunks successfully processed for a post. |
| {model} | Embeddings model used during generation. |
| {reason} | Reason why a post was only partially processed or why the process could not continue. |
| N | Maximum value shown in a specific cause, for example the configured max chunks limit. |
| X/Y | Current chunk number and total chunks for the post being processed. |
| <status> | HTTP status code returned by Ollama during a failed request. |
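As a rough mental model of the substitution, the `{name}` placeholders behave like template fields filled in at runtime. The snippet below demonstrates this with Python's `str.format`; it is only an illustration of the pattern, since the plugin performs its own substitution in PHP.

```python
# Illustration of runtime placeholder substitution using a message
# template from this page. The render helper name is an assumption.

def render_log_message(template: str, **values) -> str:
    """Fill {name} placeholders with runtime values."""
    return template.format(**values)

msg = render_log_message(
    "Batch: +{processed}/{batch} posts{error_suffix}.",
    processed=8, batch=10, error_suffix=" (2 errors)",
)
# msg is now "Batch: +8/10 posts (2 errors)."
```

Note that suffix placeholders such as `{error_suffix}` and `{more_suffix}` may expand to an empty string when there is nothing extra to report.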