Merge pull request #960 from Munsio/avante-new-provider

avante-nvim: Migrate provider options
raf 2025-06-18 12:41:21 +03:00 committed by GitHub
commit f661c388ee

@@ -13,28 +13,44 @@ in {
       description = "The provider used in Aider mode or in the planning phase of Cursor Planning Mode.";
     };
-    vendors = mkOption {
+    providers = mkOption {
       type = nullOr attrs;
       default = null;
-      description = "Define Your Custom providers.";
+      description = "Define settings for builtin and custom providers.";
       example = literalMD ''
         ```nix
-        ollama = {
-          __inherited_from = "openai";
-          api_key_name = "";
-          endpoint = "http://127.0.0.1:11434/v1";
-          model = "qwen2.5-coder:7b";
-          max_tokens = 4096;
-          disable_tools = true;
-        };
-        ollama_ds = {
-          __inherited_from = "openai";
-          api_key_name = "";
-          endpoint = "http://127.0.0.1:11434/v1";
-          model = "deepseek-r1:7b";
-          max_tokens = 4096;
-          disable_tools = true;
-        };
+        openai = {
+          endpoint = "https://api.openai.com/v1";
+          model = "gpt-4o"; # your desired model (or use gpt-4o, etc.)
+          timeout = 30000; # Timeout in milliseconds, increase this for reasoning models
+          extra_request_body = {
+            temperature = 0;
+            max_completion_tokens = 8192; # Increase this to include reasoning tokens (for reasoning models)
+            reasoning_effort = "medium"; # low|medium|high, only used for reasoning models
+          };
+        };
+        ollama = {
+          endpoint = "http://127.0.0.1:11434";
+          timeout = 30000; # Timeout in milliseconds
+          extra_request_body = {
+            options = {
+              temperature = 0.75;
+              num_ctx = 20480;
+              keep_alive = "5m";
+            };
+          };
+        };
+        groq = {
+          __inherited_from = "openai";
+          api_key_name = "GROQ_API_KEY";
+          endpoint = "https://api.groq.com/openai/v1/";
+          model = "llama-3.3-70b-versatile";
+          disable_tools = true;
+          extra_request_body = {
+            temperature = 1;
+            max_tokens = 32768; # remember to increase this value, otherwise it will stop generating halfway
+          };
+        };
         ```
       '';
     };
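For downstream users, the rename is a one-line change plus moving request parameters into `extra_request_body`. A minimal before/after sketch, assuming the option lives at `vim.assistant.avante-nvim.setupOpts` in the user's configuration (the exact attribute path may differ in your setup):

```nix
{
  # Before this change: custom providers were declared under `vendors`,
  # typically inheriting behavior via `__inherited_from`:
  #
  #   vim.assistant.avante-nvim.setupOpts.vendors.ollama = {
  #     __inherited_from = "openai";
  #     endpoint = "http://127.0.0.1:11434/v1";
  #     model = "qwen2.5-coder:7b";
  #   };

  # After: builtin and custom providers both live under `providers`,
  # with request parameters grouped in `extra_request_body`:
  vim.assistant.avante-nvim.setupOpts.providers = {
    ollama = {
      endpoint = "http://127.0.0.1:11434";
      timeout = 30000; # milliseconds
      extra_request_body.options = {
        temperature = 0.75;
        num_ctx = 20480;
      };
    };
  };
}
```

Since the option is typed as `nullOr attrs`, the attribute set is passed through to avante.nvim as-is; consult the plugin's own provider documentation for the full set of accepted keys.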