# nff

nff is a high-performance nftables configuration parser and formatter written in Rust. The goal is to receive possibly jumbled-up nftables rule files, and output ✨ pretty ✨, human-readable output in return. The main emphasis is on syntax-aware formatting with comprehensive grammar support. A future goal is to allow editors to hook into nff in order to format your rulesets directly from your editor, or to use it as a diagnostics source.
## Features

nff is in its early stages of development. While most of the syntax is supported, I cannot guarantee that everything is supported just yet.
### Core Functionality

- **Syntax-aware formatting** - Deep understanding of the nftables grammar with semantic preservation
- **Multi-family support** - Handles `inet`, `ip`, `ip6`, `arp`, `bridge`, and `netdev` table families
- **Flexible indentation** - Configurable tabs/spaces with custom depth
- **CIDR notation** - Proper handling of network addresses (`192.168.1.0/24`)
- **Chain properties** - Hooks, priorities (including negative), policies, device bindings
### Advanced Features

- **CST** - Lossless concrete syntax tree preserving all tokens
- **Debug mode** - Comprehensive inspection of lexer tokens, AST, and CST
- **Validation** - Syntax checking with precise error locations
- **Optimization** - Configurable empty-line reduction and whitespace control
### Diagnostics & Analysis

- **Comprehensive diagnostics** - Syntax, semantic, style, and best-practice analysis
- **Modular analysis** - Run specific diagnostic modules (`lexical`, `syntax`, `style`, `semantic`)
- **LSP-compatible output** - JSON format for editor integration
- **Human-readable reports** - Detailed error messages with context and location information
- **Configurable severity** - Control which diagnostic categories to enable/disable
## Usage

### Formatting

```bash
# Format a specific file (in place)
nff format /etc/nftables.conf

# Format all .nft files in current directory (in place)
nff format

# Custom indentation (4 spaces)
nff format config.nft --indent spaces --spaces 4

# Optimize formatting (reduce empty lines)
nff format config.nft --optimize

# Output to stdout instead of modifying files
nff format config.nft --stdout

# Syntax validation only
nff format config.nft --check

# Debug output for development (or debugging)
nff format config.nft --debug
```
### Linting and Diagnostics

```bash
# Run comprehensive diagnostics on a file
nff lint /etc/nftables.conf

# Lint all .nft files in current directory
nff lint

# JSON output for editor integration
nff lint config.nft --json

# Run specific diagnostic modules
nff lint config.nft --modules syntax,style

# Available modules: lexical, syntax, style, semantic
nff lint config.nft --modules semantic

# Configure diagnostic settings (note: flags are enabled by default)
nff lint config.nft --style-warnings=false --best-practices=false

# Debug output with diagnostics
nff lint config.nft --debug
```
### Parsing and CST Inspection

```bash
# Parse and display CST structure for debugging
nff parse /etc/nftables.conf

# Show tree structure with indentation
nff parse config.nft --tree

# Show detailed node information
nff parse config.nft --verbose

# Combined tree and verbose output
nff parse config.nft --tree --verbose

# Debug output with tokens and CST validation
nff parse config.nft --debug
```
## Architecture

### Processing Pipeline

nff implements a multi-stage pipeline:

```mermaid
graph TD
    Input --> Lexer
    Lexer --> Tokens
    Lexer --> Parser
    Tokens --> Parser
    Parser --> CST
    Parser --> AST
    AST --> Formatter
    Formatter --> Output
    CST --> Formatter

    Input --> Diagnostics[Diagnostic System]
    Diagnostics --> LexAnalyzer[Lexical Analyzer]
    Diagnostics --> SyntaxAnalyzer[Syntax Analyzer]
    Diagnostics --> StyleAnalyzer[Style Analyzer]
    Diagnostics --> SemanticAnalyzer[Semantic Analyzer]
    LexAnalyzer --> DiagOutput[JSON/Human Output]
    SyntaxAnalyzer --> DiagOutput
    StyleAnalyzer --> DiagOutput
    SemanticAnalyzer --> DiagOutput
```
## Installation

The recommended way to install nff is via Nix.
## Editor Integration

### Neovim Setup

nff can be integrated into Neovim as a diagnostics source for nftables files. Here are several setup approaches:

#### Option 1: Using none-ls

```lua
local null_ls = require("null-ls")

null_ls.setup({
  sources = {
    -- nftables diagnostics
    null_ls.builtins.diagnostics.nff.with({
      command = "nff",
      args = { "lint", "$FILENAME", "--json" },
      format = "json",
      check_exit_code = false,
      filetypes = { "nftables" },
    }),
    -- nftables formatting
    null_ls.builtins.formatting.nff.with({
      command = "nff",
      args = { "format", "$FILENAME", "--stdout" },
      filetypes = { "nftables" },
    }),
  },
})
```
#### Option 2: Using nvim-lint (recommended)

```lua
-- ~/.config/nvim/lua/config/lint.lua
require('lint').linters.nff = {
  cmd = 'nff',
  stdin = false,
  args = { 'lint', '%s', '--json' },
  stream = 'stdout',
  ignore_exitcode = true,
  parser = function(output)
    local diagnostics = {}
    local ok, decoded = pcall(vim.fn.json_decode, output)
    if not ok or not decoded.diagnostics then
      return diagnostics
    end
    for _, diagnostic in ipairs(decoded.diagnostics) do
      table.insert(diagnostics, {
        lnum = diagnostic.range.start.line,
        col = diagnostic.range.start.character,
        severity = diagnostic.severity == "Error" and vim.diagnostic.severity.ERROR
          or vim.diagnostic.severity.WARN,
        message = diagnostic.message,
        source = "nff",
        code = diagnostic.code,
      })
    end
    return diagnostics
  end,
}

-- Set up linting for nftables files
vim.api.nvim_create_autocmd({ "BufEnter", "BufWritePost" }, {
  pattern = "*.nft",
  callback = function()
    require("lint").try_lint("nff")
  end,
})
```
#### Option 3: Custom Lua Function

For a simple custom solution:

```lua
-- ~/.config/nvim/lua/nff.lua
local M = {}

function M.lint_nftables()
  local filename = vim.fn.expand('%:p')
  if vim.bo.filetype ~= 'nftables' then
    return
  end

  local cmd = { 'nff', 'lint', filename, '--json' }
  vim.fn.jobstart(cmd, {
    stdout_buffered = true,
    on_stdout = function(_, data)
      if data then
        local output = table.concat(data, '\n')
        local ok, result = pcall(vim.fn.json_decode, output)
        if ok and result.diagnostics then
          local diagnostics = {}
          for _, diag in ipairs(result.diagnostics) do
            table.insert(diagnostics, {
              lnum = diag.range.start.line,
              col = diag.range.start.character,
              severity = diag.severity == "Error" and vim.diagnostic.severity.ERROR
                or vim.diagnostic.severity.WARN,
              message = diag.message,
              source = "nff",
            })
          end
          vim.diagnostic.set(vim.api.nvim_create_namespace('nff'), 0, diagnostics)
        end
      end
    end,
  })
end

-- Auto-run on save
vim.api.nvim_create_autocmd("BufWritePost", {
  pattern = "*.nft",
  callback = M.lint_nftables,
})

return M
```
## Diagnostic Categories

nff provides comprehensive analysis across multiple categories:

### Syntax Errors

- Parse errors with precise location information
- Missing tokens (semicolons, braces, etc.)
- Unexpected tokens
- Unterminated strings
- Invalid numbers
### Semantic Validation

- Unknown table families (`inet`, `ip`, `ip6`, etc.)
- Invalid chain types and hooks
- Incorrect priority values
- Missing chain policies
- Duplicate table/chain names
- Invalid CIDR notation
- Invalid port ranges
### Style Warnings
- Missing shebang line
- Inconsistent indentation (mixed tabs/spaces)
- Trailing whitespace
- Lines exceeding maximum length (configurable)
- Excessive empty lines
- Preferred syntax alternatives
### Best Practices
- Chains without explicit policies
- Rules without actions
- Overly permissive rules
- Duplicate or conflicting rules
- Unused variables or sets
- Deprecated syntax usage
- Missing documentation
- Security risks
### Performance Hints
- Inefficient rule ordering
- Large sets without timeouts
- Missing counters where beneficial
## JSON Output Format

When using `--json`, nff outputs LSP-compatible diagnostics:
```json
{
  "diagnostics": [
    {
      "range": {
        "start": { "line": 5, "character": 10 },
        "end": { "line": 5, "character": 20 }
      },
      "severity": "Error",
      "code": "NFT001",
      "source": "nff",
      "message": "Expected ';' after policy",
      "related_information": [],
      "code_actions": [],
      "tags": []
    }
  ],
  "file_path": "config.nft",
  "source_text": "..."
}
```
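Because this shape mirrors LSP diagnostics, it is straightforward to consume from scripts as well as editors. Here is a minimal Python sketch that flattens the sample payload above into plain tuples (the field names follow the format shown; positions are assumed to be zero-based, as in LSP):

```python
import json

# Sample payload in the shape shown above (not real nff output).
payload = """{
  "diagnostics": [
    {
      "range": {
        "start": { "line": 5, "character": 10 },
        "end": { "line": 5, "character": 20 }
      },
      "severity": "Error",
      "code": "NFT001",
      "source": "nff",
      "message": "Expected ';' after policy"
    }
  ],
  "file_path": "config.nft"
}"""

def to_tuples(raw: str):
    """Flatten nff-style diagnostics into (line, col, severity, code, message)."""
    doc = json.loads(raw)
    return [
        (
            d["range"]["start"]["line"],
            d["range"]["start"]["character"],
            d["severity"],
            d["code"],
            d["message"],
        )
        for d in doc.get("diagnostics", [])
    ]

print(to_tuples(payload))
# → [(5, 10, 'Error', 'NFT001', "Expected ';' after policy")]
```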
## Diagnostic Codes

nff uses structured diagnostic codes for categorization:

- **NFT001-NFT099**: Syntax errors
- **NFT101-NFT199**: Semantic errors
- **NFT201-NFT299**: Style warnings
- **NFT301-NFT399**: Best practice recommendations
- **NFT401-NFT499**: Performance hints
- **NFT501-NFT599**: Formatting issues
- **NFT601-NFT699**: nftables-specific validations
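These ranges partition cleanly, so tooling can bucket diagnostics mechanically. An illustrative helper (not part of nff itself, just a sketch of how a consumer might use the ranges):

```python
def categorize(code: str) -> str:
    """Map an nff diagnostic code such as "NFT205" to its category,
    using the numeric ranges documented above."""
    n = int(code.removeprefix("NFT"))  # "NFT205" -> 205
    ranges = [
        (1, 99, "syntax error"),
        (101, 199, "semantic error"),
        (201, 299, "style warning"),
        (301, 399, "best practice"),
        (401, 499, "performance hint"),
        (501, 599, "formatting issue"),
        (601, 699, "nftables-specific validation"),
    ]
    for lo, hi, name in ranges:
        if lo <= n <= hi:
            return name
    return "unknown"

print(categorize("NFT301"))
# → best practice
```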
## Development

### Testing

```bash
# Run test suite
cargo test

# Run with verbose output
cargo test -- --nocapture

# Test specific module
cargo test lexer
```

### Code Quality

```bash
# Check compilation
cargo check

# Format code
cargo fmt

# Lint code
cargo clippy

# Check for unused dependencies
cargo machete
```
## Supported nftables Features

### Table Families

- `inet` - Dual-stack IPv4/IPv6 (most common)
- `ip` - IPv4 only
- `ip6` - IPv6 only
- `arp` - ARP protocol
- `bridge` - Bridge/Layer 2
- `netdev` - Network device (ingress/egress)
### Chain Types & Hooks

- **filter**: `input`, `forward`, `output`
- **nat**: `prerouting`, `input`, `output`, `postrouting`
- **route**: `output`
- **security**: `input`, `forward`, `output`
### Expression Types

- Protocol matching: `ip protocol tcp`, `tcp dport 80`
- Interface matching: `iifname "eth0"`, `oifname "wlan0"`
- Address matching: `ip saddr 192.168.1.0/24`, `ip6 daddr ::1`
- Connection tracking: `ct state established,related`
- Port specifications: `tcp dport { 22, 80, 443 }`
- Rate limiting: `limit rate 10/minute burst 5 packets`
- Sets and maps: Named sets with timeout support
### Actions & Statements

- Verdicts: `accept`, `drop`, `reject`, `return`
- NAT: `snat to 192.168.1.1`, `dnat to 192.168.1.100:80`
- Marking: `mark set 0x1`, `ct mark set 0x1`
- Logging: `log prefix "dropped: "`
- Counters: `counter packets 0 bytes 0`
## Examples

### Basic Firewall

Input (minified):

```nft
table inet firewall{chain input{type filter hook input priority 0;policy drop;ct state established,related accept;iifname lo accept;tcp dport 22 accept}}
```

Output (formatted):

```nft
#!/usr/sbin/nft -f

table inet firewall {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname lo accept
        tcp dport 22 accept
    }
}
```
### NAT Configuration

```nft
#!/usr/sbin/nft -f

table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100; policy accept;
        iifname "eth0" tcp dport 80 dnat to 192.168.1.100:8080
    }

    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "eth0" masquerade
    }
}
```
### Rate Limiting

```nft
#!/usr/sbin/nft -f

table inet protection {
    chain input {
        type filter hook input priority 0; policy accept;
        tcp dport 22 limit rate 5/minute burst 10 packets accept
        tcp dport 22 drop
    }
}
```
## Diagnostics Examples

### Error Detection

Input file with issues:

```nft
table inet firewall {
    chain input {
        type filter hook input priority 100
        tcp dport 22 accept
    }
}
```
Human-readable output:

```
Found 2 issues in config.nft:

config.nft:3:37: error: Expected ';' after policy [NFT001]
  1: table inet firewall {
  2:     chain input {
→ 3:         type filter hook input priority 100
  4:         tcp dport 22 accept
  5:     }

config.nft:3:1: warning: Filter chain should have an explicit policy [NFT301]
  1: table inet firewall {
  2:     chain input {
→ 3:         type filter hook input priority 100
  4:         tcp dport 22 accept
  5:     }
```
JSON output:

```json
{
  "diagnostics": [
    {
      "range": {
        "start": { "line": 2, "character": 37 },
        "end": { "line": 2, "character": 37 }
      },
      "severity": "Error",
      "code": "NFT001",
      "source": "nff",
      "message": "Expected ';' after policy"
    },
    {
      "range": {
        "start": { "line": 2, "character": 0 },
        "end": { "line": 2, "character": 37 }
      },
      "severity": "Warning",
      "code": "NFT301",
      "source": "nff",
      "message": "Filter chain should have an explicit policy"
    }
  ],
  "file_path": "config.nft",
  "source_text": "..."
}
```
### Style Analysis

Input with style issues:

```nft
table inet test{chain input{type filter hook input priority 0;policy drop;tcp dport 22 accept;}}
```

Style warnings:

```
Found 3 issues in style.nft:

style.nft:1:1: warning: Consider adding a shebang line [NFT201]
style.nft:1:121: warning: Line too long (98 > 80 characters) [NFT205]
style.nft:1:16: warning: Missing space after '{' [NFT503]
```
## Contributing

### Code Style

- Follow `cargo fmt` formatting
- Use `cargo clippy` recommendations
- Maintain comprehensive documentation
- Add tests for new features
### Testing Strategy (WIP)

- **Unit tests**: Individual component validation
- **Integration tests**: End-to-end formatting verification
- **Regression tests**: Known issue prevention
- **Performance tests**: Benchmark critical paths
### Building

Build with `cargo build` as usual. If you are using Nix, you will also want to ensure that the Nix package builds as expected.
## Technical Notes

### CST Implementation

The Concrete Syntax Tree (CST) preserves all source information, including:

- Whitespace and indentation
- Comments and their positions
- Token-level error recovery
- Lossless round-trip formatting
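To illustrate the lossless property, here is a toy sketch (not nff's actual lexer, which is written in Rust): whitespace runs and comments are kept as tokens in their own right, so concatenating the token stream reproduces the source byte-for-byte.

```python
import re

# Toy lossless tokenizer: whitespace runs and comments are tokens too,
# so no source information is discarded during lexing.
TOKEN = re.compile(r"\s+|#[^\n]*|[{};]|[^\s{};#]+")

def lex_lossless(src: str) -> list[str]:
    return TOKEN.findall(src)

src = 'table inet fw { # main table\n\tchain input { }\n}\n'
tokens = lex_lossless(src)

assert "".join(tokens) == src      # lossless round-trip
assert "# main table" in tokens    # comments survive as tokens
```

A real CST layers structure on top of such a token stream, but the invariant is the same: formatting is a transformation of tokens, never a lossy re-serialization.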
### Parser Architecture

Below are the design goals of nff's parser architecture.
- **Error recovery**: Continues parsing after syntax errors
- **Incremental parsing**: Supports partial file processing
- **Memory efficiency**: Streaming token processing where possible
- **Grammar completeness**: Covers the full nftables syntax specification
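The error-recovery goal can be sketched as panic-mode recovery (an illustrative Python toy, not nff's parser): on an unexpected token, record a diagnostic and skip ahead to a synchronization point such as `;`, so the statements that follow are still parsed.

```python
def parse_statements(tokens: list[str]):
    """Toy parsing loop with panic-mode recovery at ';' sync points."""
    stmts, errors = [], []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == ";":
            i += 1
        elif tok.isidentifier():
            stmts.append(tok)
            i += 1
        else:
            errors.append(f"unexpected token {tok!r}")
            # Panic mode: skip to the next ';' and resume parsing there.
            while i < len(tokens) and tokens[i] != ";":
                i += 1
    return stmts, errors

print(parse_statements(["accept", ";", "@@", ";", "drop", ";"]))
# → (['accept', 'drop'], ["unexpected token '@@'"])
```

The payoff is that one bad token produces one diagnostic instead of abandoning the rest of the file.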
### Diagnostic Architecture

The diagnostic system uses a modular architecture with specialized analyzers:

- **Modular design**: Each analyzer focuses on specific concerns (lexical, syntax, style, semantic)
- **Configurable analysis**: Enable/disable specific diagnostic categories
- **LSP compatibility**: JSON output follows Language Server Protocol standards
- **Performance optimized**: Concurrent analysis when possible
- **Extensible**: Easy to add new diagnostic rules and categories
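The modular design can be pictured roughly like this (an illustrative Python sketch, not nff's actual Rust API): each analyzer implements one concern, and a driver runs only the categories that are enabled.

```python
class SyntaxAnalyzer:
    name = "syntax"

    def analyze(self, source: str):
        # Trivial brace-balance check standing in for real parsing.
        if source.count("{") != source.count("}"):
            return [(self.name, "unbalanced braces")]
        return []

class StyleAnalyzer:
    name = "style"

    def analyze(self, source: str):
        return [
            (self.name, f"trailing whitespace on line {n + 1}")
            for n, line in enumerate(source.splitlines())
            if line != line.rstrip()
        ]

def run_diagnostics(source: str, enabled=("syntax", "style")):
    analyzers = [SyntaxAnalyzer(), StyleAnalyzer()]
    findings = []
    for a in analyzers:
        if a.name in enabled:  # categories can be toggled independently
            findings.extend(a.analyze(source))
    return findings

print(run_diagnostics("table inet t { \n}"))
# → [('style', 'trailing whitespace on line 1')]
```

Adding a new category is then just another analyzer type, which is the extensibility point the list above describes.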
## License

nff is licensed under the Mozilla Public License 2.0 (MPL-2.0). See the LICENSE file for more details on what the license entails.