squint

A fast SQL linter for dbt and Jinja SQL files, written in Rust. Modelled on sqlfluff and sqlfmt, with first-class Jinja support and a focus on speed.

Features

  • 36 rules covering capitalisation, layout, aliasing, references, structure, and more
  • Auto-fix — rewrite files in place with --fix, or check for drift with --check
  • Jinja-aware — tokenises {{ }} and {% %} blocks without stripping them
  • -- noqa suppression — per-line and per-rule (-- noqa: CP01,LT05)
  • -- fmt: off blocks — opt out of formatting for hand-crafted sections
  • Severity levels — configure rules as error (exit 1) or warning (exit 0)
  • JSON output — for CI dashboards, editor plugins, and scripts
  • LSP server — real-time diagnostics in VS Code, Neovim, Helix, and any LSP client
  • Fast — ~100 µs to lint a 240-line file; 64 files in ~1.4 ms on a modern laptop

Design philosophy

The linter enforces lowercase SQL by default — keywords, function names, type names, boolean/null literals, and unquoted identifiers. This matches the style used by sqlfmt and is the default for most dbt projects. All capitalisation rules are auto-fixable.

The linter has no AST — it operates on a flat token list produced by a Logos-based DFA tokeniser. This keeps the architecture simple and makes it easy to add new rules.
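To illustrate the idea (this is a hypothetical sketch in Python, not squint's actual Rust code), a keyword-casing rule over a flat token list needs nothing more than a linear scan:

```python
# Hypothetical illustration of a rule over a flat token list -- not squint's
# actual code; the real lexer is a Logos-based DFA in Rust.
from dataclasses import dataclass

KEYWORDS = {"select", "from", "where", "group", "by", "order", "limit"}

@dataclass
class Token:
    kind: str   # "keyword" or "name" in this toy model
    value: str
    line: int

def tokenize(sql: str) -> list[Token]:
    """Toy tokenizer: every whitespace-separated word is one token."""
    return [
        Token("keyword" if word.lower() in KEYWORDS else "name", word, lineno)
        for lineno, line in enumerate(sql.splitlines(), start=1)
        for word in line.split()
    ]

def check_keyword_case(tokens: list[Token]) -> list[str]:
    """A CP01-style pass: flag any keyword that is not lowercase."""
    return [
        f"line {t.line}: expected lowercase keyword, got {t.value!r}"
        for t in tokens
        if t.kind == "keyword" and t.value != t.value.lower()
    ]
```

Because each rule is just a pass over the same token list, adding a rule never requires touching a grammar.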

Installation

pip (Python projects)

If your project already uses pip or a requirements.txt, this is the easiest option:

pip install pysquint

This downloads a pre-built binary wheel for your platform — no Rust toolchain needed. Works with pip ≥ 21, Python ≥ 3.8.

For dbt projects using uv:

uv add --dev pysquint

From crates.io

cargo install squint-linter

This compiles from source. Requires Rust stable.

From source

git clone https://github.com/IlllIIIlllIlIlIIllllIIIlI/squint
cd squint
cargo install --path .

Pre-built binaries

Pre-built binaries for Linux (x86_64, aarch64), macOS (x86_64, Apple Silicon), and Windows (x86_64) are attached to each GitHub Release.

Download the archive for your platform, extract it, and place the binary somewhere on your PATH.

LSP server (optional)

squint-lsp is included in the pre-built release binaries. Download it from the GitHub Releases page alongside the main squint binary.

To build from source instead:

cargo install squint-linter --features lsp --bin squint-lsp

See Editor Integration for setup instructions.

Verify installation

squint --version

pre-commit

No manual installation needed — pre-commit builds the binary from source on first use. See pre-commit Integration.

Quick Start

Lint a directory

squint models/

Output:

models/staging/stg_orders.sql:3:1: [error] [CP01] Expected lowercase keyword, got 'SELECT'
models/staging/stg_orders.sql:3:8: [error] [CP01] Expected lowercase keyword, got 'FROM'
models/marts/fct_orders.sql:12:1: [error] [LT05] Line too long (143 > 120 characters)

Found 3 errors in 2 files.

Auto-fix violations

squint --fix models/

Rewrites files in place. All fixable rules are applied (see the fixable column in the rules reference). After fixing, any remaining unfixable violations are reported.

CI gate

squint --check models/

Exits 1 if any file would be changed by --fix. Does not write any files. Use this in CI to enforce that committed SQL is already formatted:

# .github/workflows/ci.yml
- name: Check SQL formatting
  run: squint --check models/

Run specific rules

squint --rules CP01,LT05 models/

Lint from stdin

cat models/my_model.sql | squint --stdin-filename models/my_model.sql

Typical dbt project setup

  1. Add a squint.toml at the project root (see Configuration)
  2. Add a pre-commit hook (see pre-commit Integration)
  3. Add squint --check models/ to your CI pipeline

That’s it. The linter walks directories recursively, respects .gitignore, and processes files in parallel.

CLI Reference

squint [OPTIONS] [FILES]...

Files and directories can be mixed as positional arguments. Directories are walked recursively for *.sql files, respecting .gitignore.

Options

--fix
    Rewrite files in place, applying all auto-fixable violations. Remaining violations are reported after fixing. Cannot be combined with --check.

--check
    Exit 1 if any file would be changed by --fix, without writing. Standard CI gate. Cannot be combined with --fix.

--rules <IDs>
    Comma-separated rule IDs to run, e.g. CP01,LT05. Runs all rules by default.

--max-line-length <N>
    Override the LT05 line length limit. Equivalent to [rules.LT05] max_line_length = N in config.

-q, --quiet
    Show only the violation count per file, not individual violations.

--format <fmt>
    Output format: text (default) or json.

--output <file>
    Write violations to a file in addition to stdout.

--exclude <pattern>
    Glob pattern to exclude (repeatable). Merged with the exclude list in config.

--stdin-filename <NAME>
    Read SQL from stdin and report violations under this filename. Cannot be combined with positional file arguments.

--version
    Print version and exit.

-h, --help
    Print help and exit.

Exit codes

Code  Meaning
0     No error-severity violations (warnings are allowed)
1     One or more error-severity violations; or --check detected drift
2     I/O error (file not found, permission denied, etc.)

Output format

Text (default)

path/to/file.sql:LINE:COL: [SEVERITY] [RULE_ID] message

Example:

models/stg_orders.sql:3:1: [error] [CP01] Expected lowercase keyword, got 'SELECT'
models/stg_orders.sql:5:45: [warning] [LT05] Line too long (143 > 120 characters)

Found 1 error and 1 warning in 1 file.

JSON (--format json)

See JSON Output.

Environment

The linter reads squint.toml by walking up from the current working directory. CLI flags always override config file values.

Configuration

squint looks for config in your project root (or any ancestor directory), checking two locations in order at each level:

  1. squint.toml — dedicated config file
  2. pyproject.toml — under the [tool.squint] section

The search walks up from the current working directory and stops at the first match, checking squint.toml before pyproject.toml at each level. CLI flags always override config file values.

squint.toml

# Glob patterns to exclude from linting (relative to the config file).
# Merged with any --exclude flags passed on the command line.
exclude = [
    "target/**",
    "**/node_modules/**",
    "vendor/*.sql",
]

# ── Layout ────────────────────────────────────────────────────────────────────

[rules.LT05]
max_line_length = 120        # default: 120
ignore_comment_lines = false # parsed but not yet enforced

# ── Aliasing ──────────────────────────────────────────────────────────────────

[rules.AL06]
min_alias_length = 1         # default: 1  (0 = no minimum)
max_alias_length = 0         # default: 0  (0 = no maximum)

# ── Convention ────────────────────────────────────────────────────────────────

[rules.CV03]
# "forbid" (default) — trailing comma before FROM is not allowed
# "require"          — trailing comma before FROM is required
select_clause_trailing_comma = "forbid"

[rules.CV04]
# false (default) — require COUNT(*)
# true            — require COUNT(1)
prefer_count_1 = false

# ── Ambiguous ─────────────────────────────────────────────────────────────────

[rules.AM06]
# "explicit"   (default) — all GROUP BY / ORDER BY refs must be column names
# "implicit"             — all refs must be positional numbers
# "consistent"           — either style allowed, but not mixed within one clause
group_by_and_order_by_style = "explicit"

# ── Severity overrides ────────────────────────────────────────────────────────
# All rules default to "error". Override per rule to "warning" (reported but
# exit 0) or back to "error".

[rules.severity]
LT05 = "warning"   # long lines are warnings, not hard errors
CP02 = "warning"   # identifier casing is a warning during rollout

pyproject.toml

For Python projects, all the same options are available under [tool.squint]:

[tool.squint]
exclude = ["target/**"]

[tool.squint.rules.LT05]
max_line_length = 88

[tool.squint.rules.severity]
LT05 = "warning"

squint only stops at a pyproject.toml if it contains a [tool.squint] section — a pyproject.toml without one is skipped and the walk continues upward.

Severity levels

See Severity Levels for a full explanation of how error vs warning affects exit codes, output, and JSON.

Disabling rules

There is no per-rule enabled = false option. To skip a rule entirely, use --rules on the command line to run only the rules you want:

squint --rules CP01,LT03,LT05 models/

Or suppress individual lines with -- noqa — see Suppression.

Suppression

Two mechanisms let you opt out of linting for specific lines or blocks.

-- noqa — per-line suppression

Add -- noqa to the end of a line to suppress all rule violations on that line:

SELECT A, B, C  -- noqa
FROM MY_TABLE   -- noqa

To suppress only specific rules, list them after a colon:

SELECT A  -- noqa: CP01
FROM T    -- noqa: CP01, LT05

Rule IDs are case-insensitive: -- noqa: cp01 and -- noqa: CP01 are equivalent.

-- noqa also suppresses auto-fixes — a --fix run will not modify a line with a bare -- noqa annotation.

Combining multiple rules

SELECT A  -- noqa: CP01, LT05, AL02

When to use it

Use -- noqa for intentional one-off exceptions: a line that must be uppercase for a specific reason, a known long line that can’t be shortened, etc. For recurring patterns, prefer a config override or -- fmt: off blocks.


-- fmt: off / -- fmt: on — block suppression

Suppress all rules (violations and fixes) for a block of SQL:

-- fmt: off
SELECT A,      B,
       C       -- hand-crafted alignment
FROM T
-- fmt: on

select d from u  -- linting resumes here

Inline -- fmt: off

When -- fmt: off appears after SQL on the same line, it suppresses that line only:

select a from t  -- fmt: off
select b from u  -- this line is linted normally

To end of file

A standalone -- fmt: off with no matching -- fmt: on suppresses from that line to the end of the file:

select a from t

-- fmt: off
-- Everything below this line is suppressed
SELECT B FROM U

Interaction between mechanisms

-- noqa and -- fmt: off are independent. A line inside a -- fmt: off block that also has -- noqa is doubly suppressed — both mechanisms apply, but the result is the same: no violations reported, no fixes applied.

Severity Levels

Every rule has a severity: error or warning. All rules default to error.

Effect on exit code

Severity  Reported  Affects exit code
error     yes       yes (exit 1 if any errors exist)
warning   yes       no (exit 0 even if warnings exist)

This lets teams adopt rules incrementally: configure new rules as warning during a rollout period, then promote them to error once the codebase is clean.

Configuring severity

Override per rule in squint.toml:

[rules.severity]
LT05 = "warning"   # long lines are warnings
CP02 = "warning"   # identifier casing is a warning during rollout
CP01 = "error"     # keyword casing is always an error

Only rules you want to change need to appear here. All others remain at their default (error).

Text output

Severity is shown in brackets before the rule ID:

models/stg_orders.sql:3:1: [error] [CP01] Expected lowercase keyword, got 'SELECT'
models/stg_orders.sql:5:45: [warning] [LT05] Line too long (143 > 120 characters)

The summary line distinguishes errors from warnings:

Found 1 error and 1 warning in 1 file.

JSON output

The severity field is included per violation:

{
  "line": 3,
  "col": 1,
  "rule_id": "CP01",
  "message": "Expected lowercase keyword, got 'SELECT'",
  "severity": "error"
}

LSP diagnostics

The LSP server maps error to DiagnosticSeverity::ERROR and warning to DiagnosticSeverity::WARNING, so your editor shows them with the appropriate indicators (red vs yellow underlines in most themes).

pre-commit Integration

pre-commit is the standard hook manager for dbt projects. The linter ships a .pre-commit-hooks.yaml so you can add it directly.

Setup

Add to your .pre-commit-config.yaml:

repos:
  - repo: https://github.com/IlllIIIlllIlIlIIllllIIIlI/squint
    rev: v1.0.1   # pin to a release tag
    hooks:
      - id: squint

Run pre-commit install once to register the hooks. On the first run, pre-commit builds the binary from source (requires Rust on the machine). The binary is cached for subsequent runs.

Available hooks

Hook ID     What it does
squint      Lint staged SQL files and exit 1 on any violations
squint-fix  Auto-fix staged SQL files in place

Lint only

- id: squint

Fails the commit if any violations are found. The developer must fix violations (or suppress them with -- noqa) before committing.

Auto-fix on commit

- id: squint-fix

Rewrites staged files in place before the commit. If files are modified, pre-commit will abort the commit and ask you to git add the fixed files. This is the most frictionless setup for teams.

Config file

The linter reads squint.toml from the working directory at hook execution time. Project-level config (severity overrides, line length, etc.) is applied automatically.

Pinning a version

Always pin to a tag (rev: v1.0.1) rather than a branch name. pre-commit caches the binary per rev, so floating tags like main defeat caching and force a rebuild on every run.

Editor Integration (LSP)

The linter ships a Language Server Protocol (LSP) server binary that provides real-time diagnostics in any LSP-capable editor.

Building the LSP server

The LSP server is feature-gated. Build it with:

cargo install squint-linter --features lsp --bin squint-lsp

Or from source:

cargo build --release --features lsp --bin squint-lsp
# Binary: target/release/squint-lsp

Protocol

  • Transport: stdio
  • Document sync: full (re-lints the full document on every change)
  • Capabilities: textDocument/publishDiagnostics
  • Config: reads squint.toml or [tool.squint] in pyproject.toml from the working directory at startup

Neovim (nvim-lspconfig)

local lspconfig = require('lspconfig')
local configs = require('lspconfig.configs')

if not configs.squint then
  configs.squint = {
    default_config = {
      cmd = { vim.fn.expand('~/.cargo/bin/squint-lsp') },
      filetypes = { 'sql' },
      root_dir = lspconfig.util.root_pattern('squint.toml', 'pyproject.toml', '.git'),
      settings = {},
    },
  }
end

lspconfig.squint.setup {}

Helix (languages.toml)

[[language]]
name = "sql"
language-servers = ["squint"]

[language-server.squint]
command = "squint-lsp"

VS Code

A minimal extension is included in the repository under editors/vscode/. It is not yet published to the Marketplace, so install it manually:

cd editors/vscode
npm install
npm run compile
npm run package        # produces squint-0.1.0.vsix
code --install-extension squint-0.1.0.vsix

The extension activates automatically for sql files. For jinja-sql support, also install the Better Jinja grammar extension.

Configuration

Setting            Default  Description
squint.serverPath  ""       Path to the squint-lsp binary. Leave empty to use PATH or ~/.cargo/bin.

Severity in diagnostics

The LSP server maps rule severity to LSP diagnostic severity:

  • error is mapped to DiagnosticSeverity::ERROR (red underline in most themes)
  • warning is mapped to DiagnosticSeverity::WARNING (yellow underline)

Per-rule severity overrides in squint.toml are respected.

JSON Output

Use --format json to get machine-readable output. Useful for CI dashboards, editor plugins, SARIF converters, and scripts.

squint --format json models/

Shape

A JSON array — one object per file processed (including files with zero violations):

[
  {
    "path": "models/staging/stg_orders.sql",
    "violations": [
      {
        "line": 3,
        "col": 1,
        "rule_id": "CP01",
        "message": "Expected lowercase keyword, got 'SELECT'",
        "severity": "error"
      },
      {
        "line": 5,
        "col": 45,
        "rule_id": "LT05",
        "message": "Line too long (143 > 120 characters)",
        "severity": "warning"
      }
    ],
    "fixed": false
  },
  {
    "path": "models/staging/stg_customers.sql",
    "violations": [],
    "fixed": false
  }
]

Fields

Field                  Type     Description
path                   string   File path as given on the command line or walked from a directory
violations             array    List of violations; empty array if the file is clean
violations[].line      integer  1-based line number
violations[].col       integer  1-based column number
violations[].rule_id   string   Rule ID, e.g. "CP01"
violations[].message   string   Human-readable violation message
violations[].severity  string   "error" or "warning"
fixed                  boolean  true if --fix was passed and the file was modified

Combined with --fix

squint --fix --format json models/

"fixed": true is set for files that were rewritten. Violations in the output are the remaining violations after fixing (unfixable rules that still triggered).

Piping to jq

# Count total violations
squint --format json models/ | jq '[.[].violations | length] | add'

# Show only errors
squint --format json models/ | jq '
  [.[] | select(.violations | length > 0) | {
    path,
    errors: [.violations[] | select(.severity == "error")]
  } | select(.errors | length > 0)]
'

# Files with violations
squint --format json models/ | jq '.[] | select(.violations | length > 0) | .path'
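The same filtering works from a script; a minimal Python helper for the documented shape might look like this (the payload below is illustrative data, not live output):

```python
import json

def error_violations(report: list[dict]) -> list[str]:
    """Flatten a squint JSON report into 'path:line:col: [RULE] message'
    strings, keeping only error-severity violations."""
    return [
        f"{f['path']}:{v['line']}:{v['col']}: [{v['rule_id']}] {v['message']}"
        for f in report
        for v in f["violations"]
        if v["severity"] == "error"
    ]

# Example payload in the documented shape:
payload = """[
  {"path": "models/stg_orders.sql",
   "violations": [
     {"line": 3, "col": 1, "rule_id": "CP01",
      "message": "Expected lowercase keyword, got 'SELECT'", "severity": "error"},
     {"line": 5, "col": 45, "rule_id": "LT05",
      "message": "Line too long (143 > 120 characters)", "severity": "warning"}],
   "fixed": false}
]"""
print(error_violations(json.loads(payload)))
```

Exiting non-zero only when this list is non-empty reproduces the linter's own exit-code policy in a custom CI step.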

All Rules

36 rules across 8 categories. Run --fix to apply the auto-fixable rules automatically.

Capitalisation

CP01  Keywords must be lowercase
CP02  Unquoted identifiers must be lowercase
CP03  Function names must be lowercase
CP04  Boolean/null literals must be lowercase
CP05  Data type names must be lowercase

Layout

LT01  No space before comma; no consecutive mid-line spaces; no space between function and (
LT02  Indentation must use spaces, multiple of 4
LT03  No trailing whitespace on lines
LT05  Lines must not exceed max_line_length (default 120)
LT06  No space between function name and (
LT07  CTE closing ) must be on its own line
LT08  Blank line required after each CTE closing )
LT09  Clauses in standard order: SELECT → FROM → WHERE → GROUP BY → HAVING → ORDER BY → LIMIT
LT10  DISTINCT/ALL must be on the same line as SELECT
LT11  Set operators (UNION, INTERSECT, EXCEPT) must be on their own line
LT12  File must end with exactly one trailing newline

Convention

CV03  Trailing comma policy in SELECT clauses
CV04  Consistent row-counting syntax: COUNT(*) vs COUNT(1)
CV05  NULL comparisons must use IS NULL / IS NOT NULL
CV10  Identifiers must use a consistent quoting style within a file

Aliasing

AL02  Column aliases must use explicit AS keyword
AL03  Expressions in SELECT must have an alias
AL04  Table aliases must be unique within a query
AL05  Table aliases that are defined but never referenced
AL06  Table alias length must be within configured bounds
AL08  Column aliases in SELECT must be unique (case-insensitive)
AL09  A column must not be aliased to itself (col AS col)

Ambiguous

AM01  SELECT DISTINCT with GROUP BY is redundant
AM02  UNION must be followed by ALL or DISTINCT
AM05  Implicit comma joins are forbidden; use explicit JOIN
AM06  GROUP BY / ORDER BY must use a consistent reference style

References

RF01  Qualified column references must use a known table alias
RF02  Wildcard (SELECT *) is not allowed; list columns explicitly

Structure

ST03  CTEs that are defined but never referenced
ST08  COUNT(DISTINCT *) is not valid SQL

Jinja

JJ01  Jinja tags must have single-space padding inside delimiters

Capitalisation Rules (CP)

All capitalisation rules enforce lowercase. This matches the style used by sqlfmt and is the default for most dbt projects. All five rules are auto-fixable with --fix.


CP01

Keywords must be lowercase.

Covers: SELECT, FROM, WHERE, GROUP BY, ORDER BY, HAVING, LIMIT, JOIN, UNION, AND, OR, NOT, IN, IS, LIKE, BETWEEN, CASE, WHEN, THEN, ELSE, END, AS, DISTINCT, ALL, ON, WITH, and all other SQL keywords.

-- bad
SELECT id, name
FROM orders
WHERE status = 'active'

-- good
select id, name
from orders
where status = 'active'

CP02

Unquoted identifiers must be lowercase.

Applies to table names, column names, CTE names, and aliases that are written without quotes. Quoted identifiers (e.g. "MyTable") are exempt.

-- bad
select OrderId, CustomerName
from Orders o

-- good
select orderid, customername
from orders o

CP03

Function names must be lowercase.

Applies to any name token immediately followed by (.

-- bad
SELECT COUNT(id), COALESCE(name, 'unknown'), UPPER(email)
FROM customers

-- good
select count(id), coalesce(name, 'unknown'), upper(email)
from customers

CP04

Boolean and null literals must be lowercase.

Covers TRUE, FALSE, and NULL.

-- bad
select id
from orders
where is_active = TRUE and deleted_at IS NULL

-- good
select id
from orders
where is_active = true and deleted_at is null

CP05

Data type names must be lowercase.

Covers: int, integer, bigint, smallint, float, double, decimal, numeric, varchar, char, text, boolean, bool, date, time, timestamp, timestamptz, json, jsonb, uuid, and others.

-- bad (in a CAST expression)
select cast(id as VARCHAR(36)), cast(amount as DECIMAL(10, 2))
from orders

-- good
select cast(id as varchar(36)), cast(amount as decimal(10, 2))
from orders

Layout Rules (LT)


LT01

No space before comma; no consecutive mid-line spaces; no space between function name and (.

Auto-fixable. Three related spacing issues in one rule:

-- bad: space before comma
select a , b , c from t

-- good
select a, b, c from t
-- bad: consecutive spaces mid-line
select  a  from  t

-- good
select a from t
-- bad: space between function and (
select count (*), coalesce (name, 'x') from t

-- good
select count(*), coalesce(name, 'x') from t

LT01 does not touch indentation (spaces at the start of a line). That is LT02’s job.


LT02

Indentation must use spaces (no tabs) and be a multiple of 4.

-- bad: tab indentation
select
	id,
	name
from t

-- bad: 2-space indentation
select
  id,
  name
from t

-- good
select
    id,
    name
from t

LT03

Lines must not have trailing whitespace (spaces or tabs before the newline).

Auto-fixable. Trailing whitespace is invisible in most editors but causes noisy diffs and is flagged by many code review tools.


LT05

Lines must not exceed max_line_length characters (default: 120).

Configure in squint.toml:

[rules.LT05]
max_line_length = 100

Or override on the command line:

squint --max-line-length 100 models/

LT06

No space between a function name and (.

Auto-fixable. Equivalent to the function-paren check in LT01, but reported separately with a more targeted message.

-- bad
select count (id) from t

-- good
select count(id) from t

LT07

CTE closing ) must be on its own line.

-- bad
with cte as (select id from t)
select id from cte

-- good
with cte as (
    select id from t
)
select id from cte

LT08

A blank line is required after each CTE closing ).

-- bad
with cte as (
    select id from t
)
select id from cte   -- no blank line after the closing )

-- good
with cte as (
    select id from t
)

select id from cte

LT09

SQL clauses must appear in the standard order.

Expected order: SELECT → FROM → WHERE → GROUP BY → HAVING → ORDER BY → LIMIT

All clauses are optional; the rule only fires when a clause appears after one with a higher rank in the expected sequence.

-- bad: WHERE before FROM
select a
where a = 1
from t

-- bad: ORDER BY before GROUP BY
select a, count(*)
from t
group by a
order by a
having count(*) > 1   -- HAVING after ORDER BY is out of order

-- good
select a, count(*)
from t
where a is not null
group by a
having count(*) > 1
order by a
limit 100

Subqueries and CTEs each have their own independent clause order context. UNION ALL resets the context for each SELECT.
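The rank logic above can be sketched as follows (illustration only; detecting the clause keywords in the token stream is elided):

```python
# Each clause has a rank; a violation is any clause whose rank is lower
# than the highest rank already seen in the current context.
CLAUSE_RANK = {
    "select": 0, "from": 1, "where": 2, "group by": 3,
    "having": 4, "order by": 5, "limit": 6,
}

def out_of_order(clauses: list[str]) -> list[str]:
    """Return the clauses that appear after a higher-ranked clause."""
    violations, highest = [], -1
    for clause in clauses:
        rank = CLAUSE_RANK[clause]
        if rank < highest:
            violations.append(clause)
        else:
            highest = rank
    return violations
```

Because only the running maximum is tracked, all clauses remain optional and a fresh context (subquery, CTE, or post-UNION SELECT) simply starts over with an empty maximum.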


LT10

DISTINCT / ALL must be on the same line as SELECT.

-- bad
select
distinct a, b
from t

-- good
select distinct a, b
from t

LT11

Set operators (UNION, INTERSECT, EXCEPT) must be on their own line.

-- bad
select a from t union all select a from u

-- good
select a from t
union all
select a from u

LT12

File must end with exactly one trailing newline.

Auto-fixable. Files with no trailing newline get one added; files with multiple trailing newlines have the extras removed.

Convention Rules (CV)


CV03

Trailing comma policy in SELECT clauses.

Configure whether a trailing comma before FROM is forbidden (default) or required:

[rules.CV03]
select_clause_trailing_comma = "forbid"   # default
# select_clause_trailing_comma = "require"

forbid (default)

-- bad: trailing comma
select
    id,
    name,
from orders

-- good
select
    id,
    name
from orders

require

-- bad: missing trailing comma
select
    id,
    name
from orders

-- good
select
    id,
    name,
from orders

CV04

Consistent row-counting syntax: COUNT(*) vs COUNT(1).

Configure which form is required:

[rules.CV04]
prefer_count_1 = false   # default: require COUNT(*)
# prefer_count_1 = true  # require COUNT(1)

-- bad (with default config)
select count(1) from orders

-- good
select count(*) from orders

CV05

NULL comparisons must use IS NULL / IS NOT NULL.

Auto-fixable. Using = NULL or != NULL is never correct — SQL’s three-valued logic means those comparisons always return NULL, not TRUE or FALSE.

-- bad
select id from orders where deleted_at = null
select id from orders where status != null

-- good (auto-fixed)
select id from orders where deleted_at is null
select id from orders where status is not null
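The behaviour is easy to demonstrate with any SQL engine; here with Python's built-in SQLite:

```python
import sqlite3

# `deleted_at = null` evaluates to NULL (not TRUE), so the WHERE clause
# filters out every row -- including the row whose deleted_at IS NULL.
conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id integer, deleted_at text)")
conn.execute("insert into orders values (1, null), (2, '2024-01-01')")

eq_null = conn.execute(
    "select id from orders where deleted_at = null").fetchall()
is_null = conn.execute(
    "select id from orders where deleted_at is null").fetchall()

print(eq_null)  # [] -- matches nothing
print(is_null)  # [(1,)]
```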

CV10

Identifiers must use a consistent quoting style within a file.

Mixing quoted and unquoted forms of the same identifier in a single file is flagged. Only the first inconsistency per identifier name is reported.

Handles double quotes ("col"), backticks (`col`), and brackets ([col]).

-- bad: 'col' used both unquoted and quoted
select col, "col" from t

-- bad: 'my_table' used both ways
select a
from my_table t
join "my_table" u on t.id = u.id

-- good: consistent throughout
select col from my_table

-- also good: consistently quoted
select "col" from "my_table"

Note: The comparison is case-insensitive. COL (unquoted) and "col" (quoted) are considered the same identifier for this rule.

Aliasing Rules (AL)


AL02

Column aliases must use the explicit AS keyword.

-- bad: implicit alias (no AS)
select id order_id, name customer_name from orders

-- good
select id as order_id, name as customer_name from orders

AL03

Expressions in SELECT must have an alias.

Bare column names do not need an alias. Function calls, operators, and multi-token expressions do.

-- bad
select count(id), price * 1.1, coalesce(name, 'unknown')
from orders

-- good
select
    count(id) as order_count,
    price * 1.1 as price_with_tax,
    coalesce(name, 'unknown') as display_name
from orders

AL04

Table aliases must be unique within a query.

-- bad: 'o' used twice
select o.id, o.name
from orders o
join order_items o on o.order_id = o.id

-- good
select o.id, oi.name
from orders o
join order_items oi on o.id = oi.order_id

AL05

Table aliases that are defined but never referenced.

-- bad: alias 'o' defined but never used
select id, name from orders o

-- good: alias used
select o.id, o.name from orders o

-- also good: no alias at all
select id, name from orders

AL06

Table alias length must be within configured bounds.

Configure minimum and maximum alias length:

[rules.AL06]
min_alias_length = 1   # default: 1  (0 = no minimum)
max_alias_length = 0   # default: 0  (0 = no maximum)

Example with min_alias_length = 3:

-- bad: alias 'o' is too short (< 3)
select o.id from orders o

-- good
select ord.id from orders ord

AL08

Column aliases in SELECT must be unique (case-insensitive).

-- bad: 'id' used twice as alias
select
    order_id as id,
    customer_id as id
from orders

-- good
select
    order_id,
    customer_id
from orders

AL09

A column must not be aliased to itself (col AS col).

-- bad: redundant self-alias
select id as id, name as name from orders

-- good
select id, name from orders

Ambiguous Rules (AM)


AM01

SELECT DISTINCT with GROUP BY is redundant.

DISTINCT has no effect when results are already deduplicated by GROUP BY. One of them should be removed.

-- bad: DISTINCT is redundant
select distinct status, count(*)
from orders
group by status

-- good
select status, count(*)
from orders
group by status

AM02

UNION must be followed by ALL or DISTINCT.

Bare UNION is ambiguous — in most databases it implies DISTINCT, but this is easily confused with UNION ALL. Be explicit.

-- bad: ambiguous UNION
select a from t
union
select a from u

-- good
select a from t
union all
select a from u

-- also good
select a from t
union distinct
select a from u
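The difference in behaviour is easy to verify; here with Python's built-in SQLite (which, like most engines, treats bare UNION as deduplicating):

```python
import sqlite3

# Bare UNION deduplicates; UNION ALL keeps duplicates. AM02 asks you to
# say which one you mean.
conn = sqlite3.connect(":memory:")
union = conn.execute("select 1 as a union select 1").fetchall()
union_all = conn.execute("select 1 as a union all select 1").fetchall()

print(union)      # [(1,)]
print(union_all)  # [(1,), (1,)]
```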

AM05

Implicit comma joins in FROM are forbidden; use explicit JOIN syntax.

Comma-separated tables in FROM are an old SQL syntax for cross joins / implicit inner joins. They are harder to read and error-prone.

-- bad: implicit join
select a.id, b.name
from orders a, customers b
where a.customer_id = b.id

-- good
select a.id, b.name
from orders a
inner join customers b on a.customer_id = b.id

AM06

GROUP BY / ORDER BY must use a consistent reference style.

Configure the required style:

[rules.AM06]
group_by_and_order_by_style = "explicit"    # default
# group_by_and_order_by_style = "implicit"
# group_by_and_order_by_style = "consistent"

Style               Meaning
explicit (default)  All references must be column names; positional numbers are flagged
implicit            All references must be positional numbers; column names are flagged
consistent          Either style is allowed, but mixing within one clause is flagged

explicit (default):

-- bad: positional reference
select status, count(*) from orders group by 1

-- good
select status, count(*) from orders group by status

implicit:

-- bad: named reference
select status, count(*) from orders group by status

-- good
select status, count(*) from orders group by 1

References Rules (RF)


RF01

Qualified column references must use a known table alias.

When a column is qualified (alias.column), the qualifier must be an alias (or bare table name) that was defined in the FROM clause of the same query.

-- bad: 'x' is not a known alias
select x.id from orders o where x.status = 'active'

-- good
select o.id from orders o where o.status = 'active'

Note: RF01 is best-effort. It may produce false negatives for complex subquery and CTE patterns where aliases are defined in nested scopes.


RF02

Wildcard column references are not allowed; list columns explicitly.

SELECT * and SELECT table.* are flagged. In production SQL, implicit column lists cause breakage when upstream tables change.

-- bad
select * from orders
select o.* from orders o

-- good: list columns explicitly
select id, status, created_at from orders

Exceptions — not flagged:

  • COUNT(*) — the star is inside function parens, not a column reference
  • COUNT(DISTINCT *) — same
  • Arithmetic: select a * b from t — star preceded by a value token is an operator

-- ok: COUNT(*) is not a wildcard column reference
select count(*) from orders

-- ok: arithmetic multiplication
select price * quantity as total from order_items

Structure Rules (ST)


ST03

CTEs that are defined but never referenced.

An unused CTE is dead code — it adds query planning overhead and confuses readers.

-- bad: 'unused_cte' is never referenced
with
    used_cte as (select id from orders),
    unused_cte as (select id from customers)
select id from used_cte

-- good
with used_cte as (
    select id from orders
)
select id from used_cte

ST08

COUNT(DISTINCT *) is not valid SQL.

DISTINCT * inside COUNT is not standard SQL and is rejected by most databases. Use COUNT(DISTINCT col) with an explicit column name instead.

-- bad
select count(distinct *) from orders

-- good
select count(distinct id) from orders

Jinja Rules (JJ)


JJ01

Jinja tags must have single-space padding inside their delimiters.

This applies to both expression tags ({{ }}) and statement tags ({% %}).

-- bad: no spaces inside delimiters
select {{my_col}} from {{ref('my_model')}}
{% if condition %}

-- good
select {{ my_col }} from {{ ref('my_model') }}
{% if condition %}

Details

Pattern                      Example          Status
Expression, no padding       {{col}}          Flagged
Expression, correct padding  {{ col }}        OK
Statement, no padding        {%if cond%}      Flagged
Statement, correct padding   {% if cond %}    OK
Strip whitespace modifier    {%- if cond -%}  OK

The rule checks the raw tag text including delimiters. It does not parse the Jinja expression content.
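A simplified version of the check can be expressed as a regex pass over each line (a hypothetical sketch, not the real implementation):

```python
import re

# Find every Jinja tag on a line, then accept only tags with exactly one
# space inside each delimiter; the whitespace-control variants
# ({{- ... -}} and {%- ... -%}) are also accepted.
TAG = re.compile(r"(\{\{|\{%)(.*?)(\}\}|%\})")
WELL_PADDED = re.compile(r"^(\{\{-?|\{%-?) \S(?:.*\S)? (-?\}\}|-?%\})$")

def jj01_violations(line: str) -> list[str]:
    """Return the raw text of every badly padded tag on the line."""
    return [
        m.group(0)
        for m in TAG.finditer(line)
        if not WELL_PADDED.match(m.group(0))
    ]
```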

Contributing

See CONTRIBUTING.md in the repository for the full contributing guide, including:

  • Prerequisites and getting started
  • Running tests and code quality gates
  • Commit conventions (Conventional Commits)
  • Step-by-step guide to adding a new rule
  • Key implementation notes (token model, clause context, bracket depth)
  • Pull request checklist

Quick reference

# Clone and build
git clone https://github.com/IlllIIIlllIlIlIIllllIIIlI/squint
cd squint
cargo build

# Run all tests
cargo test

# Code quality (same checks as CI)
cargo fmt --check
cargo clippy -- -D warnings
cargo test
cargo deny check

# Fuzz testing (requires nightly)
cargo install cargo-fuzz
cargo +nightly fuzz run fuzz_lint fuzz/seeds/fuzz_lint -- -max_total_time=60

Architecture

The linter pipeline:

Source text
  └─ Lexer (logos DFA)
       └─ Vec<LexedToken>    zero-copy slices into source
            └─ Parser
                 └─ Vec<Node>    flat token list, no AST
                      └─ Rules
                           └─ Vec<Violation>
                           └─ Vec<Fix>

Key design decisions:

  • No AST. Rules operate on a flat Vec<Node> with bracket depth tracking. This keeps the architecture simple and rules easy to write.
  • Zero-copy tokens. Node::value is a &'src str slice into the original source — no heap allocation for token text.
  • Suppression is transparent. Rules emit every violation they find. lint_source filters out suppressed lines after collecting from all rules.

Adding a rule

See the Contributing Guide for the 6-step walkthrough with full code examples.