
ReDim example doesn't use ReDim

Open BillWagner opened this issue 3 months ago • 2 comments

Type of issue

Missing information

Description

The example for the ReDim statement doesn't use arrays or ReDim. It should use an array with ReDim instead of a List.

Page URL

https://learn.microsoft.com/en-us/dotnet/visual-basic/language-reference/statements/redim-statement

Content source URL

https://github.com/dotnet/docs/blob/main/docs/visual-basic/language-reference/statements/redim-statement.md

Document Version Independent Id

feb113f1-26c6-b103-b27a-bbce9f150dac

Platform Id

0a6efd18-8a8e-95f4-7e77-684bf73d285f

Article author

@BillWagner

Metadata

  • ID: a1565b01-9263-3e6a-c1e8-3be60f55a379
  • PlatformId: 0a6efd18-8a8e-95f4-7e77-684bf73d285f
  • Service: dotnet-visualbasic

BillWagner avatar Oct 31 '25 13:10 BillWagner

@rishabhjain1712

rishabhjain1712 avatar Nov 08 '25 10:11 rishabhjain1712

What I would do

I've added a more detailed plan below, under "The plan".

@BillWagner let me know what you think.

There are multiple things going on here; fixing them all across the entire docs set is what I'd switch to Go for, but this covers just this one area for now.

Step 10

# 1) Confirm Class1.vb exists
ls -l samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB/Class1.vb

# 2) List existing classN.vb files (samples and docs)
ls -1 samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB | sed -n '1,200p'
ls -1 docs/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB | sed -n '1,200p' || true

# 3) List .md files we will touch
ls -ld docs/visual-basic/language-reference/statements
ls -1 docs/visual-basic/language-reference/statements/*.md

# 4) Snippet ids embedded in Class1.vb
grep -oE '<Snippet[0-9]+>' samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB/Class1.vb \
  | sed -E 's/<Snippet([0-9]+)>/\1/' | sort -n -u

# 5) Which statement pages reference Class1.vb#N (and which ids)
grep -RIn 'Class1.vb#[0-9]\+' docs/visual-basic/language-reference/statements || true
grep -RhoE 'Class1\.vb#[0-9]+' docs/visual-basic/language-reference/statements \
  | sed -E 's/.*#([0-9]+)/\1/' | sort -n -u || true

# 6) Show ms.assetid in each statements .md (first 60 lines)
for md in docs/visual-basic/language-reference/statements/*.md; do
  printf "%s : " "$md"
  id=$(awk 'NR<=60 && tolower($0) ~ /^ms\.assetid:/ { print; exit }' "$md")
  echo "${id:-(no ms.assetid)}"
done
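
A possible extra cross-check (my sketch, not in the original steps): compare the id lists from steps 4 and 5 to spot snippet ids that exist in Class1.vb but are referenced by no statements page, and vice versa.

# 7) Orphaned ids: unique to Class1.vb (column 1) vs unique to the .md references (column 2)
comm -3 \
  <(grep -oE '<Snippet[0-9]+>' samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB/Class1.vb \
      | sed -E 's/<Snippet([0-9]+)>/\1/' | sort -u) \
  <(grep -RhoE 'Class1\.vb#[0-9]+' docs/visual-basic/language-reference/statements \
      | sed -E 's/.*#([0-9]+)/\1/' | sort -u)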


Step 20

Then I'd do this as a dry run first, basically searching for each asset id and confirming from there, scoped to docs/visual-basic/language-reference/statements.

#!/usr/bin/env bash
set -euo pipefail

SRC_CLASS1="samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB/Class1.vb"
SAMPLES_DIR="samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB"
DOCS_SNIPPETS_DIR="docs/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB"
MD_DIR="docs/visual-basic/language-reference/statements"

echo "1) Existence checks"
[ -f "$SRC_CLASS1" ] && echo " - Found Class1.vb: $SRC_CLASS1" || echo " - MISSING: $SRC_CLASS1"
echo " - sample snippet files (samples):"
ls -1 "$SAMPLES_DIR" 2>/dev/null | sed -n '1,200p' || echo "   (none)"
echo " - sample snippet files (docs):"
ls -1 "$DOCS_SNIPPETS_DIR" 2>/dev/null | sed -n '1,200p' || echo "   (none)"
echo " - statement md files:"
ls -1 "$MD_DIR"/*.md 2>/dev/null | sed -n '1,200p' || echo "   (none)"
echo

echo "2) Snippet ids embedded in Class1.vb"
if [ -f "$SRC_CLASS1" ]; then
  grep -oE '<Snippet[0-9]+>' "$SRC_CLASS1" | sed -E 's/<Snippet([0-9]+)>/\1/' | sort -n -u || true
else
  echo "  (Class1.vb not found)"
fi
echo

echo "3) Snippet ids referenced from statements .md files (Class1.vb#N)"
grep -RhoE 'Class1\.vb#[0-9]+' "$MD_DIR" 2>/dev/null | sed -E 's/.*#([0-9]+)/\1/' | sort -n -u || echo "  (no references found)"
echo

echo "4) For each statements .md: ms.assetid (first 60 lines) and matching .vb files containing that UUID"
printf "md_file,ms_assetid,matched_vb_paths\n"
for md in "$MD_DIR"/*.md; do
  [ -f "$md" ] || continue
  assetid=$(awk 'NR<=60 && tolower($0) ~ /^ms\.assetid:/{print $0; exit}' "$md" \
            | sed -E 's/^[[:space:]]*ms\.assetid:[[:space:]]*//I' || true)
  if [ -z "$assetid" ]; then
    printf "%s,%s,%s\n" "$md" "(none)" "(no match)"
    continue
  fi
  matches=()
  for base in "$SAMPLES_DIR" "$DOCS_SNIPPETS_DIR"; do
    [ -d "$base" ] || continue
    while IFS= read -r -d $'\0' f; do
      if grep -qF "$assetid" "$f"; then
        matches+=("$f")
      fi
    done < <(find "$base" -type f -name '*.vb' -print0)
  done
  if [ ${#matches[@]} -eq 0 ]; then
    printf "%s,%s,%s\n" "$md" "$assetid" "(no match)"
  else
    joined="$(printf "%s;" "${matches[@]}")"
    joined="${joined%;}"
    printf "%s,%s,%s\n" "$md" "$assetid" "$joined"
  fi
done
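
A possible way to drive it (file names here are my own; assume the script above is saved as step20-audit.sh):

chmod +x step20-audit.sh
./step20-audit.sh | tee assetid-audit.txt
# The CSV section from step 4 lines up nicely with column:
grep ',' assetid-audit.txt | column -s, -t | less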

The plan

If the classNN.vb files already exist, the missing step is to sync the ms.assetid UUIDs from the VB files into the Markdown YAML so pages and snippets are reliably linked. Below is a safe, practical plan plus a ready-to-run script that does exactly that: it finds the VB file for each statements .md, extracts the UUID from the VB, and inserts or updates ms.assetid in the Markdown YAML. The script runs in dry-run mode by default and only writes when you pass --apply.

What the script does (summary)

  • Scans docs/visual-basic/language-reference/statements/*.md.
  • For each .md it:
    • reads any existing ms.assetid in the YAML (first 60 lines),
    • finds the snippet reference(s) in the page (e.g., Class1.vb#94 or docs/.../class94.vb#94),
    • locates the corresponding .vb file under samples/... or docs/samples/...,
    • extracts a UUID from the VB file (prefers an ms.assetid: line, falls back to any UUID in comments; see the sketch just below),
    • reports what it would change (dry run) or updates the YAML to match the VB UUID (when --apply is used).
  • Honors --backup to save original .md files before editing.
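
To make the extraction rule concrete, here is a minimal sketch against a made-up snippet file (the VB body is hypothetical, but shows the ReDim-based shape the page example should have; the UUID is the one from this issue's metadata):

cat > /tmp/class94.vb <<'EOF'
' <Snippet94>
' ms.assetid: feb113f1-26c6-b103-b27a-bbce9f150dac
Dim numbers(10) As Integer
ReDim Preserve numbers(20)
' </Snippet94>
EOF
# Preferred: an explicit ms.assetid: line...
grep -Eio 'ms\.assetid:[[:space:]]*[0-9a-fA-F-]{36}' /tmp/class94.vb
# ...with a fallback to any UUID-shaped token in the file:
grep -Eo '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}' /tmp/class94.vb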

Script: sync-assetids-bulk.sh

Save this in the repo root, make it executable (chmod +x sync-assetids-bulk.sh), and run the dry run first.

#!/usr/bin/env bash
set -euo pipefail

# Config - adjust if your repo layout differs
MD_DIR="docs/visual-basic/language-reference/statements"
SEARCH_DIRS=("docs/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB" "samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrStatements/VB")
DRY_RUN=true
BACKUP=false

for arg in "$@"; do
  case "$arg" in
    --apply) DRY_RUN=false ;;
    --backup) BACKUP=true ;;
    *) echo "Unknown option: $arg"; echo "Usage: $0 [--apply] [--backup]"; exit 1 ;;
  esac
done

# Helpers
extract_md_assetid() {
  local md="$1"
  awk 'NR<=60 && tolower($0) ~ /^ms\.assetid:/ { print $0; exit }' "$md" \
    | sed -E 's/^[[:space:]]*ms\.assetid:[[:space:]]*//I' || true
}

find_vb_for_md() {
  local md="$1"
  local -n out=$2
  out=()

  # 1) look for explicit docs path classNN.vb
  while IFS= read -r match; do
    id="$(echo "$match" | sed -E 's/.*class([0-9]+)\.vb.*/\1/')"
    # prefer docs copy then samples
    for base in "${SEARCH_DIRS[@]}"; do
      candidate="$base/class${id}.vb"
      [ -f "$candidate" ] && out+=("$candidate")
    done
  done < <(grep -oE 'class[0-9]+\.vb#[0-9]+' "$md" 2>/dev/null || true)

  # 2) look for Class1.vb#N references
  while IFS= read -r match; do
    id="$(echo "$match" | sed -E 's/.*#([0-9]+).*/\1/')"
    # check docs then samples for classN.vb
    for base in "${SEARCH_DIRS[@]}"; do
      candidate="$base/class${id}.vb"
      if [ -f "$candidate" ]; then
        out+=("$candidate")
      else
        # fallback: Class1.vb (the snippet is embedded there)
        candidate2="$base/Class1.vb"
        [ -f "$candidate2" ] && out+=("$candidate2")
      fi
    done
  done < <(grep -oE 'Class1\.vb#[0-9]+' "$md" 2>/dev/null || true)

  # dedupe
  if [ ${#out[@]} -gt 1 ]; then
    # unique
    mapfile -t out < <(printf "%s\n" "${out[@]}" | awk '!seen[$0]++')
  fi
}

extract_uuid_from_vb() {
  local vb="$1"
  # prefer explicit ms.assetid: line
  uuid="$(grep -Eio 'ms\.assetid:[[:space:]]*[0-9a-fA-F-]{36}' "$vb" 2>/dev/null | head -n1 | sed -E 's/.*ms\.assetid:[[:space:]]*//I' || true)"
  if [ -n "$uuid" ]; then
    echo "$uuid"
    return
  fi
  # fallback: any UUID-like token
  uuid="$(grep -Eo '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}' "$vb" 2>/dev/null | head -n1 || true)"
  echo "$uuid"
}

update_md_assetid() {
  local md="$1"
  local newid="$2"
  if [ "$BACKUP" = true ]; then
    cp -p "$md" "$md.bak"
  fi

  # If no YAML front matter, add one at top
  if ! awk 'NR<=30 && $0=="---"{c++} END{exit (c>=2 ? 0 : 1)}' "$md" 2>/dev/null; then
    tmp="$(mktemp)"
    printf '---\nms.assetid: %s\n---\n\n' "$newid" > "$tmp"
    cat "$md" >> "$tmp"
    mv "$tmp" "$md"
    echo "Inserted YAML with ms.assetid: $newid into $md"
    return
  fi

  # YAML exists: update or insert ms.assetid inside YAML
  existing="$(awk 'BEGIN{in=0} NR<=200{ if($0=="---"){ if(in==0){in=1; next} else {exit}} if(in==1 && tolower($0) ~ /^ms\.assetid:/){print $0; exit}}' "$md" || true)"
  if [ -n "$existing" ]; then
    perl -0777 -pe "s/(^---\\s*\\n.*?\\n)ms\\.assetid:[ \\t]*[0-9a-fA-F-]+(.*?\\n---)/\$1ms.assetid: $newid\$2/smi" -i "$md"
    echo "Updated ms.assetid in $md -> $newid"
  else
    perl -0777 -pe "s/^(---\\s*\\n)/\$1ms.assetid: $newid\\n/sm" -i "$md"
    echo "Inserted ms.assetid: $newid into YAML of $md"
  fi
}

# Main loop
printf "md_file,existing_ms_assetid,found_vb,found_uuid,action\n"
for md in "$MD_DIR"/*.md; do
  [ -f "$md" ] || continue
  existing="$(extract_md_assetid "$md" || true)"
  # find candidate vb files for this md
  declare -a vbs
  find_vb_for_md "$md" vbs

  if [ ${#vbs[@]} -eq 0 ]; then
    # no snippet reference found; nothing to sync
    printf "%s,%s,%s,%s,%s\n" "$md" "${existing:-(none)}" "(no-vb-found)" "(no-uuid)" "skip"
    continue
  fi

  # prefer the first vb that yields a UUID
  chosen_vb=""
  chosen_uuid=""
  for vb in "${vbs[@]}"; do
    uuid="$(extract_uuid_from_vb "$vb" || true)"
    if [ -n "$uuid" ]; then
      chosen_vb="$vb"
      chosen_uuid="$uuid"
      break
    fi
  done

  if [ -z "$chosen_uuid" ]; then
    printf "%s,%s,%s,%s,%s\n" "$md" "${existing:-(none)}" "$(printf "%s;" "${vbs[@]}")" "(no-uuid)" "skip"
    continue
  fi

  if [ -z "$existing" ]; then
    if $DRY_RUN; then
      printf "%s,%s,%s,%s,%s\n" "$md" "(none)" "$chosen_vb" "$chosen_uuid" "would-insert"
    else
      update_md_assetid "$md" "$chosen_uuid"
      printf "%s,%s,%s,%s,%s\n" "$md" "(none)" "$chosen_vb" "$chosen_uuid" "inserted"
    fi
    continue
  fi

  if [ "$existing" = "$chosen_uuid" ]; then
    printf "%s,%s,%s,%s,%s\n" "$md" "$existing" "$chosen_vb" "$chosen_uuid" "ok"
  else
    if $DRY_RUN; then
      printf "%s,%s,%s,%s,%s\n" "$md" "$existing" "$chosen_vb" "$chosen_uuid" "would-update"
    else
      update_md_assetid "$md" "$chosen_uuid"
      printf "%s,%s,%s,%s,%s\n" "$md" "$existing" "$chosen_vb" "$chosen_uuid" "updated"
    fi
  fi
done

if $DRY_RUN; then
  echo
  echo "Dry run complete. Re-run with --apply to write changes. Use --backup to keep .md.bak copies."
else
  echo
  echo "Apply complete. Review changes and commit on a branch."
fi

How to run (recommended)

Dry run first (no changes):

chmod +x sync-assetids-bulk.sh
./sync-assetids-bulk.sh

Inspect the CSV-style output to see which files would be inserted or updated.

Apply with backups:

./sync-assetids-bulk.sh --apply --backup

This updates .md files in place and creates .md.bak backups.
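
One way to review before staging (a sketch): diff each backup against the edited page.

for bak in docs/visual-basic/language-reference/statements/*.md.bak; do
  diff -u "$bak" "${bak%.bak}" || true
done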

Review and commit:

git checkout -b docs/sync-assetids
git add docs/visual-basic/language-reference/statements/*.md
git commit -m "docs: sync ms.assetid in statements pages from snippet files"
git push origin HEAD

Next steps I recommend

Run the dry run and paste the output here if you want me to parse it and highlight which pages to prioritize.

If many pages would be updated, consider splitting into smaller PRs (e.g., 10–20 pages per PR) to make review easier.
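
For example, a rough batching sketch (assumes the edits sit on a branch cut from main):

git diff --name-only main -- 'docs/visual-basic/language-reference/statements/*.md' \
  | split -l 15 - /tmp/assetid-batch-
for batch in /tmp/assetid-batch-*; do
  echo "== PR batch: $batch =="
  cat "$batch"
done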

After applying, run the docs validation/linter used by the repo (if available) before opening the PR. I have one of my own; it would look like the program below, though really it only does header checking.

package main

import (
	"bufio"
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"flag"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"regexp"
	"sort"
	"strings"
	"time"
)

var (
	flagLint                = flag.Bool("lint", false, "lint header only (read-only); exit 2 on issues")
	flagFix                 = flag.Bool("fix", false, "apply fixes (insert hash when missing / normalize per policy)")
	flagInitHeader          = flag.Bool("init-header", false, "when no header exists, create default header (# filename, # File Hash: )")
	flagEnsureHashInFirst15 = flag.Bool("ensure-hash-in-first15", false, "ensure canonical File Hash is within first 15 lines after fixes")
	flagDryRun              = flag.Bool("dry-run", false, "show changes but do not write files")
	flagVerbose             = flag.Bool("v", false, "verbose output")
	flagCheck               = flag.Bool("check", false, "Compare calculated hash with hash in file")
	flagInsert              = flag.Bool("insert", false, "Insert generated hashes into # File Hash: lines")
	flagGenEntireFile       = flag.Bool("gentirefile", false, "Generate hash for entire file without exclusions")
	flagCheckEntireFile     = flag.String("checkgentirefile", "", "Check hash against entire file content (provide expected hash)")
	flagBulkCheck           = flag.String("bulk-check", "", "Check hashes for multiple files from a glob pattern or file list")
	flagBulkInsert          = flag.String("bulk-insert", "", "Insert hashes into # File Hash: lines for multiple files from a glob pattern or file list")
	flagBulkGent            = flag.String("bulk-gent", "", "Generate entire file hashes for multiple files from a glob pattern or file list")
	flagBulkLint            = flag.String("bulk-lint", "", "Lint headers for multiple files from a glob pattern or file list")
	flagBulkFix             = flag.String("bulk-fix", "", "Fix headers for multiple files from a glob pattern or file list")
	flagJson                = flag.Bool("json", false, "Output results in JSON format")
	flagConfig              = flag.String("config", "", "Path to JSON config file (optional)")
	flagUsageJsonStdout     = flag.Bool("usage-json-stdout", false, "Output usage JSON to stdout instead of file")
)

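// hashLinePattern matches a comment-style header line carrying a hash label,
// e.g. "# File Hash: <hex>"; the (?i) makes only the trailing "hash:" case-insensitive.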
var hashLinePattern = regexp.MustCompile(`^(#\s*)([^:\n]*?)(?i)(hash:)`)

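// headerLineRe parses generic "# Field: value" header lines into field/value pairs.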
var headerLineRe = regexp.MustCompile(`^#\s*(.*?):\s*(.*)$`)

type Config struct {
	DefaultHashLabel string `json:"default_hash_label"`
	JsonOutput       bool   `json:"json_output"`
	Verbose          bool   `json:"verbose"`
}

type Result struct {
	File           string `json:"file"`
	Status         string `json:"status"`
	CalculatedHash string `json:"calculated_hash,omitempty"`
	FileHash       string `json:"file_hash,omitempty"`
	Error          string `json:"error,omitempty"`
}

func loadConfig(path string) (*Config, error) {
	if path == "" {
		return &Config{DefaultHashLabel: "File Hash", JsonOutput: false, Verbose: false}, nil
	}
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var config Config
	if err := json.NewDecoder(f).Decode(&config); err != nil {
		return nil, err
	}
	return &config, nil
}

func outputResult(config *Config, result Result) {
	if config.JsonOutput || *flagJson {
		if data, err := json.Marshal(result); err == nil {
			fmt.Println(string(data))
		}
	} else {
		if result.Status == "OK" {
			fmt.Printf("OK: %s\n", result.File)
		} else if result.Status == "FAIL" {
			fmt.Printf("FAIL: %s\n", result.File)
			if result.CalculatedHash != "" {
				fmt.Printf("Calculated: %s\n", result.CalculatedHash)
			}
			if result.FileHash != "" {
				fmt.Printf("In file:    %s\n", result.FileHash)
			}
		} else if result.Error != "" {
			fmt.Printf("ERROR: %s - %s\n", result.File, result.Error)
		}
		if *flagVerbose || config.Verbose {
			fmt.Printf("Verbose: Processed %s\n", result.File)
		}
	}
}

func outputBulkResults(config *Config, results []Result) {
	if config.JsonOutput || *flagJson {
		if data, err := json.Marshal(results); err == nil {
			fmt.Println(string(data))
		}
	} else {
		for _, result := range results {
			outputResult(config, result)
			fmt.Println()
		}
	}
}

func readAllLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var lines []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		return nil, err
	}
	return lines, nil
}

func readLinesWithNewline(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var lines []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines = append(lines, scanner.Text()+"\n")
	}
	if err := scanner.Err(); err != nil {
		return nil, err
	}
	return lines, nil
}

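// writeAtomic writes lines to a temp file in the same directory, preserves the
// original file mode, then renames over the target so a crash never leaves a
// half-written file behind.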
func writeAtomic(path string, lines []string) error {
	dir := filepath.Dir(path)
	tmpfile, err := os.CreateTemp(dir, ".__tmp_hash_*")
	if err != nil {
		return err
	}
	tmpPath := tmpfile.Name()
	defer func() {
		tmpfile.Close()
		os.Remove(tmpPath)
	}()
	writer := bufio.NewWriter(tmpfile)
	for _, ln := range lines {
		_, err := writer.WriteString(ln)
		if err != nil {
			return err
		}
		if !strings.HasSuffix(ln, "\n") {
			_, err = writer.WriteString("\n")
			if err != nil {
				return err
			}
		}
	}
	if err := writer.Flush(); err != nil {
		return err
	}
	// fsync is best-effort; non-fatal on some platforms
	_ = tmpfile.Sync()
	if err := tmpfile.Close(); err != nil {
		return err
	}
	// preserve mode
	info, err := os.Stat(path)
	var mode os.FileMode = 0644
	if err == nil {
		mode = info.Mode()
	}
	// best-effort: keep the original mode if we could stat it
	_ = os.Chmod(tmpPath, mode)
	if err := os.Rename(tmpPath, path); err != nil {
		return err
	}
	return nil
}

// computeSHA256Hex computes SHA256 of entire file
func computeSHA256Hex(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

// sha256OfString computes SHA256 of a string
func sha256OfString(s string) string {
	h := sha256.Sum256([]byte(s))
	return hex.EncodeToString(h[:])
}

// calculateFileHash computes hash excluding header lines (first 15 lines checked for hash line)
// Returns: calculatedHash, hashInFile, error
func calculateFileHash(path string) (string, string, error) {
	lines, err := readLinesWithNewline(path)
	if err != nil {
		return "", "", err
	}

	hashIndex := -1
	hashValue := ""
	limit := 15
	if len(lines) < limit {
		limit = len(lines)
	}
	for i := 0; i < limit; i++ {
		if hashLinePattern.MatchString(lines[i]) {
			hashIndex = i
			after := strings.SplitN(lines[i], ":", 2)
			if len(after) > 1 {
				hashValue = strings.TrimSpace(after[1])
				if len(hashValue) > 64 {
					hashValue = hashValue[:64]
				}
			}
			break
		}
	}

	if hashIndex == -1 {
		return "", "", nil // no hash line found
	}

	var b strings.Builder
	for i := 0; i < len(lines); i++ {
		if i == hashIndex {
			continue
		}
		b.WriteString(lines[i])
	}
	calc := sha256OfString(b.String())
	return calc, hashValue, nil
}

// parseTopHeader finds contiguous header block from top. Returns header entries and headerEnd index (exclusive).
func parseTopHeader(lines []string) ([]HeaderEntry, int) {
	var entries []HeaderEntry
	end := 0
	for i, ln := range lines {
		if headerLineRe.MatchString(ln) {
			m := headerLineRe.FindStringSubmatch(ln)
			field := strings.TrimSpace(m[1])
			value := m[2]
			entries = append(entries, HeaderEntry{
				Line:  i,
				Field: field,
				Value: value,
				Raw:   ln,
			})
			end = i + 1
			continue
		}
		// stop at first non-header line
		break
	}
	return entries, end
}

func hasHashField(entries []HeaderEntry) bool {
	for _, e := range entries {
		if strings.Contains(strings.ToLower(e.Field), "hash") {
			return true
		}
	}
	return false
}

func findFirstHashIndex(entries []HeaderEntry) (int, *HeaderEntry) {
	for _, e := range entries {
		if strings.Contains(strings.ToLower(e.Field), "hash") {
			entry := e // copy so we don't return the address of the loop variable (pre-Go 1.22 footgun)
			return entry.Line, &entry
		}
	}
	return -1, nil
}

func normalizeHeaderLine(field, value string) string {
	// preserve field exact text, normalize spacing
	return fmt.Sprintf("# %s: %s", strings.TrimSpace(field), strings.TrimSpace(value))
}

func smallHeaderDiff(oldLines, newLines []string, headerEndBefore, headerEndAfter int) string {
	var buf bytes.Buffer
	// show only first 20 lines around header region
	limit := headerEndAfter + 3
	if limit > len(newLines) {
		limit = len(newLines)
	}
	for i := 0; i < limit; i++ {
		old := ""
		if i < len(oldLines) {
			old = oldLines[i]
		}
		cur := newLines[i] // renamed from "new" to avoid shadowing the builtin
		if old != cur {
			buf.WriteString(fmt.Sprintf("-%3d: %s\n", i+1, old))
			buf.WriteString(fmt.Sprintf("+%3d: %s\n", i+1, cur))
		}
	}
	return buf.String()
}

func ensureCanonicalFileHashInserted(path string, lines []string, entries []HeaderEntry, headerEnd int, dryRun bool) ([]string, bool, string, error) {
	// returns newLines, changed, summary, error
	changed := false
	originalLines := make([]string, len(lines))
	copy(originalLines, lines)

	// compute current headerCount and headerEnd (provided)
	headerCount := len(entries)
	// compute file hash
	hashHex, err := computeSHA256Hex(path)
	if err != nil {
		return lines, false, "", err
	}
	canonical := fmt.Sprintf("# File Hash: %s", hashHex)

	if headerCount == 0 {
		// prepend default header: # filename: <basename> ; # File Hash: <hash>
		basename := filepath.Base(path)
		newHeader := []string{
			fmt.Sprintf("# filename: %s", basename),
			canonical,
		}
		// insert at top
		newLines := make([]string, 0, len(lines)+len(newHeader))
		for _, h := range newHeader {
			newLines = append(newLines, h)
		}
		newLines = append(newLines, lines...)
		if !dryRun {
			if err := writeAtomic(path, newLines); err != nil {
				return lines, false, "", err
			}
		}
		changed = true
	summary := "prepended default header; inserted File Hash at line 2"
		return newLines, changed, summary, nil
	}

	// header exists but has no hash -> insert at end of header block
	insertAt := headerEnd // after last header line (headerEnd is exclusive)
	// special-case: if header has > 15 header entries and ensure-hash-in-first15 behavior desired,
	// calling context may handle that. Here we just insert at end-of-header per final rule.
	// insert canonical line at insertAt index
	newLines := make([]string, 0, len(lines)+1)
	newLines = append(newLines, lines[:insertAt]...)
	newLines = append(newLines, canonical)
	newLines = append(newLines, lines[insertAt:]...)
	if !dryRun {
		if err := writeAtomic(path, newLines); err != nil {
			return lines, false, "", err
		}
	}
	changed = true
	summary := fmt.Sprintf("inserted File Hash at line %d (after header block)", insertAt+1)
	return newLines, changed, summary, nil
}

func normalizeHeaderFormatting(lines []string, headerEnd int) ([]string, bool) {
	changed := false
	newLines := make([]string, len(lines))
	copy(newLines, lines)
	for i := 0; i < headerEnd && i < len(lines); i++ {
		ln := lines[i]
		if headerLineRe.MatchString(ln) {
			m := headerLineRe.FindStringSubmatch(ln)
			field := m[1]
			value := m[2]
			n := normalizeHeaderLine(field, value)
			if n != ln {
				newLines[i] = n
				changed = true
			}
		} else if strings.HasPrefix(strings.TrimSpace(ln), "#") {
			// Accept non-standard header like "#Field: x" or "#  Field: x"
			trim := strings.TrimPrefix(strings.TrimSpace(ln), "#")
			parts := strings.SplitN(trim, ":", 2)
			if len(parts) == 2 {
				field := strings.TrimSpace(parts[0])
				value := strings.TrimSpace(parts[1])
				n := normalizeHeaderLine(field, value)
				if n != ln {
					newLines[i] = n
					changed = true
				}
			}
		}
	}
	return newLines, changed
}

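// expandPatternOrList treats arg as a glob when it contains glob metacharacters;
// otherwise it is read as a text file listing one path per line.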
func expandPatternOrList(arg string) ([]string, error) {
	if strings.ContainsAny(arg, "*?[]") {
		matches, err := filepath.Glob(arg)
		if err != nil {
			return nil, err
		}
		sort.Strings(matches)
		if len(matches) == 0 {
			return nil, fmt.Errorf("no files matching pattern '%s'", arg)
		}
		return matches, nil
	}
	// treat as file with list of paths
	f, err := os.Open(arg)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var out []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line != "" {
			out = append(out, line)
		}
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	if len(out) == 0 {
		return nil, fmt.Errorf("no files found in list")
	}
	return out, nil
}

func processSingleFile(path string, config *Config) (int, error) {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		result := Result{File: path, Status: "ERROR", Error: "file not found"}
		outputResult(config, result)
		return 1, fmt.Errorf("file '%s' not found", path)
	}

	lines, err := readAllLines(path)
	if err != nil {
		result := Result{File: path, Status: "ERROR", Error: err.Error()}
		outputResult(config, result)
		return 1, err
	}
	entries, headerEnd := parseTopHeader(lines)
	hashExists := hasHashField(entries)
	_, firstHashEntry := findFirstHashIndex(entries)

	// genentirefile: hash entire file
	if *flagGenEntireFile {
		h, err := computeSHA256Hex(path)
		if err != nil {
			result := Result{File: path, Status: "ERROR", Error: err.Error()}
			outputResult(config, result)
			return 1, err
		}
		result := Result{File: path, Status: "OK", CalculatedHash: h}
		outputResult(config, result)
		return 0, nil
	}

	// checkgentirefile: verify entire file hash
	if *flagCheckEntireFile != "" {
		h, err := computeSHA256Hex(path)
		if err != nil {
			result := Result{File: path, Status: "ERROR", Error: err.Error()}
			outputResult(config, result)
			return 1, err
		}
		expected := strings.TrimSpace(*flagCheckEntireFile)
		status := "OK"
		if h != expected {
			status = "FAIL"
		}
		result := Result{File: path, Status: status, CalculatedHash: h, FileHash: expected}
		outputResult(config, result)
		return 0, nil
	}

	// Header-based operations (lint, fix, etc.)
	// lint: check header for issues (read-only)
	if *flagLint {
		if hashExists {
			result := Result{File: path, Status: "OK", FileHash: firstHashEntry.Value}
			outputResult(config, result)
			return 0, nil
		} else {
			result := Result{File: path, Status: "FAIL", Error: "no hash field in header"}
			outputResult(config, result)
			return 2, nil
		}
	}

	// Fix operations
	if *flagFix || *flagInitHeader || *flagEnsureHashInFirst15 {
		// If init-header requested and no header, create header and insert hash
		if *flagInitHeader && len(entries) == 0 {
			newLines, changed, summary, err := ensureCanonicalFileHashInserted(path, lines, entries, headerEnd, *flagDryRun)
			if err != nil {
				result := Result{File: path, Status: "ERROR", Error: err.Error()}
				outputResult(config, result)
				return 1, err
			}
			if changed {
				result := Result{File: path, Status: "FIXED", Error: summary}
				outputResult(config, result)
				if *flagDryRun {
					fmt.Println("DRY-RUN: no write performed")
					fmt.Println(smallHeaderDiff(lines, newLines, headerEnd, headerEnd+2))
					return 0, nil
				}
				return 0, nil
			}
			result := Result{File: path, Status: "NO-OP", Error: "init-header produced no change"}
			outputResult(config, result)
			return 0, nil
		}

		// If header exists and no hash-like field -> insert at end of header block
		if len(entries) > 0 && !hashExists {
			newLines, changed, summary, err := ensureCanonicalFileHashInserted(path, lines, entries, headerEnd, *flagDryRun)
			if err != nil {
				result := Result{File: path, Status: "ERROR", Error: err.Error()}
				outputResult(config, result)
				return 1, err
			}
			if changed {
				result := Result{File: path, Status: "FIXED", Error: summary}
				outputResult(config, result)
				if *flagDryRun {
					fmt.Println("DRY-RUN: no write performed")
					fmt.Println(smallHeaderDiff(lines, newLines, headerEnd, headerEnd+1))
					return 0, nil
				}
				return 0, nil
			}
			result := Result{File: path, Status: "NO-OP", Error: "no hash and ensureCanonical produced no change"}
			outputResult(config, result)
			return 0, nil
		}

		// If header exists and there is a hash-like field:
		// normalize formatting for header lines; optionally ensure File Hash in first 15 if requested
		newLines := make([]string, len(lines))
		copy(newLines, lines)
		changed := false
		normed, nChanged := normalizeHeaderFormatting(lines, headerEnd)
		if nChanged {
			changed = true
			newLines = normed
		}
		// If ensure-hash-in-first15 requested and canonical File Hash is not within first 15, insert File Hash at line 10
		if *flagEnsureHashInFirst15 {
			if firstHashEntry != nil {
				hashLine := firstHashEntry.Line
				if hashLine >= 15 {
					hashHex, err := computeSHA256Hex(path)
					if err != nil {
						result := Result{File: path, Status: "ERROR", Error: err.Error()}
						outputResult(config, result)
						return 1, err
					}
					canonical := fmt.Sprintf("# File Hash: %s", hashHex)
					insertAt := 9
					if insertAt > len(newLines) {
						insertAt = len(newLines)
					}
					tmp := make([]string, 0, len(newLines)+1)
					tmp = append(tmp, newLines[:insertAt]...)
					tmp = append(tmp, canonical)
					tmp = append(tmp, newLines[insertAt:]...)
					if !*flagDryRun {
						if err := writeAtomic(path, tmp); err != nil {
							result := Result{File: path, Status: "ERROR", Error: err.Error()}
							outputResult(config, result)
							return 1, err
						}
					}
					changed = true
					result := Result{File: path, Status: "FIXED", Error: "inserted File Hash at line 10 to ensure discoverability"}
					outputResult(config, result)
					if *flagDryRun {
						fmt.Println("DRY-RUN: no write performed")
						fmt.Println(smallHeaderDiff(lines, tmp, headerEnd, headerEnd+1))
						return 0, nil
					}
					return 0, nil
				}
			}
		}

		if changed {
			if *flagDryRun {
				result := Result{File: path, Status: "DRY-RUN", Error: "would normalize header formatting"}
				outputResult(config, result)
				fmt.Println(smallHeaderDiff(lines, newLines, headerEnd, headerEnd))
				return 0, nil
			}
			if err := writeAtomic(path, newLines); err != nil {
				result := Result{File: path, Status: "ERROR", Error: err.Error()}
				outputResult(config, result)
				return 1, err
			}
			result := Result{File: path, Status: "FIXED", Error: "normalized header formatting"}
			outputResult(config, result)
			return 0, nil
		}

		// nothing to do
		result := Result{File: path, Status: "NO-OP", Error: "header already fine"}
		outputResult(config, result)
		return 0, nil
	}

	// insert: write hash to file (legacy from first file, now integrated above)
	if *flagInsert {
		calc, _, err := calculateFileHash(path)
		if err != nil {
			result := Result{File: path, Status: "ERROR", Error: err.Error()}
			outputResult(config, result)
			return 1, err
		}
		if calc == "" {
			result := Result{File: path, Status: "SKIP", Error: "no hash line found"}
			outputResult(config, result)
			return 1, nil
		}
		linesWithNL, err := readLinesWithNewline(path)
		if err != nil {
			result := Result{File: path, Status: "ERROR", Error: err.Error()}
			outputResult(config, result)
			return 1, err
		}
		hashIndex := -1
		for i, ln := range linesWithNL {
			if hashLinePattern.MatchString(ln) {
				hashIndex = i
				break
			}
		}
		if hashIndex == -1 {
			result := Result{File: path, Status: "SKIP", Error: "no hash line found in first 15 lines"}
			outputResult(config, result)
			return 1, nil
		}
		m := hashLinePattern.FindStringSubmatch(linesWithNL[hashIndex])
		var newLine string
		if len(m) >= 3 {
			prefixBody := strings.TrimSpace(m[2])
			label := "hash:"
			if m[3] != "" {
				label = m[3]
			}
			if prefixBody != "" {
				newLine = fmt.Sprintf("# %s %s %s\n", prefixBody, label, calc)
			} else {
				newLine = fmt.Sprintf("# %s %s\n", label, calc)
			}
		} else {
			newLine = fmt.Sprintf("# File Hash: %s\n", calc)
		}
		linesWithNL[hashIndex] = newLine
		if !*flagDryRun {
			if err := writeAtomic(path, linesWithNL); err != nil {
				result := Result{File: path, Status: "ERROR", Error: err.Error()}
				outputResult(config, result)
				return 1, err
			}
		}
		result := Result{File: path, Status: "OK", Error: "hash inserted"}
		outputResult(config, result)
		if *flagDryRun {
			fmt.Printf("DRY-RUN: Would write to %s\n", path)
		}
		return 0, nil
	}

	// check: verify hash (legacy from first file)
	if *flagCheck {
		calc, inFile, err := calculateFileHash(path)
		if err != nil {
			result := Result{File: path, Status: "ERROR", Error: err.Error()}
			outputResult(config, result)
			return 1, err
		}
		if inFile == "" {
			result := Result{File: path, Status: "ERROR", Error: "no hash found to compare"}
			outputResult(config, result)
			return 1, nil
		}
		status := "OK"
		if calc != inFile {
			status = "FAIL"
		}
		result := Result{File: path, Status: status, CalculatedHash: calc, FileHash: inFile}
		outputResult(config, result)
		return 0, nil
	}

	// default: print calculated hash (excluding header)
	calc, _, err := calculateFileHash(path)
	if err != nil {
		result := Result{File: path, Status: "ERROR", Error: err.Error()}
		outputResult(config, result)
		return 1, err
	}
	if calc != "" {
		result := Result{File: path, Status: "OK", CalculatedHash: calc}
		outputResult(config, result)
		return 0, nil
	}

	// no hash line found
	result := Result{File: path, Status: "ERROR", Error: "no hash line found"}
	outputResult(config, result)
	return 1, nil
}

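// processBulk expands the pattern or list, forces the per-file flags for the
// requested mode, and runs processSingleFile over each path, returning the
// worst exit code seen.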
func processBulk(pattern string, mode string, extra string, config *Config) int {
	paths, err := expandPatternOrList(pattern)
	if err != nil {
		result := Result{Status: "ERROR", Error: err.Error()}
		outputResult(config, result)
		return 1
	}
	fmt.Printf("Processing %d file(s)...\n\n", len(paths))
	var results []Result
	exitCodes := []int{}
	for _, p := range paths {
		// Set flags per mode
		switch mode {
		case "check":
			*flagCheck = true
			*flagInsert = false
			*flagGenEntireFile = false
			*flagCheckEntireFile = ""
			*flagLint = false
			*flagFix = false
		case "insert":
			*flagInsert = true
			*flagCheck = false
			*flagGenEntireFile = false
			*flagCheckEntireFile = ""
			*flagLint = false
			*flagFix = false
		case "gent":
			*flagGenEntireFile = true
			*flagInsert = false
			*flagCheck = false
			*flagCheckEntireFile = ""
			*flagLint = false
			*flagFix = false
		case "checkgent":
			*flagCheckEntireFile = extra
			*flagGenEntireFile = false
			*flagInsert = false
			*flagCheck = false
			*flagLint = false
			*flagFix = false
		case "lint":
			*flagLint = true
			*flagCheck = false
			*flagInsert = false
			*flagGenEntireFile = false
			*flagCheckEntireFile = ""
			*flagFix = false
		case "fix":
			*flagFix = true
			*flagLint = false
			*flagCheck = false
			*flagInsert = false
			*flagGenEntireFile = false
			*flagCheckEntireFile = ""
		}
		// errors are already reported inside processSingleFile
		code, _ := processSingleFile(p, config)
		exitCodes = append(exitCodes, code)
		results = append(results, Result{File: p, Status: "PROCESSED"}) // placeholder for bulk JSON
		fmt.Println()
	}
	outputBulkResults(config, results)
	max := 0
	for _, v := range exitCodes {
		if v > max {
			max = v
		}
	}
	return max
}

func generateUsageJSON() {
	usage := map[string]interface{}{
		"script_name": "hashheader.go",
		"purpose":     "Calculate, verify, insert, and manage SHA-256 hashes for markdown and source files with header metadata",
		"generated_at": "RFC3339",
		"examples": []map[string]interface{}{
			{
				"description": "Calculate hash (excluding File Hash: line)",
				"command":     "go run scripts/hashheader/hashheader.go file.md",
				"flags":       map[string]string{},
				"use_case":    "Get hash value for updating File Hash: field",
			},
			{
				"description": "Check if File Hash: matches calculated hash",
				"command":     "go run scripts/hashheader/hashheader.go -check file.md",
				"flags":       map[string]string{"check": "true"},
				"use_case":    "Verify file integrity - ensure file hasn't been modified",
			},
			{
				"description": "Insert calculated hash into File Hash: line",
				"command":     "go run scripts/hashheader/hashheader.go -insert file.md",
				"flags":       map[string]string{"insert": "true"},
				"use_case":    "Automatically update File Hash: field with correct hash",
			},
			{
				"description": "Generate hash of entire file (no exclusions)",
				"command":     "go run scripts/hashheader/hashheader.go -gentirefile file.go",
				"flags":       map[string]string{"gentirefile": "true"},
				"use_case":    "Calculate hash of complete file including all header lines",
			},
			{
				"description": "JSON output mode - machine readable",
				"command":     "go run scripts/hashheader/hashheader.go -json -check file.md",
				"flags":       map[string]string{"json": "true", "check": "true"},
				"use_case":    "AI agents get structured JSON output for programmatic processing",
			},
			{
				"description": "Lint header for policy compliance",
				"command":     "go run scripts/hashheader/hashheader.go -lint file.md",
				"flags":       map[string]string{"lint": "true"},
				"use_case":    "Check if header follows required format and structure",
			},
			{
				"description": "Fix header issues automatically",
				"command":     "go run scripts/hashheader/hashheader.go -fix file.md",
				"flags":       map[string]string{"fix": "true"},
				"use_case":    "Automatically correct header format and hash issues",
			},
			{
				"description": "Bulk check multiple files",
				"command":     "go run scripts/hashheader/hashheader.go -bulk-check 'docs/**/*.md'",
				"flags":       map[string]string{"bulk-check": "docs/**/*.md"},
				"use_case":    "Verify hashes for all markdown files in directory tree",
			},
			{
				"description": "Bulk insert hashes with JSON output",
				"command":     "go run scripts/hashheader/hashheader.go -json -bulk-insert 'docs/**/*.md'",
				"flags":       map[string]string{"json": "true", "bulk-insert": "docs/**/*.md"},
				"use_case":    "Update all markdown files with correct hashes, get structured results",
			},
			{
				"description": "Dry-run mode - preview changes without writing",
				"command":     "go run scripts/hashheader/hashheader.go -dry-run -insert file.md",
				"flags":       map[string]string{"dry-run": "true", "insert": "true"},
				"use_case":    "See what changes would be made without modifying files",
			},
		},
		"workflow": map[string]interface{}{
			"title": "Complete Hash Management Workflow",
			"steps": []map[string]interface{}{
				{
					"step_number": 1,
					"title":       "Calculate hash for new file",
					"command":     "go run scripts/hashheader/hashheader.go file.md",
					"purpose":     "Get hash value to insert into File Hash: field",
				},
				{
					"step_number": 2,
					"title":       "Insert hash into file",
					"command":     "go run scripts/hashheader/hashheader.go -insert file.md",
					"purpose":     "Automatically update File Hash: field with calculated hash",
				},
				{
					"step_number": 3,
					"title":       "Verify hash is correct",
					"command":     "go run scripts/hashheader/hashheader.go -check file.md",
					"purpose":     "Confirm File Hash: field matches calculated hash",
				},
				{
					"step_number": 4,
					"title":       "Batch process all files",
					"command":     "go run scripts/hashheader/hashheader.go -json -bulk-check 'docs/**/*.md'",
					"purpose":     "Verify all documentation files have correct hashes",
				},
			},
		},
		"json_output_examples": map[string]interface{}{
			"single_file_success": map[string]interface{}{
				"file":            "file.md",
				"status":          "OK",
				"calculated_hash": "abc123...",
			},
			"check_mode_success": map[string]interface{}{
				"file":            "file.md",
				"status":          "OK",
				"calculated_hash": "abc123...",
				"file_hash":       "abc123...",
			},
			"check_mode_mismatch": map[string]interface{}{
				"file":            "file.md",
				"status":          "ERROR",
				"error":           "hash mismatch",
				"calculated_hash": "abc123...",
				"file_hash":       "xyz789...",
			},
			"entire_file_hash": map[string]interface{}{
				"file":            "file.go",
				"status":          "OK",
				"calculated_hash": "def456...",
			},
		},
		"integration_with_doc_templates": map[string]string{
			"usage_in_agentbest_md":  "go run scripts/hashheader/hashheader.go <template>.md",
			"usage_in_next_steps":    "go run scripts/hashheader/hashheader.go <template> || python scripts/calculate_hash.py <template>",
			"entire_file_hash_usage": "go run scripts/hashheader/hashheader.go -gentirefile <source>.go",
		},
	}
	
	jsonBytes, err := json.MarshalIndent(usage, "", "  ")
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error generating usage JSON: %v\n", err)
		os.Exit(1)
	}
	
	// Output to stdout or file based on flag
	if *flagUsageJsonStdout {
		// Output to stdout for agents to consume directly
		fmt.Println("========== USAGE JSON START ==========")
		fmt.Println(string(jsonBytes))
		fmt.Println("========== USAGE JSON END ==========")
	} else {
		// Write to file
		outPath := "hashheader_usage.json"
		if err := os.WriteFile(outPath, jsonBytes, 0644); err != nil {
			fmt.Fprintf(os.Stderr, "Error writing usage JSON file: %v\n", err)
			os.Exit(1)
		}
		
		fmt.Printf("✅ Generated usage JSON: %s\n\n", outPath)
		fmt.Println("This JSON file provides:")
		fmt.Println("  • 10 usage examples with commands and flags")
		fmt.Println("  • Complete 4-step workflow for hash management")
		fmt.Println("  • JSON output structure examples")
		fmt.Println("  • Integration guidance with doc_template scripts")
		fmt.Println("\nAI agents can read this file to understand all hashheader.go capabilities.")
		fmt.Println("\n💡 TIP: Use -usage-json-stdout to output JSON to stdout for direct consumption")
	}
}

func usageAndExit() {
	fmt.Fprintf(os.Stderr, "Usage: %s <filepath> [flags]\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "\nSingle File Mode:\n")
	fmt.Fprintf(os.Stderr, "  %s file.go              - Calculate hash (excluding header)\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -check file.go       - Verify hash matches header\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -insert file.go      - Insert generated hashes into header\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -gentirefile file.go - Calculate hash of entire file\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -lint file.go        - Lint header for issues\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -fix file.go         - Fix header issues\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "\nBulk Mode:\n")
	fmt.Fprintf(os.Stderr, "  %s -bulk-check '*.go'   - Check hashes for multiple files\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -bulk-insert '*.go'  - Insert hashes into multiple files\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -bulk-gent '*.go'    - Generate hashes for multiple files\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -bulk-lint '*.go'    - Lint headers for multiple files\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -bulk-fix '*.go'     - Fix headers for multiple files\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "\nAI-Friendly Options:\n")
	fmt.Fprintf(os.Stderr, "  %s -json -bulk-check '*.md'  - Output bulk results in JSON\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "  %s -config .hashconfig.json file.go  - Load settings from JSON config\n", os.Args[0])
	fmt.Fprintf(os.Stderr, "\n💡 TIP: Run without arguments to generate hashheader_usage.json with examples\n")
	fmt.Fprintf(os.Stderr, "\nFlags:\n")
	flag.PrintDefaults()
	os.Exit(1)
}

func main() {
	flag.Parse()
	config, err := loadConfig(*flagConfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error loading config: %v\n", err)
		os.Exit(1)
	}

	// Mutually exclusive checks
	opCount := 0
	if *flagCheck {
		opCount++
	}
	if *flagInsert {
		opCount++
	}
	if *flagGenEntireFile {
		opCount++
	}
	if *flagCheckEntireFile != "" {
		opCount++
	}
	if *flagLint {
		opCount++
	}
	if *flagFix {
		opCount++
	}
	if opCount > 1 {
		fmt.Fprintln(os.Stderr, "Error: --check, --insert, --gentirefile, --checkgentirefile, --lint, and --fix are mutually exclusive.")
		os.Exit(1)
	}

	// Bulk flags
	bulkCount := 0
	if *flagBulkCheck != "" {
		bulkCount++
	}
	if *flagBulkInsert != "" {
		bulkCount++
	}
	if *flagBulkGent != "" {
		bulkCount++
	}
	if *flagBulkLint != "" {
		bulkCount++
	}
	if *flagBulkFix != "" {
		bulkCount++
	}

	// bulk-checkgent via manual args (pattern hash)
	bulkCheckGentPattern := ""
	bulkCheckGentHash := ""
	// note the -2 bound: os.Args[i+1] and os.Args[i+2] are read below
	for i := 1; i < len(os.Args)-2; i++ {
		if os.Args[i] == "--bulk-checkgent" {
			bulkCheckGentPattern = os.Args[i+1]
			bulkCheckGentHash = os.Args[i+2]
			bulkCount++
			break
		}
	}

	if bulkCount > 1 {
		fmt.Fprintln(os.Stderr, "Error: Bulk flags are mutually exclusive.")
		os.Exit(1)
	} else if bulkCount == 1 {
		if *flagBulkCheck != "" {
			os.Exit(processBulk(*flagBulkCheck, "check", "", config))
		} else if *flagBulkInsert != "" {
			os.Exit(processBulk(*flagBulkInsert, "insert", "", config))
		} else if *flagBulkGent != "" {
			os.Exit(processBulk(*flagBulkGent, "gent", "", config))
		} else if *flagBulkLint != "" {
			os.Exit(processBulk(*flagBulkLint, "lint", "", config))
		} else if *flagBulkFix != "" {
			os.Exit(processBulk(*flagBulkFix, "fix", "", config))
		} else if bulkCheckGentPattern != "" {
			os.Exit(processBulk(bulkCheckGentPattern, "checkgent", bulkCheckGentHash, config))
		}
	}

	// Single-file mode
	if flag.NArg() < 1 {
		// If no arguments and no bulk operations, generate usage JSON
		if bulkCount == 0 {
			generateUsageJSON()
			os.Exit(0)
		}
		usageAndExit()
	}

	filepathArg := flag.Arg(0)
	code, err := processSingleFile(filepathArg, config)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
		os.Exit(1)
	}
	os.Exit(code)
}

type HeaderEntry struct {
	Line  int
	Field string
	Value string
	Raw   string
}
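
For the statements pages specifically, a first pass might look like this (read-only lint, then a JSON check; paths follow the usage examples embedded in the program above):

go run scripts/hashheader/hashheader.go -bulk-lint 'docs/visual-basic/language-reference/statements/*.md'
go run scripts/hashheader/hashheader.go -json -bulk-check 'docs/visual-basic/language-reference/statements/*.md'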

terry-teppo avatar Nov 24 '25 22:11 terry-teppo