# USEE Verifier Guide

## Automated Validation System for Pieces

**Version 1.0**

> **Note on vocabulary:** USEE is a Spanish-originated protocol. File names, field names, CLI arguments, category values, error identifiers, and verification state names shown in code blocks (`PIEZA.usee`, `nombre`, `ejecutar`, `--ayuda`, `aprobado`, `rechazado`, `si`, `no`, etc.) are part of the normative standard and must be used as-is, the same way HTML tags like `<div>` are not translated. The surrounding prose is in English; the identifiers are not.

---

## Purpose

The USEE Verifier is the guardian of the protocol. Its job is to guarantee that every published piece complies with the defined standards, protecting both the users who rely on pieces and the creators who build on top of them.

### Verifier Principles

| Principle | Application |
|-----------|------------|
| **Useful** | Only checks things that affect real functionality |
| **Simple** | Anyone can understand why a piece passed or failed |
| **Essential** | Verifies protocol requirements, not aesthetic preferences |
| **Enduring** | Criteria do not change arbitrarily |

### Guarantees

A piece that passes verification is guaranteed to:

1. Contain all mandatory files
2. Have complete and valid metadata
3. Run correctly against the example input
4. Pass every bundled test
5. Respect the USEE communication format

---

## Verification Levels

The verifier evaluates pieces across three progressive levels:

```
Level 1: Structure
    ↓
Level 2: Functionality
    ↓
Level 3: Quality
```

### Level 1: Structure

Verifies that the piece has the correct shape.

- Mandatory files present
- Valid metadata format
- Complete documentation

**Outcome:** The piece exists and is well-formed.

### Level 2: Functionality

Verifies that the piece works.

- The executable responds
- It processes the example input correctly
- Every test passes
- Errors are handled appropriately

**Outcome:** The piece does what it says it does.

### Level 3: Quality

Verifies that the piece meets quality standards.

- Acceptable response time
- Clear and complete documentation
- Informative error handling
- Adherence to USEE conventions

**Outcome:** The piece is reliable for production use.

---

## Level 1 Checks: Structure

### 1.1 Mandatory Files

| Check | Criterion |
|--------------|----------|
| `PIEZA.usee` exists | The file must exist at the root |
| `LEEME.md` exists | The file must exist at the root |
| `ENTRADA.ejemplo` exists | The file must exist at the root |
| `SALIDA.ejemplo` exists | The file must exist at the root |
| `ejecutar` exists | The file must exist at the root |
| `ejecutar` is executable | Execution permissions are active |
| `pruebas/` exists | The directory must exist |
| Minimum tests present | At least 4 test cases |

**Verification code:**

```
verify_mandatory_files(piece_path):
    required_files = [
        "PIEZA.usee",
        "LEEME.md",
        "ENTRADA.ejemplo",
        "SALIDA.ejemplo",
        "ejecutar"
    ]
    
    for each file in required_files:
        if not exists(piece_path + "/" + file):
            fail("Mandatory file missing: " + file)
    
    if not is_executable(piece_path + "/ejecutar"):
        fail("The 'ejecutar' file does not have execution permissions")
    
    if not directory_exists(piece_path + "/pruebas"):
        fail("'pruebas/' directory not found")
    
    test_cases = count_test_cases(piece_path + "/pruebas")
    if test_cases < 4:
        fail("At least 4 test cases are required, found: " + test_cases)
```
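The same check can be sketched in runnable Python. Returning a list of failure messages instead of aborting on the first `fail` is a design choice of this sketch, not part of the standard:

```python
import os

REQUIRED_FILES = ["PIEZA.usee", "LEEME.md", "ENTRADA.ejemplo",
                  "SALIDA.ejemplo", "ejecutar"]

def verify_mandatory_files(piece_path):
    """Return a list of failure messages; an empty list means the check passed."""
    failures = []
    for name in REQUIRED_FILES:
        if not os.path.isfile(os.path.join(piece_path, name)):
            failures.append("Mandatory file missing: " + name)

    # Execution permission check only makes sense if the file exists
    ejecutar = os.path.join(piece_path, "ejecutar")
    if os.path.isfile(ejecutar) and not os.access(ejecutar, os.X_OK):
        failures.append("The 'ejecutar' file does not have execution permissions")

    pruebas = os.path.join(piece_path, "pruebas")
    if not os.path.isdir(pruebas):
        failures.append("'pruebas/' directory not found")
    else:
        # A test case is identified by its .entrada file
        cases = {f.rsplit(".", 1)[0]
                 for f in os.listdir(pruebas) if f.endswith(".entrada")}
        if len(cases) < 4:
            failures.append("At least 4 test cases are required, found: %d"
                            % len(cases))
    return failures
```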

### 1.2 PIEZA.usee Metadata

#### Mandatory Fields

| Field | Validation |
|-------|------------|
| `nombre` | Non-empty; lowercase letters, digits, and hyphens; must start with a letter |
| `version` | X.Y.Z format |
| `creador` | Valid email format |
| `fecha_creacion` | YYYY-MM-DD format |
| `descripcion_corta` | Non-empty, maximum 100 characters |
| `descripcion_larga` | Non-empty |
| `categoria` | Value from a predefined list |
| `accion` | Non-empty, "verb noun" format |
| `entrada_descripcion` | Non-empty |
| `salida_descripcion` | Non-empty |
| `lenguaje` | Non-empty |
| `costo_por_uso` | Number >= 0 |
| `moneda` | Lowercase ISO 4217 code (e.g., usd, mxn, eur) |
| `modelo_cobro` | Valid value: por_llamada, por_minuto, por_registro |

**Valid categories:**

```
autenticacion
almacenamiento
comunicacion
datos
documentos
imagenes
integracion
pagos
reportes
seguridad
utilidades
otro
```

**Verification code:**

```
verify_metadata(piece_path):
    content = read_ftu(piece_path + "/PIEZA.usee")
    
    mandatory_fields = [
        "nombre", "version", "creador", "fecha_creacion",
        "descripcion_corta", "descripcion_larga", "categoria",
        "accion", "entrada_descripcion", "salida_descripcion",
        "lenguaje", "costo_por_uso", "moneda", "modelo_cobro"
    ]
    
    for each field in mandatory_fields:
        if field not in content:
            fail("Mandatory field missing in PIEZA.usee: " + field)
        if content[field] is empty:
            fail("Mandatory field is empty in PIEZA.usee: " + field)
    
    # Specific validations
    if not matches(content["nombre"], "^[a-z][a-z0-9-]*$"):
        fail("Invalid name: only lowercase letters, digits, and hyphens")
    
    if not matches(content["version"], "^[0-9]+\.[0-9]+\.[0-9]+$"):
        fail("Invalid version: must follow X.Y.Z format")
    
    if not is_valid_email(content["creador"]):
        fail("Creator must be a valid email address")
    
    if length(content["descripcion_corta"]) > 100:
        fail("Short description exceeds 100 characters")
    
    if content["categoria"] not in VALID_CATEGORIES:
        fail("Invalid category: " + content["categoria"])
    
    if not is_number(content["costo_por_uso"]) or number(content["costo_por_uso"]) < 0:
        fail("Cost per use must be a number >= 0")
```
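A runnable Python sketch of the field-level validations, operating on an already-parsed metadata dict (the email-format check is omitted here for brevity):

```python
import re

VALID_CATEGORIES = {
    "autenticacion", "almacenamiento", "comunicacion", "datos",
    "documentos", "imagenes", "integracion", "pagos",
    "reportes", "seguridad", "utilidades", "otro",
}

MANDATORY_FIELDS = [
    "nombre", "version", "creador", "fecha_creacion",
    "descripcion_corta", "descripcion_larga", "categoria",
    "accion", "entrada_descripcion", "salida_descripcion",
    "lenguaje", "costo_por_uso", "moneda", "modelo_cobro",
]

def verify_metadata(fields):
    """Validate a parsed PIEZA.usee dict; return a list of failure messages."""
    failures = []
    for field in MANDATORY_FIELDS:
        if not fields.get(field, "").strip():
            failures.append("Mandatory field missing or empty in PIEZA.usee: "
                            + field)

    # Specific validations, matching the regexes in the normative pseudocode
    if "nombre" in fields and not re.match(r"^[a-z][a-z0-9-]*$", fields["nombre"]):
        failures.append("Invalid name: only lowercase letters, digits, and hyphens")
    if "version" in fields and not re.match(r"^[0-9]+\.[0-9]+\.[0-9]+$",
                                            fields["version"]):
        failures.append("Invalid version: must follow X.Y.Z format")
    if len(fields.get("descripcion_corta", "")) > 100:
        failures.append("Short description exceeds 100 characters")
    if "categoria" in fields and fields["categoria"] not in VALID_CATEGORIES:
        failures.append("Invalid category: " + fields["categoria"])
    try:
        if float(fields.get("costo_por_uso", "0")) < 0:
            failures.append("Cost per use must be a number >= 0")
    except ValueError:
        failures.append("Cost per use must be a number >= 0")
    return failures
```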

### 1.3 LEEME.md Documentation

#### Mandatory Sections

| Section | Identification |
|---------|----------------|
| Title | Line starting with `# ` |
| What It Does | `## Qué Hace` heading |
| What It Does Not Do | `## Qué No Hace` heading |
| Quick Start | `## Uso Rápido` heading |
| Input | `## Entrada` heading |
| Output | `## Salida` heading |
| Common Errors | `## Errores Comunes` heading |

**Verification code:**

```
verify_documentation(piece_path):
    content = read_file(piece_path + "/LEEME.md")
    
    required_sections = [
        "# ",           # Title (any title)
        "## Qué Hace",
        "## Qué No Hace",
        "## Uso Rápido",
        "## Entrada",
        "## Salida",
        "## Errores Comunes"
    ]
    
    for each section in required_sections:
        if section not in content:
            fail("Missing section in LEEME.md: " + section)
    
    # Check that 'Qué No Hace' has content
    que_no_hace_content = extract_section(content, "## Qué No Hace")
    if que_no_hace_content is empty:
        fail("The 'Qué No Hace' section cannot be empty")
```
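The `extract_section` helper used above is left abstract by the pseudocode; one plausible Python implementation captures everything under a heading until the next heading of the same or higher level:

```python
def extract_section(markdown, heading):
    """Return the body under `heading`, up to the next heading of the same
    or higher level, or "" if the heading is absent."""
    level = len(heading) - len(heading.lstrip("#"))
    body = []
    capturing = False
    for line in markdown.splitlines():
        if line.strip() == heading:
            capturing = True
            continue
        if capturing and line.startswith("#"):
            line_level = len(line) - len(line.lstrip("#"))
            if line_level <= level:
                break  # reached the next sibling or parent heading
        if capturing:
            body.append(line)
    return "\n".join(body).strip()
```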

### 1.4 Minimum Tests

#### Mandatory Cases

| Case | Required Files |
|------|---------------------|
| Basic case | `caso-basico.entrada`, `caso-basico.salida` |
| Complete case | `caso-completo.entrada`, `caso-completo.salida` |
| Empty input error | `error-entrada-vacia.entrada`, `error-entrada-vacia.salida`, `error-entrada-vacia.codigo` |
| Missing field error | `error-campo-faltante.entrada`, `error-campo-faltante.salida`, `error-campo-faltante.codigo` |

**Verification code:**

```
verify_minimum_tests(piece_path):
    tests_path = piece_path + "/pruebas"
    
    mandatory_cases = [
        ("caso-basico", false),
        ("caso-completo", false),
        ("error-entrada-vacia", true),
        ("error-campo-faltante", true)
    ]
    
    for each (name, is_error) in mandatory_cases:
        input = tests_path + "/" + name + ".entrada"
        output = tests_path + "/" + name + ".salida"
        
        if not exists(input):
            fail("Mandatory test missing: " + name + ".entrada")
        if not exists(output):
            fail("Mandatory test missing: " + name + ".salida")
        
        if is_error:
            code = tests_path + "/" + name + ".codigo"
            if not exists(code):
                fail("Exit code file missing: " + name + ".codigo")
```

---

## Level 2 Checks: Functionality

### 2.1 Executable Responds

| Check | Criterion |
|--------------|----------|
| `--ayuda` works | Returns code 0 and prints text |
| `--version` works | Returns code 0 and prints a version |
| Version matches | The printed version matches PIEZA.usee |

**Verification code:**

```
verify_executable_responds(piece_path):
    executable = piece_path + "/ejecutar"
    
    # Verify --ayuda
    result = run_command(executable + " --ayuda")
    if result.code != 0:
        fail("The --ayuda command did not return code 0")
    if result.output is empty:
        fail("The --ayuda command produced no output")
    
    # Verify --version
    result = run_command(executable + " --version")
    if result.code != 0:
        fail("The --version command did not return code 0")
    
    # Verify version match
    executable_version = result.output.trim()
    metadata_version = read_ftu(piece_path + "/PIEZA.usee")["version"]
    if executable_version != metadata_version:
        fail("Executable version (" + executable_version + 
             ") does not match PIEZA.usee (" + metadata_version + ")")
```
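In Python, the same check maps directly onto `subprocess.run`; the timeout value here is a defensive choice of this sketch, not mandated by the standard:

```python
import subprocess

def verify_executable_responds(executable, metadata_version, timeout=10):
    """Return failure messages for the --ayuda / --version checks."""
    failures = []

    # Verify --ayuda: must exit 0 and print something
    ayuda = subprocess.run([executable, "--ayuda"],
                           capture_output=True, text=True, timeout=timeout)
    if ayuda.returncode != 0:
        failures.append("The --ayuda command did not return code 0")
    if not ayuda.stdout.strip():
        failures.append("The --ayuda command produced no output")

    # Verify --version: must exit 0 and match PIEZA.usee
    version = subprocess.run([executable, "--version"],
                             capture_output=True, text=True, timeout=timeout)
    if version.returncode != 0:
        failures.append("The --version command did not return code 0")
    elif version.stdout.strip() != metadata_version:
        failures.append("Executable version (%s) does not match PIEZA.usee (%s)"
                        % (version.stdout.strip(), metadata_version))
    return failures
```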

### 2.2 Example Works

| Check | Criterion |
|--------------|----------|
| Processes the example input | No error produced |
| Output matches | The output matches SALIDA.ejemplo |
| Exit code is 0 | Successful execution |

**Verification code:**

```
verify_example_works(piece_path):
    executable = piece_path + "/ejecutar"
    input = read_file(piece_path + "/ENTRADA.ejemplo")
    expected_output = read_file(piece_path + "/SALIDA.ejemplo")
    
    result = run_command(executable, stdin=input)
    
    if result.code != 0:
        fail("The example input produced an error (code " + result.code + ")")
    
    if not outputs_equivalent(result.output, expected_output):
        fail("The output does not match SALIDA.ejemplo")
```

### 2.3 Tests Pass

| Check | Criterion |
|--------------|----------|
| Every test runs | None produces an unexpected error |
| Outputs match | Every output matches the expected one |
| Codes match | Exit codes match the expected ones |

**Verification code:**

```
verify_tests_pass(piece_path):
    executable = piece_path + "/ejecutar"
    tests_path = piece_path + "/pruebas"
    
    input_files = list_files(tests_path, "*.entrada")
    
    for each input_file in input_files:
        case_name = extract_name(input_file)
        
        input = read_file(input_file)
        expected_output = read_file(tests_path + "/" + case_name + ".salida")
        
        # Expected code: 0 for normal cases, read from file for errors
        if exists(tests_path + "/" + case_name + ".codigo"):
            expected_code = number(read_file(tests_path + "/" + case_name + ".codigo"))
        else:
            expected_code = 0
        
        result = run_command(executable, stdin=input, timeout=30)
        
        # Verify exit code
        if result.code != expected_code:
            fail("Test '" + case_name + "': expected code " + 
                 expected_code + ", got " + result.code)
        
        # Verify output (stdout for success, stderr for errors)
        if expected_code == 0:
            actual_output = result.stdout
        else:
            actual_output = result.stderr
        
        if not outputs_equivalent(actual_output, expected_output):
            fail("Test '" + case_name + "': output does not match")
    
    report("All tests passed: " + length(input_files) + " cases")
```
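A single test case from the loop above can be sketched as a Python function. For brevity this sketch compares outputs with a strict whitespace-trimmed match rather than the full FTU-aware comparison described in section 2.4:

```python
import os
import subprocess

def run_test_case(executable, tests_path, case_name, timeout=30):
    """Run one test case; return None on success or a failure message."""
    with open(os.path.join(tests_path, case_name + ".entrada")) as f:
        stdin_text = f.read()
    with open(os.path.join(tests_path, case_name + ".salida")) as f:
        expected_output = f.read()

    # Expected code: 0 for normal cases, read from .codigo for error cases
    code_file = os.path.join(tests_path, case_name + ".codigo")
    expected_code = 0
    if os.path.exists(code_file):
        with open(code_file) as f:
            expected_code = int(f.read())

    result = subprocess.run([executable], input=stdin_text,
                            capture_output=True, text=True, timeout=timeout)

    if result.returncode != expected_code:
        return ("Test '%s': expected code %d, got %d"
                % (case_name, expected_code, result.returncode))

    # stdout for success cases, stderr for error cases
    actual = result.stdout if expected_code == 0 else result.stderr
    if actual.strip() != expected_output.strip():
        return "Test '%s': output does not match" % case_name
    return None
```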

### 2.4 Output Comparison

Output comparison must be flexible enough to ignore insignificant differences:

```
outputs_equivalent(actual_output, expected_output):
    # Parse both as FTU
    actual_records = parse_ftu(actual_output)
    expected_records = parse_ftu(expected_output)
    
    # They must have the same number of records
    if length(actual_records) != length(expected_records):
        return false
    
    for i in range(length(actual_records)):
        actual = actual_records[i]
        expected = expected_records[i]
        
        # Every expected key must exist with an equivalent value
        for each key in expected:
            if key not in actual:
                return false
            
            if not values_equivalent(actual[key], expected[key]):
                return false
    
    return true

values_equivalent(actual_value, expected_value):
    # Wildcard: * matches any value
    if expected_value == "*":
        return true
    
    # Exact comparison after normalizing whitespace
    return normalize(actual_value) == normalize(expected_value)
```

**Wildcard usage in tests:**

```
# SALIDA.ejemplo with dynamic values
estado: ok
sesion_id: *
expira: *
usuario_id: usr_001
```

The `*` wildcard allows fields such as `sesion_id` (which changes every run) to pass verification.
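The comparison can be sketched in Python. The `parse_ftu` details here (records separated by `---`, `#` comment lines, `key: value` pairs) are inferred from the report examples in this guide and should be checked against the FTU specification:

```python
def parse_ftu(text):
    """Parse FTU text into a list of dict records (assumed syntax:
    'key: value' lines, '---' record separators, '#' comments)."""
    records, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line == "---":
            if current:
                records.append(current)
            current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    if current:
        records.append(current)
    return records

def outputs_equivalent(actual_text, expected_text):
    """Record-by-record comparison with '*' as a wildcard value."""
    actual, expected = parse_ftu(actual_text), parse_ftu(expected_text)
    if len(actual) != len(expected):
        return False
    for a, e in zip(actual, expected):
        for key, want in e.items():
            if key not in a:
                return False
            if want != "*" and a[key] != want:  # '*' matches any value
                return False
    return True
```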

---

## Level 3 Checks: Quality

### 3.1 Response Time

| Check | Criterion |
|--------------|----------|
| Maximum | No single run over the example input exceeds 5 seconds (failure) |
| Average | Average over 10 runs under 2 seconds (warning) |
| Consistency | Standard deviation under 50% of the average (warning) |

**Verification code:**

```
verify_response_time(piece_path):
    executable = piece_path + "/ejecutar"
    input = read_file(piece_path + "/ENTRADA.ejemplo")
    
    times = []
    
    for i in range(10):
        start = current_time()
        result = run_command(executable, stdin=input)
        end = current_time()
        
        if result.code != 0:
            fail("Error during performance test on iteration " + i)
        
        times.append(end - start)
    
    # Verify no run exceeds 5 seconds
    if max(times) > 5.0:
        fail("Response time exceeds 5 seconds: " + max(times))
    
    # Verify average
    average = sum(times) / length(times)
    if average > 2.0:
        warn("Average response time is high: " + average + "s")
    
    # Verify consistency
    deviation = standard_deviation(times)
    if deviation > average * 0.5:
        warn("Inconsistent response time: deviation " + deviation)
    
    report("Average response time: " + average + "s")
```
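The timing statistics reduce to a few lines of Python; splitting measurement from classification is a convenience of this sketch:

```python
import statistics
import time

def time_runs(run_once, iterations=10):
    """Time a zero-argument callable; return (max, average, stdev) in seconds."""
    times = []
    for _ in range(iterations):
        start = time.monotonic()
        run_once()
        times.append(time.monotonic() - start)
    return max(times), statistics.mean(times), statistics.stdev(times)

def classify(max_t, avg, dev):
    """Apply the Level 3 thresholds: only the 5 s maximum is a failure."""
    if max_t > 5.0:
        return "fail"
    if avg > 2.0 or dev > avg * 0.5:
        return "warn"
    return "ok"
```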

### 3.2 Error Handling

| Check | Criterion |
|--------------|----------|
| Errors to stderr | Errors are written to stderr, not stdout |
| Error format | Errors follow the FTU format with mandatory fields |
| Consistent codes | Exit codes follow the USEE convention |

**Verification code:**

```
verify_error_handling(piece_path):
    executable = piece_path + "/ejecutar"
    
    # Run with empty input
    result = run_command(executable, stdin="")
    
    if result.code == 0:
        fail("Empty input should produce an error")
    
    if result.stdout is not empty:
        warn("Errors should go to stderr, not stdout")
    
    # Verify error format
    error = parse_ftu(result.stderr)
    
    mandatory_error_fields = ["estado", "codigo", "mensaje"]
    for each field in mandatory_error_fields:
        if field not in error[0]:
            fail("Malformed error: missing field '" + field + "'")
    
    if error[0]["estado"] != "error":
        fail("The 'estado' field must be 'error' in error responses")
    
    # Verify valid exit code
    if result.code < 1 or result.code > 99:
        warn("Exit code outside the recommended range (1-99): " + result.code)
```

### 3.3 Clear Documentation

| Check | Criterion |
|--------------|----------|
| Short description is short | Maximum 100 characters, one sentence |
| 'Qué No Hace' has content | At least 2 listed items |
| Errors documented | Every error code from the tests appears in LEEME.md |
| Examples present | The Quick Start section has runnable code |

**Verification code:**

```
verify_clear_documentation(piece_path):
    metadata = read_ftu(piece_path + "/PIEZA.usee")
    readme = read_file(piece_path + "/LEEME.md")
    
    # Verify short description
    desc = metadata["descripcion_corta"]
    if length(desc) > 100:
        fail("Short description exceeds 100 characters")
    if desc.contains(". "):  # More than one sentence
        warn("Short description should be a single sentence")
    
    # Verify 'Qué No Hace'
    section = extract_section(readme, "## Qué No Hace")
    content_lines = count_non_empty_lines(section)
    if content_lines < 2:
        warn("The 'Qué No Hace' section should list at least 2 things")
    
    # Verify test error codes are documented
    error_codes = extract_test_error_codes(piece_path + "/pruebas")
    errors_section = extract_section(readme, "## Errores Comunes")
    
    for each code in error_codes:
        if code not in errors_section:
            warn("Error code '" + code + "' is not documented in LEEME.md")
    
    # Verify runnable example
    usage_section = extract_section(readme, "## Uso Rápido")
    if not contains_code_block(usage_section):
        warn("The 'Uso Rápido' section should include a code example")
```

### 3.4 Adherence to Conventions

| Check | Criterion |
|--------------|----------|
| Valid FTU keys | Every output key follows FTU rules |
| Correct booleans | Uses `si`/`no`, not `true`/`false` |
| ISO dates | Dates use the ISO 8601 format |
| No extra fields in errors | Errors only contain documented fields |

**Verification code:**

```
verify_conventions(piece_path):
    executable = piece_path + "/ejecutar"
    input = read_file(piece_path + "/ENTRADA.ejemplo")
    
    result = run_command(executable, stdin=input)
    output = parse_ftu(result.stdout)
    
    for each record in output:
        for each (key, value) in record:
            # Verify key format
            if not matches(key, "^[a-z][a-z0-9_.]*$"):
                fail("Invalid key in output: '" + key + "'")
            
            # Verify booleans
            if value.lower() in ["true", "false"]:
                warn("Use 'si'/'no' instead of 'true'/'false': " + key)
            
            # Verify dates
            if looks_like_date(value) and not is_iso8601(value):
                warn("Date does not follow ISO 8601 format: " + key + "=" + value)
```
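A Python sketch of the per-record convention checks. The `looks_like_date` heuristic is approximated here by a crude slash/dash pattern; a real verifier would use a more careful detector:

```python
import re

KEY_RE = re.compile(r"^[a-z][a-z0-9_.]*$")
ISO_DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}([T ]\d{2}:\d{2}:\d{2}.*)?$")
# Crude stand-in for looks_like_date: e.g. 15/01/2025 or 15-01-2025
NON_ISO_DATE_RE = re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$")

def check_record_conventions(record):
    """Return (failures, warnings) for one FTU output record."""
    failures, warnings = [], []
    for key, value in record.items():
        if not KEY_RE.match(key):
            failures.append("Invalid key in output: '%s'" % key)
        if value.lower() in ("true", "false"):
            warnings.append("Use 'si'/'no' instead of 'true'/'false': " + key)
        if NON_ISO_DATE_RE.match(value) and not ISO_DATE_RE.match(value):
            warnings.append("Date does not follow ISO 8601 format: %s=%s"
                            % (key, value))
    return failures, warnings
```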

---

## Verification Report

The verifier produces a structured report in FTU format:

### Report Structure

```
# USEE Verification Report
# Generated: 2025-01-15T10:30:00Z

pieza.nombre: login
pieza.version: 1.0.0
pieza.ruta: /path/to/login

verificacion.fecha: 2025-01-15T10:30:00Z
verificacion.version_verificador: 1.0.0
verificacion.resultado: aprobado

nivel1.resultado: aprobado
nivel1.archivos: ok
nivel1.metadatos: ok
nivel1.documentacion: ok
nivel1.pruebas_minimas: ok

nivel2.resultado: aprobado
nivel2.ejecutable: ok
nivel2.ejemplo: ok
nivel2.pruebas: ok
nivel2.pruebas_total: 8
nivel2.pruebas_pasaron: 8

nivel3.resultado: aprobado
nivel3.tiempo_respuesta_promedio_ms: 45
nivel3.manejo_errores: ok
nivel3.documentacion: ok
nivel3.convenciones: ok

advertencias: 0
---
# No warnings
```

### Report with Errors

```
pieza.nombre: login
pieza.version: 1.0.0

verificacion.fecha: 2025-01-15T10:30:00Z
verificacion.resultado: rechazado

nivel1.resultado: rechazado
nivel1.archivos: error
nivel1.metadatos: ok
nivel1.documentacion: ok
nivel1.pruebas_minimas: ok

errores: 1
---
tipo: error
nivel: 1
verificacion: archivos
codigo: archivo_faltante
mensaje: Mandatory file missing: ENTRADA.ejemplo
```

### Report with Warnings

```
pieza.nombre: login
pieza.version: 1.0.0

verificacion.fecha: 2025-01-15T10:30:00Z
verificacion.resultado: aprobado_con_advertencias

nivel1.resultado: aprobado
nivel2.resultado: aprobado
nivel3.resultado: aprobado_con_advertencias

advertencias: 2
---
tipo: advertencia
nivel: 3
verificacion: documentacion
codigo: que_no_hace_corto
mensaje: The 'Qué No Hace' section should list at least 2 things
---
tipo: advertencia
nivel: 3
verificacion: convenciones
codigo: booleano_incorrecto
mensaje: Use 'si'/'no' instead of 'true'/'false': activo
campo: activo
valor_actual: true
valor_sugerido: si
```

---

## Verification States

| State | Meaning | Can Publish |
|--------|-------------|----------------|
| `aprobado` | Every level passed with no warnings | ✓ Yes |
| `aprobado_con_advertencias` | Every level passed with warnings | ✓ Yes |
| `rechazado` | At least one level failed | ✗ No |

### Approval Rules

```
determine_state(results):
    # If any level has errors, reject
    if results.nivel1.has_errors():
        return "rechazado"
    if results.nivel2.has_errors():
        return "rechazado"
    if results.nivel3.has_errors():
        return "rechazado"
    
    # If there are warnings, approve with warnings
    if results.has_warnings():
        return "aprobado_con_advertencias"
    
    # Everything clean
    return "aprobado"
```
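The same rules in Python, taking per-level error counts and a warning total (the argument shapes are a choice of this sketch):

```python
def determine_state(level_errors, warnings):
    """level_errors: error counts for levels 1-3; warnings: total warnings."""
    # Any error at any level rejects the piece
    if any(count > 0 for count in level_errors):
        return "rechazado"
    # Warnings downgrade the approval but do not block publication
    if warnings > 0:
        return "aprobado_con_advertencias"
    return "aprobado"
```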

---

## Running the Verifier

### Command

```bash
usee-verificar /path/to/piece
```

### Options

| Option | Description |
|--------|-------------|
| `--nivel=N` | Run only up to level N (1, 2, or 3) |
| `--reporte=PATH` | Save the report to a file |
| `--json` | Output in JSON format |
| `--silencioso` | Only show the final result |
| `--verbose` | Show details of every check |
| `--timeout=S` | Timeout in seconds for tests (default: 30) |

### Usage Examples

```bash
# Full verification
usee-verificar ./my-piece

# Only verify structure
usee-verificar ./my-piece --nivel=1

# Save the report
usee-verificar ./my-piece --reporte=reporte.usee

# Silent verification for CI/CD
usee-verificar ./my-piece --silencioso
echo $?  # 0 = approved, 1 = rejected
```

### Terminal Output

```
$ usee-verificar ./login

═══════════════════════════════════════════════════════
  USEE Verifier v1.0.0
  Piece: login v1.0.0
═══════════════════════════════════════════════════════

Level 1: Structure
  ✓ Mandatory files
  ✓ PIEZA.usee metadata
  ✓ LEEME.md documentation
  ✓ Minimum tests (4 cases)

Level 2: Functionality
  ✓ Executable responds (--ayuda, --version)
  ✓ Example works
  ✓ Tests pass (8/8)

Level 3: Quality
  ✓ Response time (45ms average)
  ✓ Error handling
  ⚠ Documentation (1 warning)
  ✓ Conventions

───────────────────────────────────────────────────────
Warnings (1):
  • The 'Qué No Hace' section should list at least 2 things
───────────────────────────────────────────────────────

Result: APPROVED WITH WARNINGS

The piece can be published to the marketplace.
Consider addressing the warnings to improve quality.
```

---

## Verification in CI/CD

### GitHub Actions

```yaml
name: Verify USEE Piece

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Install USEE verifier
        run: |
          curl -sSL https://usee.dev/instalar.sh | sh
      
      - name: Verify piece
        run: |
          usee-verificar . --silencioso
      
      - name: Generate report
        if: always()
        run: |
          usee-verificar . --reporte=reporte.usee
      
      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: verification-report
          path: reporte.usee
```

### Pre-commit Script

```bash
#!/bin/bash
# .git/hooks/pre-commit

echo "Verifying USEE piece..."

if ! usee-verificar . --nivel=2 --silencioso; then
    echo "Error: The piece does not pass Level 2 verification"
    echo "Run 'usee-verificar .' to see details"
    exit 1
fi

echo "Verification passed"
exit 0
```

---

## Continuous Verification in the Marketplace

Once published, pieces are verified periodically:

### Periodic Checks

| Frequency | Check |
|------------|--------------|
| Every hour | Availability (for pieces with an HTTP adapter) |
| Daily | Full tests |
| Weekly | Dependency analysis |
| Monthly | Compatibility review |

### Computed Metrics

| Metric | Computation |
|---------|---------|
| `tiempo_operacional` | Days since publication without incompatible changes |
| `cambios_promedio_mes` | Updates in the last 30 days |
| `disponibilidad` | Percentage of successful checks |
| `tiempo_respuesta_p95` | 95th percentile of response times |
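Two of these metrics can be sketched in Python. The nearest-rank method shown for the percentile is one common choice; the marketplace's exact method is not specified here:

```python
def percentile_95(samples):
    """95th percentile via the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, -(-95 * len(ordered) // 100))  # ceil(0.95 * n), at least 1
    return ordered[rank - 1]

def disponibilidad(checks):
    """Percentage of successful checks; `checks` is a list of booleans."""
    return 100.0 * sum(checks) / len(checks) if checks else 0.0
```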

### Alerts

The marketplace notifies the creator when:

- A daily verification fails
- Response time increases significantly
- A dependency has known vulnerabilities
- Availability drops below 99%

---

## Summary

The USEE Verifier guarantees that every piece:

| Level | Guarantee |
|-------|----------|
| **1. Structure** | Exists and is well-formed |
| **2. Functionality** | Does what it says it does |
| **3. Quality** | Is reliable for production |

Only pieces that pass all three levels can be published to the marketplace, ensuring users always find pieces that work.

---

**USEE Verifier**: Automated trust.
