# Database Schemas

Quint uses PostgreSQL with the async SQLAlchemy ORM. Event and score tables are RANGE-partitioned by time for efficient querying and archival, and row-level security enforces multi-tenant isolation.
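Multi-tenant isolation via row-level security keys every policy on `customer_id`. A minimal sketch of what such a policy could look like — the `app.current_customer` session-variable name and the policy name are illustrative assumptions, not taken from the schema:

```sql
-- Illustrative RLS sketch; variable and policy names are assumptions.
ALTER TABLE scores ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON scores
    USING (customer_id = current_setting('app.current_customer')::uuid);

-- The application would set the variable per connection/transaction:
-- SET LOCAL app.current_customer = '<customer-uuid>';
```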
## Entity Relationship

## Models
### Customer
Accounts and metadata for each customer organization.
```python
class Customer(Base):
    __tablename__ = "customers"

    id: UUID                  # Primary key
    name: Text                # Organization name
    api_key_hash: Text        # SHA-256 hash of API key
    model_tier: Enum          # starter | pro | enterprise
    policies: JSONB           # Customer security policies
    created_at: DateTime(tz)
    updated_at: DateTime(tz)
```
| Column | Type | Description |
|---|---|---|
| `id` | UUID | Primary key |
| `name` | Text | Organization name |
| `api_key_hash` | Text | SHA-256 hash (never stores the raw key) |
| `model_tier` | Enum | `starter` (10K/day), `pro` (100K/day), `enterprise` (1M/day) |
| `policies` | JSONB | Security policy configuration |
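Because only the SHA-256 digest is stored, authentication hashes the presented key and looks up the digest. A sketch, assuming API keys are plain UTF-8 strings (the example key is made up):

```python
import hashlib

def hash_api_key(api_key: str) -> str:
    """Return the hex SHA-256 digest stored in customers.api_key_hash."""
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()

# The raw key never touches the database; lookups compare digests only.
digest = hash_api_key("example-api-key")
assert len(digest) == 64  # hex-encoded 256-bit digest
```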
### AgentEvent
Incoming security events from agent interception.
```python
class AgentEvent(Base):
    __tablename__ = "agent_events"

    id: UUID                   # Partition key
    customer_id: UUID          # FK to Customer
    event_data: JSONB          # Full event payload
    received_at: DateTime(tz)  # Partition key (RANGE partitioned)
    status: Enum               # pending | scoring | scored | failed
    retry_count: Int           # Default 0
```
The `agent_events` table is partitioned by RANGE on `received_at` for efficient time-based queries and archival.
Indexes:
- `idx_events_pending` — composite index on `status = 'pending'` events for orchestrator pickup
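The partition layout and pickup index can be sketched in DDL. Column types, partition names, bounds, and the exact index definition below are illustrative assumptions layered on the model above:

```sql
-- Sketch: parent table RANGE-partitioned on received_at.
CREATE TABLE agent_events (
    id           uuid        NOT NULL,
    customer_id  uuid        NOT NULL REFERENCES customers (id),
    event_data   jsonb       NOT NULL,
    received_at  timestamptz NOT NULL,
    status       text        NOT NULL DEFAULT 'pending',
    retry_count  integer     NOT NULL DEFAULT 0,
    PRIMARY KEY (id, received_at)   -- PK must include the partition column
) PARTITION BY RANGE (received_at);

-- One partition per time window (name and bounds illustrative).
CREATE TABLE agent_events_2025_01 PARTITION OF agent_events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

-- Pickup index: pending rows, oldest first (illustrative shape).
CREATE INDEX idx_events_pending ON agent_events (status, received_at)
    WHERE status = 'pending';
```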
### Score
Risk assessment results with full decomposition.
```python
class Score(Base):
    __tablename__ = "scores"

    id: UUID                   # Partition key
    event_id: UUID             # FK to AgentEvent
    customer_id: UUID          # FK to Customer
    rule_score: Int            # Forward-chaining engine score
    llm_score: Int             # LLM score (if invoked)
    graph_score: Int           # GraphReasoner composite score
    gnn_score: Float           # GNN structural score
    final_score: Int           # Final 0-100 score
    risk_level: Enum           # none | low | medium | high | critical
    reasoning: Text            # Human-readable explanation
    violations: JSONB          # Array of policy violations
    cache_hit: Bool            # Whether cached score was used
    llm_fallback: Bool         # Whether LLM was invoked
    scoring_source: Text       # "graph_reasoner" | "graph_reasoner+llm"
    compliance_refs: JSONB     # Compliance article references
    mitigations: JSONB         # Recommended remediation
    score_components: JSONB    # Score breakdown array
    behavioral_flags: JSONB    # Flagged behaviors
    score_decomposition: JSONB # 4-layer decomposition
    confidence: Float          # Model confidence 0.0-1.0
    scored_at: DateTime(tz)    # Partition key
```
Indexes:
- `idx_scores_customer` — for per-customer score queries
- `idx_scores_risk` — for `risk_level` filtering
### EventCache
Signature-based caching for identical events.
```python
class EventCache(Base):
    __tablename__ = "event_cache"

    id: UUID              # Primary key
    customer_id: UUID     # FK to Customer
    signature_hash: Text  # Hash of event essentials
    score: JSONB          # Cached score response
    cached_at: DateTime(tz)
```
Unique constraint: `(customer_id, signature_hash)` — one cached score per event signature per customer.
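The cache key can be derived by hashing a canonical serialization of the event's essential fields. A sketch — which fields count as "essentials" is an assumption here, not taken from the schema:

```python
import hashlib
import json

def event_signature(event_data: dict) -> str:
    """Stable hash of event essentials, as stored in event_cache.signature_hash."""
    # Field selection below is illustrative.
    essentials = {
        "tool": event_data.get("tool"),
        "action": event_data.get("action"),
        "args": event_data.get("args"),
    }
    # sort_keys + compact separators yield a canonical byte string.
    canonical = json.dumps(essentials, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Identical essentials hash identically regardless of key order, so the
# (customer_id, signature_hash) unique constraint deduplicates scores.
a = event_signature({"tool": "shell", "action": "exec", "args": ["ls"]})
b = event_signature({"args": ["ls"], "action": "exec", "tool": "shell"})
assert a == b
```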
### AuditLog
Operations audit trail for all system actions.
```python
class AuditLog(Base):
    __tablename__ = "audit_log"

    id: BigInteger              # Auto-increment PK
    customer_id: UUID
    container_id: Text          # Modal container ID (nullable)
    action: Text                # Action performed
    event_count: Int            # Number of events (nullable)
    metadata_: JSONB            # Additional metadata ("metadata" is reserved by SQLAlchemy)
    gpu_memory_bytes: BigInteger  # GPU memory usage (nullable)
    created_at: DateTime(tz)
```
### DeadLetterEvent
Failed events stored for investigation and retry.
```python
class DeadLetterEvent(Base):
    __tablename__ = "dead_letter_events"

    id: UUID                 # Primary key
    original_event_id: UUID  # Original event reference
    customer_id: UUID
    event_data: JSONB        # Full event payload
    error_type: Text         # Error classification
    error_detail: Text       # Error message (nullable)
    retry_count: Int         # How many times retried
    failed_at: DateTime(tz)
```
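Replays from the dead-letter table are typically spaced out as `retry_count` grows. A minimal exponential-backoff sketch — the base delay, cap, and jitter-free policy are all assumptions, not part of the schema:

```python
def retry_delay_seconds(retry_count: int, base: float = 2.0, cap: float = 300.0) -> float:
    """Illustrative backoff for re-queuing a DeadLetterEvent: base * 2^n, capped."""
    return min(cap, base * (2 ** retry_count))

# 2s, 4s, 8s, ... capped at 5 minutes
delays = [retry_delay_seconds(n) for n in range(9)]
assert delays[:3] == [2.0, 4.0, 8.0]
assert delays[-1] == 300.0
```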
## Database Queries

### Events
- `create_event(session, customer_id, event_data) → AgentEvent`
- `create_events_batch(session, customer_id, events) → list[AgentEvent]`
- `get_pending_events(session, limit=1000) → dict[UUID, list[AgentEvent]]`
- `get_event_by_id(session, event_id) → AgentEvent | None`
- `update_event_status(session, event_id, status) → None`
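`get_pending_events` returns pending rows grouped by customer so the orchestrator can batch per tenant. The grouping step can be sketched without the ORM, using plain dicts as stand-ins for `AgentEvent` rows:

```python
from collections import defaultdict
from typing import Any

def group_by_customer(rows: list[dict[str, Any]]) -> dict[str, list[dict[str, Any]]]:
    """Group fetched pending events by customer_id (dicts stand in for ORM rows)."""
    grouped: dict[str, list[dict[str, Any]]] = defaultdict(list)
    for row in rows:
        grouped[row["customer_id"]].append(row)
    return dict(grouped)

rows = [
    {"customer_id": "c1", "id": "e1"},
    {"customer_id": "c2", "id": "e2"},
    {"customer_id": "c1", "id": "e3"},
]
batches = group_by_customer(rows)
assert [e["id"] for e in batches["c1"]] == ["e1", "e3"]
```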
### Scores
- `create_score(session, score_data) → Score`
- `get_scores_by_customer(session, customer_id, page, per_page, risk_level, from_date, to_date) → tuple[list[Score], int]`
- `get_score_by_event(session, event_id) → Score | None`
- `get_customer_summary(session, customer_id) → dict`
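`get_scores_by_customer` returns a page of rows plus the total match count. The offset arithmetic behind such pagination is simple; this sketch assumes 1-indexed pages:

```python
def page_bounds(page: int, per_page: int) -> tuple[int, int]:
    """LIMIT and OFFSET for a 1-indexed page (illustrative helper)."""
    offset = (page - 1) * per_page
    return per_page, offset

limit, offset = page_bounds(page=3, per_page=50)
assert (limit, offset) == (50, 100)
```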
### Policies
- `get_customer_policies(session, customer_id) → dict`
- `update_customer_policies(session, customer_id, policies) → Customer | None`
- `get_customer_by_api_key_hash(session, api_key_hash) → Customer | None`