Quota Hardening Pass
Closed six tracking and hard-stop gaps so plan quotas can’t be bypassed in the wild: public dashboards, AI suggest endpoints, provider config tests, bulk invites, buffer-aware check_limit, and storage cache.
The platform changes every week. Here is what landed and why — without the marketing varnish.
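The buffer-aware check_limit is the interesting piece: instead of a binary allow/deny, it adds a small grace band above the plan limit before the hard stop kicks in. Here is a minimal sketch of that idea — the three-state result and the 10% default buffer are illustrative assumptions, not the shipped implementation.

```python
from enum import Enum

class LimitState(Enum):
    OK = "ok"               # under the plan limit
    IN_BUFFER = "in_buffer" # over the limit but inside the grace buffer; warn, don't block
    BLOCKED = "blocked"     # hard stop

def check_limit(current_usage: int, plan_limit: int, buffer_pct: float = 0.10) -> LimitState:
    """Buffer-aware quota check: allow a small overage band before hard-stopping.

    The buffer avoids blocking a tenant mid-action the moment they cross the line,
    while still guaranteeing a hard stop shortly after.
    """
    if current_usage < plan_limit:
        return LimitState.OK
    if current_usage < plan_limit * (1 + buffer_pct):
        return LimitState.IN_BUFFER
    return LimitState.BLOCKED
```

The three-state return lets callers show an upgrade nudge in the buffer band while only the BLOCKED state rejects the request.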
Automations now bind directly to metric, widget, and dashboard IDs. Drop in “Automate this widget” from any chart, evaluate conditions on the metric runtime, and manage everything from a new Dashboard Settings → Automations tab.
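Binding to concrete IDs rather than free-text targets is what makes "Automate this widget" possible. A sketch of what such a binding record might look like — the field names here are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomationBinding:
    """An automation bound to concrete dashboard objects, not free-text references."""
    automation_id: str
    dashboard_id: str
    metric_id: str              # conditions evaluate against the metric runtime
    widget_id: Optional[str]    # set when created via "Automate this widget"
    condition: str              # e.g. "value > threshold"
```

Because the binding carries real IDs, deleting or renaming a widget can be detected and surfaced instead of silently breaking the automation.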
Every automation run streams step-by-step over Server-Sent Events. The new AutomationFlowGraph component shows the run live, and any historical run can be replayed with the same visual.
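The step-by-step stream maps naturally onto the SSE wire format: one named event per step transition, with a JSON payload the AutomationFlowGraph can render. A minimal sketch of such a generator, under the assumption that each step emits a "running" and a "done" frame (the event and field names are illustrative):

```python
import json
from typing import Iterator

def sse_event(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: event name plus a JSON data line."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_run_steps(steps: list[str]) -> Iterator[str]:
    """Yield SSE frames for each step of an automation run, then a final run frame."""
    for i, step in enumerate(steps):
        yield sse_event("step", {"index": i, "name": step, "status": "running"})
        # ... execute the step here ...
        yield sse_event("step", {"index": i, "name": step, "status": "done"})
    yield sse_event("run", {"status": "complete"})
```

Because every frame is a plain serialized record, persisting them also gives you replay for free: a historical run is just the same frame sequence fed back through the same renderer.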
Validate-then-save with an AI repair loop, preview & backtest, cost preview, dead-letter queue, circuit breaker, version history, templates, and test events. Automations that fail repeatedly land in the DLQ with a fix path; nothing silently disappears.
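The circuit breaker and DLQ work together: repeated failures trip the breaker so the automation stops burning runs, and the failed events land in the dead-letter queue with a fix path. A minimal breaker sketch — thresholds, cooldown, and the half-open behavior here are generic assumptions, not the shipped tuning:

```python
import time
from typing import Optional

class CircuitBreaker:
    """Trip after `max_failures` consecutive failures; retry one trial after cooldown."""

    def __init__(self, max_failures: int = 3, cooldown_s: float = 60.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at: Optional[float] = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            # half-open: let one trial run through to probe recovery
            self.opened_at = None
            self.failures = 0
            return True
        return False  # still open: route the event to the dead-letter queue instead

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()
```

Events rejected while the breaker is open go to the DLQ rather than being dropped, which is what "nothing silently disappears" requires.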
Closed enforcement gaps for seven cycle metrics. The Plan Limits surface in the admin and the Billing UI now share one PlanLimit table, with monthly reset windows that actually fire.
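Reset windows "actually firing" hinges on computing the current cycle window correctly, including the month-boundary case. A sketch of that calculation, assuming a billing anchor day of 1–28 (the function name and signature are illustrative, not the PlanLimit schema itself):

```python
from datetime import datetime, timezone
from typing import Optional

def current_cycle_start(anchor_day: int, now: Optional[datetime] = None) -> datetime:
    """Start of the current monthly reset window for a billing anchor day (1-28).

    Usage before this instant counts toward the previous cycle and resets here.
    """
    now = now or datetime.now(timezone.utc)
    if now.day >= anchor_day:
        # anchor day already passed this month: window started this month
        return now.replace(day=anchor_day, hour=0, minute=0, second=0, microsecond=0)
    # anchor day not reached yet: window started on the anchor day last month
    year, month = (now.year - 1, 12) if now.month == 1 else (now.year, now.month - 1)
    return datetime(year, month, anchor_day, tzinfo=timezone.utc)
```

Restricting the anchor to days 1–28 sidesteps the "31st of February" class of bugs that quietly breaks naive reset logic.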
Persistent test diagnostics, friendly error messages, request timeouts, a new /test-config endpoint, health chips on the connection list, and a config viewer. No more “works on my laptop”.
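"Friendly error messages" usually means translating raw connection exceptions into something a user can act on, and persisting the result so support can see it later. A minimal sketch of that translation layer — the message table and record shape are assumptions for illustration:

```python
import socket

# Map low-level exception types to actionable, user-facing messages.
FRIENDLY_MESSAGES = {
    socket.timeout: "The provider did not respond in time. Check host/port and firewall rules.",
    ConnectionRefusedError: "Connection refused. Is the service running and reachable?",
    PermissionError: "Authentication failed. Re-check the credentials in the provider config.",
}

def run_config_test(probe) -> dict:
    """Run a provider connection probe and return a persistable diagnostic record."""
    try:
        probe()
        return {"ok": True, "message": "Connection test passed."}
    except Exception as exc:
        friendly = next(
            (msg for exc_type, msg in FRIENDLY_MESSAGES.items() if isinstance(exc, exc_type)),
            str(exc),  # fall back to the raw message for unmapped errors
        )
        return {"ok": False, "message": friendly, "detail": repr(exc)}
```

A /test-config endpoint can call this with a timeout-bounded probe and store the returned record, which is what turns "works on my laptop" into a diagnosable health chip.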
StarSchema becomes the canonical model; SemanticModelView curates AI prompts and validates dashboards. Widget registry, generate-time safe-degrade, schema-impact pre-flight, and validator caching closed the 13-item hardening backlog.
Closed AI-token and feature/quota gaps. New track_ai_tokens and track_ai_call dependencies plus a capture_ai_usage context manager so every model call books usage correctly.
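A usage-capturing context manager closes the most common gap: a model call that raises after consuming tokens and never gets booked. Here is a minimal sketch of how capture_ai_usage might work — the record shape and the in-memory USAGE_LOG stand-in are assumptions, not the real store:

```python
from contextlib import contextmanager

USAGE_LOG: list[dict] = []  # stand-in for the real usage store

@contextmanager
def capture_ai_usage(org_id: str, feature: str):
    """Book token usage for every model call made inside the block.

    The record is appended in `finally`, so usage is booked even when the
    model call raises partway through.
    """
    record = {"org_id": org_id, "feature": feature, "input_tokens": 0, "output_tokens": 0}
    try:
        yield record  # the caller adds token counts from each model response
    finally:
        USAGE_LOG.append(record)

# Usage: the caller accumulates counts from the provider's response metadata.
with capture_ai_usage("org_1", "dashboard_generate") as usage:
    usage["input_tokens"] += 1200
    usage["output_tokens"] += 350
```

Pairing this with request-scoped dependencies like track_ai_tokens means the quota check and the booking happen on the same code path, so they cannot drift apart.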
Removed the Discover surface, fixed AI Apply with a typed simulate endpoint, threaded star_schema_id through the metric runtime, and shipped the /impact endpoint.
Governed metrics live inside the AI context now. The widget validator self-corrects against the diagnose endpoint; measure auto-promotion lifts well-defined metrics into the curated layer automatically.
Full audit identified three critical and nine high-severity vulnerabilities (since closed), and mapped scalability bottlenecks. Pursuing SOC 2 Type II, ISO 27001, and GDPR alignment for the 1000+ org scale target.
Fixed the Anthropic SDK event-name mismatch and worked around the GCP load balancer’s gzip behavior that kills Server-Sent Events. Streaming AI dashboard generation actually streams.
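The usual defense against proxies that buffer or compress event streams is a set of response headers asking intermediaries to leave the stream alone. A sketch of the headers commonly used for this — whether each one is honored depends on the specific proxy, and this is not necessarily the exact workaround that shipped:

```python
# Headers that discourage intermediaries from buffering or compressing an SSE response.
SSE_HEADERS = {
    "Content-Type": "text/event-stream",
    # "no-transform" asks proxies not to apply transformations such as gzip,
    # which would otherwise buffer the stream until the response completes.
    "Cache-Control": "no-cache, no-transform",
    "Connection": "keep-alive",
    # Honored by some proxies (notably nginx) to disable response buffering;
    # harmless where unsupported.
    "X-Accel-Buffering": "no",
}
```

When headers alone are not enough, a common fallback is sending an initial padding comment frame large enough to flush intermediary buffers.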
Imports, refreshes, and data prep jobs run on a dedicated Cloud Tasks worker tier with per-tenant concurrency caps and auto-folders for the file tree.
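Per-tenant concurrency caps keep one tenant's bulk import from starving everyone else on the shared worker tier. A minimal in-process sketch of the cap (a real Cloud Tasks setup would enforce this across workers, e.g. via per-tenant queues; the class and defaults here are illustrative):

```python
import threading
from collections import defaultdict

class TenantConcurrencyCaps:
    """Cap in-flight jobs per tenant on a worker tier."""

    def __init__(self, max_per_tenant: int = 2):
        self.max_per_tenant = max_per_tenant
        self._in_flight: dict[str, int] = defaultdict(int)
        self._lock = threading.Lock()

    def try_acquire(self, tenant_id: str) -> bool:
        """Claim a slot, or return False so the task stays queued for a later retry."""
        with self._lock:
            if self._in_flight[tenant_id] >= self.max_per_tenant:
                return False
            self._in_flight[tenant_id] += 1
            return True

    def release(self, tenant_id: str) -> None:
        with self._lock:
            self._in_flight[tenant_id] = max(0, self._in_flight[tenant_id] - 1)
```

Returning False (rather than blocking) fits a task-queue model: the worker NACKs the task and the queue redelivers it with backoff.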
Full deployment to Cloud Run + Cloud SQL + GCS, with a one-command deploy.sh handling redeploys. The product name moved from “AI Dashboard” to LumenQube.
Row- and column-level security wired through the query engine and the AI prompt layer, so Agents can’t see (or leak) data the user can’t. Enhanced filters, drill-down, and AI security integration shipped together.
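Wiring security through the query engine means the restriction is applied when the SQL is built, not after results come back, so neither the user nor an Agent ever receives restricted data. A simplified sketch of that rewrite step (the function and its signature are illustrative; the real engine works on a query AST, not strings):

```python
def apply_security(requested_cols: list[str], allowed_cols: set[str],
                   row_filter_sql: str, table: str) -> str:
    """Rewrite a query so it only touches columns and rows this user may see."""
    # Column-level security: silently drop columns outside the user's grant.
    visible = [c for c in requested_cols if c in allowed_cols]
    if not visible:
        raise PermissionError("No visible columns for this user")
    sql = f"SELECT {', '.join(visible)} FROM {table}"
    # Row-level security: append the tenant/role predicate to every query.
    if row_filter_sql:
        sql += f" WHERE {row_filter_sql}"
    return sql
```

Feeding the same allowed-column list into the AI prompt layer is what keeps the Agent from even mentioning fields the user cannot query.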
Replaced the Pandas-based query path with DuckDB, hardened for multi-tenancy and able to read Parquet directly from GCS.
Hard tenant isolation across data, dashboards, and AI prompts. SAML and OIDC SSO. Stripe-powered Free / Pro / Business / Enterprise tiers with usage tracking, quota enforcement, and invoicing.