Architecture
The Native App is a hub-and-spoke deployment. The hub is a small Zeotap cloud control plane that handles browser auth and reverse-proxies UI traffic. The spokes are the SPCS services that run inside each customer’s Snowflake account. Each spoke is fully self-contained — it owns its metadata store, its event buffer, and its connection to the bound warehouse — and never reaches across to another customer’s install.
SPCS service inventory
Zeotap runs five long-running SPCS services and a varying number of ephemeral job services inside your account.
| Service | Role | Runtime |
|---|---|---|
| `signalsmith_control_plane` | Same Zeotap API used by the cloud product. Serves UI requests forwarded by the cloud reverse-proxy. | Long-running |
| `signalsmith_postgres` | Metadata store for workspaces, sources, syncs, audiences, identity graphs, journeys, and run history. Persists to an SPCS block volume. | Long-running |
| `signalsmith_redis` | Streaming buffer for the event hot path. Decouples ingest from the forwarder so SDKs aren’t blocked on destination latency. | Long-running |
| `signalsmith_event_ingest` | Authenticates the cloud’s forwarded ingest request and appends the validated payload to the streaming buffer. | Long-running |
| `signalsmith_event_forwarder` | Batch-consumes the streaming buffer and runs two lanes per batch: forwarding rules to destinations, and warehouse delivery to `RAW_EVENTS`. | Long-running |
| `signalsmith_job_*` | Ephemeral SPCS job services for sync runs, loaders, trait computations, identity resolution, journey execution, and AI agent sessions. | One-shot |
Service-to-service connections
The control plane is the only service that talks to all the others. It reads and writes Postgres for metadata, publishes to Redis when it triggers an event-ingest path that needs in-process buffering, and dispatches sync, loader, identity, trait, journey, and agent work as one-shot SPCS job services. The event-ingest service writes to Redis only; the forwarder consumes from Redis and writes both to destinations (through the bound EAIs) and to your Snowflake databases via the warehouse you bound at install.
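The connection graph above can be summarised as a small adjacency map. This is purely illustrative — the service names mirror the inventory table, but the map is hand-written from the description, not generated from any real deployment manifest:

```python
# Service-to-service edges as described above: the control plane is the hub,
# event ingest writes only to Redis, and the forwarder reads Redis and writes
# outward to destinations and the bound warehouse.
EDGES = {
    "signalsmith_control_plane": {
        "signalsmith_postgres",   # metadata reads/writes
        "signalsmith_redis",      # publishes on the in-process buffering path
        "signalsmith_job_*",      # dispatches one-shot job services
    },
    "signalsmith_event_ingest": {
        "signalsmith_redis",      # append-only: writes to the stream, nothing else
    },
    "signalsmith_event_forwarder": {
        "signalsmith_redis",      # batch-consumes the stream
        "destinations (EAIs)",    # forwarding-rules lane
        "customer warehouse",     # warehouse-delivery lane
    },
}
```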
Job services don’t have a long-running connection to the rest of the graph. When the control plane creates one, it passes the work payload through SPCS service arguments. The job opens its own connections to Postgres (for run-state writes), to the customer warehouse (for SQL execution), and to whatever EAIs the work requires. When the job finishes, SPCS removes the service.
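The hand-off can be sketched as a serialize/deserialize pair: the control plane packs the work payload into argument strings, and the job's entrypoint unpacks them before opening its own connections. The argument names and payload shape here are assumptions for illustration — the real contract is internal to Zeotap:

```python
import json

def build_job_args(job_type: str, payload: dict) -> list[str]:
    """Control-plane side: serialize the work payload into the argv-style
    service arguments an SPCS job service receives at creation time."""
    return ["--job-type", job_type, "--payload", json.dumps(payload)]

def job_entrypoint(argv: list[str]) -> dict:
    """Job side: recover the payload from the service arguments. The job
    would then open its own Postgres, warehouse, and EAI connections
    (elided here) and exit when the run completes."""
    args = dict(zip(argv[::2], argv[1::2]))
    return {
        "job_type": args["--job-type"],
        "payload": json.loads(args["--payload"]),
    }

# Round-trip: what the control plane packs is what the job unpacks.
spec = build_job_args("sync_run", {"sync_id": "s-42", "workspace": "w-1"})
result = job_entrypoint(spec)
```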
The proxy chain
Browser-to-SPCS traffic takes four hops, each enforcing a different authentication contract.
- Browser → cloud. The user signs into `composable.zeotap.com` with email or Google. The UI sends an HTTPS request with a Firebase ID token in the `Authorization` header.
- Cloud → SPCS public ingress. The cloud control plane validates the Firebase token, looks up which Native App install the user’s workspace is bound to, fetches the encrypted keypair credential for that install, and re-signs the request with a JWT bearer based on the `${BRAND}_PROXY_USER` keypair. It POSTs the original request body to the install’s SPCS public ingress URL.
- SPCS public ingress → control-plane container. Snowflake’s SPCS gateway terminates TLS, validates the JWT against the proxy user’s registered RSA public key, and forwards the request to the `signalsmith_control_plane` service over the internal SPCS network.
- Control plane → Postgres / warehouse. The control plane processes the request locally — read the metadata, run a sync, query the bound database — and writes the response back through the same chain in reverse.
The cloud never persists Firebase tokens; it mints a JWT per request and discards it. The keypair private key never leaves the cloud’s encrypted credential store. The customer’s data never round-trips through the cloud — only API request bodies and response bodies pass through, and the cloud doesn’t store them.
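The per-request JWT follows Snowflake's documented key-pair format, in which the issuer claim embeds a SHA-256 fingerprint of the registered public key. The sketch below builds only the claim set; the RS256 signature over it (which needs a crypto library and the private key) is elided, since the point here is the short-lived, mint-and-discard shape of the token. Account and user names are invented placeholders:

```python
import base64
import hashlib
import time

def snowflake_jwt_claims(account: str, user: str, public_key_der: bytes,
                         lifetime_s: int = 60) -> dict:
    """Claim set for a Snowflake key-pair JWT: the issuer is the qualified
    user plus a SHA-256 fingerprint of the user's registered public key,
    and the token is short-lived so it can be minted per request and
    discarded, as the proxy chain above requires."""
    fingerprint = base64.b64encode(hashlib.sha256(public_key_der).digest()).decode()
    qualified_user = f"{account.upper()}.{user.upper()}"
    now = int(time.time())
    return {
        "iss": f"{qualified_user}.SHA256:{fingerprint}",
        "sub": qualified_user,
        "iat": now,
        "exp": now + lifetime_s,  # minted per request, never persisted
    }

# Placeholder DER bytes stand in for the proxy user's real public key.
claims = snowflake_jwt_claims("myorg-acct1", "signalsmith_proxy_user", b"\x30\x82")
```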
Event ingestion data flow
Events take a separate path optimised for high throughput and at-least-once delivery.
- SDK to cloud ingest. Web, mobile, and server SDKs POST to `events.zeotap.com/v1/track` regardless of whether the workspace runs cloud or Native App mode. The cloud ingest validates the write key.
- Cloud ingest to SPCS event-ingest. When the write key is bound to a Native App install, the cloud forwards the validated payload to that install’s `signalsmith_event_ingest` service through the same proxy chain used for UI calls.
- Event-ingest to Redis stream. The event-ingest service appends the payload to a Redis stream and returns 200 to the cloud, which returns 200 to the SDK. End-to-end latency from the SDK perspective is the proxy round-trip plus a Redis `XADD`.
- Forwarder fan-out. The forwarder polls the Redis stream and runs two lanes per batch:
  - Forwarding rules — real-time delivery to destinations through the bound EAIs. Server-side CAPI batchers, generic webhooks, and per-platform adapters all run in this lane.
  - Warehouse delivery — buffered, batched INSERTs to `CDP_EVENTS.RAW_EVENTS` in the database your destination targets. Same buffer policy (size and time triggers) as cloud-mode warehouse delivery, but the write happens through the in-process Snowflake driver session — no Avro, no GCS staging, no `COPY INTO`.
- At-least-once with `messageId` dedup. SDKs already de-dup their own retries by `messageId`. Custom server-side integrations should set a stable `messageId` per event so the forwarder’s idempotent destination writes can de-dup retries downstream.
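Why a stable `messageId` matters can be shown with a minimal dedup sketch. The bounded in-memory set below stands in for whatever dedup store the real forwarder uses — that detail is an assumption, not a description of Zeotap internals:

```python
from collections import deque

class IdempotentForwarder:
    """At-least-once delivery collapses to effectively-once when retried
    events carry the same messageId: the second delivery attempt is
    recognised and dropped instead of duplicated downstream."""

    def __init__(self, window: int = 10_000):
        self._seen: set[str] = set()
        self._order: deque[str] = deque()  # eviction order for the bounded window
        self._window = window
        self.delivered: list[dict] = []

    def forward(self, event: dict) -> bool:
        mid = event["messageId"]
        if mid in self._seen:
            return False  # duplicate retry: drop, don't re-deliver
        self._seen.add(mid)
        self._order.append(mid)
        if len(self._order) > self._window:
            self._seen.discard(self._order.popleft())
        self.delivered.append(event)  # deliver to destination / RAW_EVENTS
        return True

fwd = IdempotentForwarder()
first = fwd.forward({"messageId": "m-1", "event": "page_view"})
retry = fwd.forward({"messageId": "m-1", "event": "page_view"})  # retried batch
```

Without a stable `messageId` (for example, a fresh UUID per retry), the two calls above would be indistinguishable and the event would land twice.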
Schemas Zeotap creates
In each Snowflake database you configure as a source or destination:
| Schema | Contains |
|---|---|
| `CDP_EVENTS` | `RAW_EVENTS` table holding all events the forwarder writes. |
| `CDP_PLANNER` | Plan tables for sync orchestration (a/b slot rotation for diff-based CDC). |
| `CDP_AUDIT` | Audit tables for sync runs and pipeline observability. |
Inside the application’s own footprint (`zeotap.app_data`):
| Object | Contains |
|---|---|
| `APP_VERSION` | The currently installed application version. |
| `CLOUD_REGISTRATION` | The cloud-issued install ID and `claim_url` written by `register_with_cloud`. |
| `CLOUD_CREDENTIALS` | The keypair private key minted by `GENERATE_PROXY_KEYPAIR`. |
| `REFERENCE_BINDINGS` | The EAI references bound by the post-install scripts. |
The application schemas are inside the app sandbox — only the application’s own procedures can read them, and they’re invisible to your other Snowflake roles.
Why this shape
The split between long-running services and ephemeral job services is what keeps the steady-state SPCS bill predictable. The five long-running services use a fixed compute pool node; sync, loader, identity, and trait work — which is bursty and concurrent — fans out to job services that exit when the run completes. The block volume on Postgres is the only persistent storage; everything else is rehydrated from Postgres on service restart.
The reverse-proxy chain rather than a direct browser-to-SPCS connection is what lets Zeotap ship the same UI to cloud and Native App customers. The browser doesn’t know whether the workspace it’s looking at runs in the cloud or in an SPCS install — both look like normal `composable.zeotap.com` API calls.