
Architecture

The Native App is a hub-and-spoke deployment. The hub is a small Zeotap cloud control plane that handles browser auth and reverse-proxies UI traffic. The spokes are the SPCS services that run inside each customer’s Snowflake account. Each spoke is fully self-contained — it owns its metadata store, its event buffer, and its connection to the bound warehouse — and never reaches across to another customer’s install.

SPCS service inventory

Zeotap runs five long-running SPCS services and a varying number of ephemeral job services inside your account.

| Service | Role | Runtime |
| --- | --- | --- |
| signalsmith_control_plane | Same Zeotap API used by the cloud product. Serves UI requests forwarded by the cloud reverse-proxy. | Long-running |
| signalsmith_postgres | Metadata store for workspaces, sources, syncs, audiences, identity graphs, journeys, and run history. Persists to an SPCS block volume. | Long-running |
| signalsmith_redis | Streaming buffer for the event hot path. Decouples ingest from the forwarder so SDKs aren’t blocked on destination latency. | Long-running |
| signalsmith_event_ingest | Authenticates the cloud’s forwarded ingest request and appends the validated payload to the streaming buffer. | Long-running |
| signalsmith_event_forwarder | Batch-consumes the streaming buffer and runs two lanes per batch: forwarding rules to destinations, and warehouse delivery to RAW_EVENTS. | Long-running |
| signalsmith_job_* | Ephemeral SPCS job services for sync runs, loaders, trait computations, identity resolution, journey execution, and AI agent sessions. | One-shot |

Service-to-service connections

Diagram: service-to-service connections inside the SPCS compute pool

The control plane is the only service that talks to all the others. It reads and writes Postgres for metadata, publishes to Redis when it triggers an event-ingest path that needs in-process buffering, and dispatches sync, loader, identity, trait, journey, and agent work as one-shot SPCS job services. The event-ingest service writes to Redis only; the forwarder consumes from Redis and writes both to destinations (through the bound EAIs) and to your Snowflake databases via the warehouse you bound at install.
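The connection graph described above can be sketched as a caller-to-callee adjacency map. The structure mirrors the service inventory; the comments on each edge are taken from the prose, and the helper function is purely illustrative:

```python
# Illustrative sketch of the service-to-service connection graph.
# Edges point from caller to callee; names mirror the service inventory.
CONNECTIONS = {
    "signalsmith_control_plane": {
        "signalsmith_postgres",      # metadata reads and writes
        "signalsmith_redis",         # publishes when triggering in-process buffering
        "signalsmith_event_ingest",  # event-ingest paths it initiates
        "signalsmith_job_*",         # dispatches one-shot job services
    },
    "signalsmith_event_ingest": {"signalsmith_redis"},     # writes to the stream only
    "signalsmith_event_forwarder": {"signalsmith_redis"},  # consumes the stream
}

def fan_out(service):
    """How many internal services a given service talks to."""
    return len(CONNECTIONS.get(service, set()))
```

The asymmetry is visible immediately: the control plane fans out to everything, while the two event services each touch only Redis.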

Job services don’t have a long-running connection to the rest of the graph. When the control plane creates one, it passes the work payload through SPCS service arguments. The job opens its own connections to Postgres (for run-state writes), to the customer warehouse (for SQL execution), and to whatever EAIs the work requires. When the job finishes, SPCS removes the service.
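As a sketch, a job's entrypoint might receive its work payload as a JSON-encoded service argument and parse it before opening its own connections. The argument name and payload fields below are assumptions for illustration, not the actual contract:

```python
import argparse
import json

def parse_job_payload(argv):
    """Parse the work payload a hypothetical job receives via SPCS service arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--payload", required=True, help="JSON-encoded work payload")
    args = parser.parse_args(argv)
    payload = json.loads(args.payload)
    # A real job would next open its own connections: Postgres for run-state
    # writes, the customer warehouse for SQL, and whichever EAIs the work needs.
    return payload

# Usage sketch (field names are invented):
# parse_job_payload(["--payload", '{"job_type": "sync_run", "sync_id": "abc"}'])
```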

The proxy chain

Browser-to-SPCS traffic takes four hops, each enforcing a different authentication contract.

Diagram: browser-to-SPCS proxy chain
  1. Browser → cloud. The user signs into composable.zeotap.com with email or Google. The UI sends an HTTPS request with a Firebase ID token in the Authorization header.
  2. Cloud → SPCS public ingress. The cloud control plane validates the Firebase token, looks up which Native App install the user’s workspace is bound to, fetches the encrypted keypair credential for that install, and re-signs the request with a JWT bearer based on the ${BRAND}_PROXY_USER keypair. It POSTs the original request body to the install’s SPCS public ingress URL.
  3. SPCS public ingress → control-plane container. Snowflake’s SPCS gateway terminates TLS, validates the JWT against the proxy user’s registered RSA public key, and forwards the request to the signalsmith_control_plane service over the internal SPCS network.
  4. Control plane → Postgres / warehouse. The control plane processes the request locally — read the metadata, run a sync, query the bound database — and writes the response back through the same chain in reverse.

The cloud never persists Firebase tokens; it mints a JWT per request and discards it. The keypair private key never leaves the cloud’s encrypted credential store. The customer’s data never round-trips through the cloud — only API request bodies and response bodies pass through, and the cloud doesn’t store them.
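The per-request minting in step 2 can be sketched as claim-set construction, assuming Snowflake's standard keypair-JWT convention (the issuer embeds the SHA-256 fingerprint of the registered public key). Account and user names are placeholders, and the RS256 signing step with the proxy user's private key is deliberately omitted:

```python
import time

def mint_proxy_jwt_claims(account, user, public_key_fp, lifetime_s=60):
    """Build the claim set for a short-lived, per-request JWT.

    Follows Snowflake's keypair-JWT shape: issuer = qualified user plus the
    SHA-256 fingerprint of the registered RSA public key. The token is minted
    per request and discarded, matching the behaviour described above.
    """
    now = int(time.time())
    qualified_user = f"{account.upper()}.{user.upper()}"
    return {
        "iss": f"{qualified_user}.SHA256:{public_key_fp}",
        "sub": qualified_user,
        "iat": now,
        "exp": now + lifetime_s,  # short lifetime; one token per request
    }
```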

Event ingestion data flow

Events take a separate path optimised for high throughput and at-least-once delivery.

  1. SDK to cloud ingest. Web, mobile, and server SDKs POST to events.zeotap.com/v1/track regardless of whether the workspace runs cloud or Native App mode. The cloud ingest validates the write key.
  2. Cloud ingest to SPCS event-ingest. When the write key is bound to a Native App install, the cloud forwards the validated payload to that install’s signalsmith_event_ingest service through the same proxy chain used for UI calls.
  3. Event-ingest to Redis stream. The event-ingest service appends the payload to a Redis stream and returns 200 to the cloud, which returns 200 to the SDK. End-to-end latency from the SDK perspective is the proxy round-trip plus a Redis XADD.
  4. Forwarder fan-out. The forwarder polls the Redis stream and runs two lanes per batch:
    • Forwarding rules — real-time delivery to destinations through the bound EAIs. Server-side CAPI batchers, generic webhooks, and per-platform adapters all run in this lane.
    • Warehouse delivery — buffered batched INSERTs to CDP_EVENTS.RAW_EVENTS in the database your destination targets. Same buffer policy (size and time triggers) as cloud-mode warehouse delivery, but the write happens through the in-process Snowflake driver session — no Avro, no GCS staging, no COPY INTO.
  5. At-least-once with messageId dedup. Zeotap’s SDKs attach a stable messageId to each event, so their own retries de-dup automatically. Custom server-side integrations should likewise set a stable messageId per event so the forwarder’s idempotent destination writes can de-dup retries downstream.
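Steps 4 and 5 can be modelled as a small in-memory sketch: de-dup by messageId, run the forwarding lane per event, and flush the warehouse lane on a size-or-time trigger. Class, field, and threshold names are invented; the real service consumes a Redis stream and issues batched INSERTs:

```python
import time

class ForwarderSketch:
    """Toy model of the forwarder's two lanes with messageId de-dup."""

    def __init__(self, max_batch=3, max_age_s=5.0):
        self.max_batch = max_batch       # size trigger for the warehouse buffer
        self.max_age_s = max_age_s       # time trigger for the warehouse buffer
        self.seen_ids = set()            # messageId de-dup for at-least-once retries
        self.buffer = []                 # pending rows for RAW_EVENTS
        self.buffer_opened_at = None
        self.forwarded = []              # lane 1: real-time destination deliveries
        self.inserted = []               # lane 2: flushed warehouse batches

    def consume(self, event, now=None):
        now = time.monotonic() if now is None else now
        if event["messageId"] in self.seen_ids:
            return                       # redelivered retry: drop it
        self.seen_ids.add(event["messageId"])
        self.forwarded.append(event)     # lane 1: forwarding rules fire per event
        if not self.buffer:
            self.buffer_opened_at = now
        self.buffer.append(event)        # lane 2: buffered warehouse delivery
        if (len(self.buffer) >= self.max_batch
                or now - self.buffer_opened_at >= self.max_age_s):
            self.inserted.append(self.buffer)  # stands in for a batched INSERT
            self.buffer = []
```

Feeding the same messageId twice forwards it once; the buffer flushes as soon as either trigger fires, never both lanes blocking each other.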

Schemas Zeotap creates

In each Snowflake database you configure as a source or destination:

| Schema | Contains |
| --- | --- |
| CDP_EVENTS | RAW_EVENTS table holding all events the forwarder writes. |
| CDP_PLANNER | Plan tables for sync orchestration (a/b slot rotation for diff-based CDC). |
| CDP_AUDIT | Audit tables for sync runs and pipeline observability. |

Inside the application’s own footprint (zeotap.app_data):

| Object | Contains |
| --- | --- |
| APP_VERSION | The currently installed application version. |
| CLOUD_REGISTRATION | The cloud-issued install ID and claim_url written by register_with_cloud. |
| CLOUD_CREDENTIALS | The keypair private key minted by GENERATE_PROXY_KEYPAIR. |
| REFERENCE_BINDINGS | The EAI references bound by the post-install scripts. |

The application schemas are inside the app sandbox — only the application’s own procedures can read them, and they’re invisible to your other Snowflake roles.

Why this shape

The split between long-running services and ephemeral job services is what keeps the steady-state SPCS bill predictable. The five long-running services use a fixed compute pool node; sync, loader, identity, and trait work — which is bursty and concurrent — fans out to job services that exit when the run completes. The block volume on Postgres is the only persistent storage; everything else is rehydrated from Postgres on service restart.

Routing browser traffic through the reverse-proxy chain, rather than connecting the browser directly to SPCS, is what lets Zeotap ship the same UI to cloud and Native App customers. The browser doesn’t know whether the workspace it’s looking at runs in the cloud or in an SPCS install — both look like normal composable.zeotap.com API calls.
