System status

Automated HTTP checks from inta.dev. Machine-readable snapshot: /api/status.json.
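
For example, a minimal TypeScript sketch that fetches and prints the snapshot. The absolute URL assumes the endpoint is served from inta.dev, and the JSON schema is not documented on this page, so the payload is printed as-is rather than typed:

    // Fetch the machine-readable status snapshot and pretty-print it.
    const res = await fetch("https://inta.dev/api/status.json");
    if (!res.ok) throw new Error(`snapshot request failed: HTTP ${res.status}`);
    const snapshot: unknown = await res.json();
    console.log(JSON.stringify(snapshot, null, 2));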

How we measure uptime

Short notes so you know what this page represents.

  • Synthetic checks: automated HTTP requests from our hosting provider to each public URL below; this is not real-user (RUM) monitoring.
  • Schedule: production runs about once per minute, driven by the deployment’s cron configuration.
  • A check passes when the HTTP status is below 500; timeouts and network errors count as failed (see the sketch after this list).
  • Timelines, the incident log, latency trends, and the headline uptime percentage use stored checks from the last 2160 hours (90 days, UTC), capped at 5000 samples per load; rows expire after roughly 14 days via a MongoDB TTL index. The headline figure also treats a run as downtime when an active operator notice or an applicable scheduled maintenance window covers it.
  • All times on this page are UTC.
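
As referenced above, here is a minimal TypeScript sketch of the pass/fail rule for a single probe. The 10-second timeout is an illustrative assumption; this page only states that timeouts and network errors count as failed.

    // Probe one URL: pass when the HTTP status is below 500.
    // Timeouts and network errors count as failed.
    async function probe(url: string, timeoutMs = 10_000): Promise<boolean> {
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
        return res.status < 500; // 4xx responses still pass under this rule
      } catch {
        return false; // AbortError (timeout) or network failure
      }
    }

Note that a 4xx response still counts as passing under the status-below-500 rule, so the headline measures availability rather than correctness.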

Scheduled maintenance

No in-progress or upcoming maintenance windows are published right now.

Last deploy

Commit

0d7b477 Β· View commit on GitHub

Branch

production

Message

chore: minify gdpr.dev.js → uc.js, cb.dev.js → cb.js [skip ci]

100% uptime

We run these checks automatically on a schedule. Over the last 2 days (UTC), we stored 5000 runs, of which 4999 were fully up (4999 / 5000 = 99.98%; the headline figure is shown rounded). A run counts as fully up when every service responded normally and no active operator notice or scheduled maintenance applied at that moment.

All checks passing · Updated Apr 9, 2026, 9:20 AM UTC (stored)
  • Intastellar Consents

    https://www.intastellarconsents.com

    Recent checks: last 90 days (5000 samples)

    Response time: Min 17 ms · Max 756 ms · Latest 33 ms
    Latest check: HTTP 200 (33 ms)
  • Consents CDN (uc.js)

    https://consents.cdn.intastellarsolutions.com/uc.js

    Recent checks: last 90 days (5000 samples)

    Response time: Min 16 ms · Max 795 ms · Latest 31 ms
    Latest check: HTTP 200 (31 ms)
  • inta.dev - Developer Portal

    https://inta.dev/

    Recent checks: last 90 days (5000 samples)

    Response time: Min 25 ms · Max 2120 ms · Latest 35 ms
    Latest check: HTTP 200 (35 ms)
  • Intastellar Consents - analytics collect (health)

    https://analytics.intastellarsolutions.com/collect?health=1

    Recent checks: last 90 days (5000 samples)

    Timeline not fully clear (UTC): at least one failed check in this window.

    Response time: Min 62 ms · Max 3992 ms · Latest 279 ms
    Latest check: HTTP 200 (279 ms)

Operator notices

Updates posted by the team when we communicate an issue or follow-up (separate from automated probe history below).

  • Intastellar Consents (CMP) Dashboard - CORS Errors

    We have noticed some CORS errors with our APIs for the Intastellar Consents CMP platform, which are currently causing a blank page on the dashboard. We are working to fix this as fast as possible. Consent collection is still working & we do not see any problems with these endpoints.

    Updates

    • Update · Apr 6, 2026, 7:52 PM UTC · felix.schultz@intastellar.com · Identified → Resolved

      We fixed the issue: some request headers weren’t allowed, which caused the APIs to block those requests. Everything is resolved & the Intastellar Consents dashboard is up and running again.

    Monitors: Intastellar Consents

    Posted by felix.schultz@intastellar.com · Resolved

Incident log

Footnote: hosting and configuration

This page is public. The details below are for teams that deploy inta.dev (environment variables, data retention).

Configure targets with STATUS_CHECK_TARGETS_JSON (full replace) or STATUS_CHECK_EXTRA_JSON (append). A check counts as passing when the HTTP status is below 500. The incident log shows stored cron runs where any target failed in that window, including probe error text when saved.

Timelines, the incident log, and latency trends use the same rolling store: the last 2160 hours (90 days, UTC), up to 5000 samples per request, with a 14-day TTL in Mongo. The headline uptime percentage uses the same window: runs count as up only when every target passed and the run time is outside any operator notice or maintenance window that applies (globally or to those targets). Tune with STATUS_HISTORY_WINDOW_HOURS and STATUS_HISTORY_MAX_ROWS.

Times on this page are UTC. New history rows store per-target latencyMs; older rows still drive up/down segments until they expire. A sketch of the headline computation, under stated assumptions, follows.
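
To make the headline rule concrete, here is a hedged TypeScript sketch of the computation described above. The StoredRun and NoticeWindow shapes are illustrative assumptions (the real row format is not documented here); the rule itself, and the 2160-hour / 5000-row values read from STATUS_HISTORY_WINDOW_HOURS and STATUS_HISTORY_MAX_ROWS, come from this page.

    // Window settings; the 2160 h / 5000 row fallbacks mirror the values above.
    const windowHours = Number(process.env.STATUS_HISTORY_WINDOW_HOURS ?? 2160);
    const maxRows = Number(process.env.STATUS_HISTORY_MAX_ROWS ?? 5000);

    // Hypothetical shapes; the real stored-row format is not documented here.
    interface StoredRun {
      at: Date;                                       // run time (UTC)
      results: { target: string; passed: boolean }[]; // one entry per target
    }
    interface NoticeWindow {
      start: Date;
      end: Date | null;         // null = still active
      targets: string[] | null; // null = applies globally
    }

    // A run counts as up only when every target passed and the run time falls
    // outside every operator notice / maintenance window that applies to it.
    function isFullyUp(run: StoredRun, windows: NoticeWindow[]): boolean {
      const everyTargetPassed = run.results.every((r) => r.passed);
      const covered = windows.some((w) => {
        const t = w.targets;
        const applies = t === null || run.results.some((r) => t.includes(r.target));
        const inTime = run.at >= w.start && (w.end === null || run.at <= w.end);
        return applies && inTime;
      });
      return everyTargetPassed && !covered;
    }

    // Headline percentage over the rolling store, e.g. 4999 / 5000 = 99.98%.
    function headlineUptime(runs: StoredRun[], windows: NoticeWindow[]): number {
      const cutoff = Date.now() - windowHours * 3_600_000;
      const recent = runs.filter((r) => r.at.getTime() >= cutoff).slice(-maxRows);
      if (recent.length === 0) return 100;
      const up = recent.filter((r) => isFullyUp(r, windows)).length;
      return (up / recent.length) * 100;
    }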