ArchiveBox now stores schedules in the database and lets the orchestrator materialize them into queued Crawl records at the right time. You no longer need host cron, user crontabs, or a separate `archivebox_scheduler` container when `archivebox server` is running.
## How It Works
1. `archivebox schedule ...` creates a CrawlSchedule record plus a sealed template Crawl.
2. The long-running global orchestrator inside `archivebox server` watches enabled schedules.
3. When a schedule becomes due, the orchestrator creates a new queued Crawl.
4. That queued Crawl is processed the same way as UI/API-submitted work.
One-shot foreground flows such as `archivebox add ...` continue to process only the crawl they were asked to run; they do not also sweep up and execute unrelated scheduled crawls.
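Assuming the `--every` and `--depth` flags behave as in prior releases (and with a placeholder feed URL), creating a schedule that feeds this flow looks like:

```shell
# Creates a CrawlSchedule record plus a sealed template Crawl.
# The orchestrator inside `archivebox server` enqueues a fresh
# queued Crawl from the template each time the schedule is due.
archivebox schedule --every=day --depth=1 'https://example.com/rss/feed.xml'
```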
- `archivebox schedule --run-all` enqueues every enabled schedule immediately.
- `archivebox schedule --foreground` runs the global orchestrator in the foreground, which is useful outside `archivebox server` if you want a dedicated long-running scheduler/worker process without the web UI.
- Running `archivebox schedule --every=day` with no `import_path` creates a recurring maintenance schedule that queues `archivebox://update` crawls.
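The three variants above can be exercised directly (flag names as given above):

```shell
# Enqueue every enabled schedule right now, regardless of due time:
archivebox schedule --run-all

# Run the global orchestrator in the foreground (no web UI), e.g. as a
# dedicated scheduler/worker process:
archivebox schedule --foreground

# Recurring maintenance: with no import_path, this queues
# archivebox://update crawls once a day:
archivebox schedule --every=day
```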
## Docker Compose
With the new orchestrator flow, you only need the main `archivebox` service:
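A minimal sketch of such a compose file (image tag, port, and volume path are assumptions; adjust to your setup):

```yaml
services:
  archivebox:
    image: archivebox/archivebox:latest
    command: server --quick-init 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - ./data:/data
```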
Create schedules with:
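For example, a daily feed import created through the compose project (the feed URL is a placeholder):

```shell
# Runs `archivebox schedule` inside the compose project; the already-running
# server container's orchestrator will execute the schedule when it is due.
docker compose run archivebox schedule --every=day --depth=1 'https://example.com/feed.rss'
```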
If the main `archivebox server` container is already running, its orchestrator will pick up future scheduled runs automatically. There is no scheduler sidecar to restart.
## Examples
Archive a Twitter mirror once a week:
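A sketch of such a schedule, assuming a mirror frontend like Nitter (the hostname and account are placeholders):

```shell
# Weekly crawl of a Twitter mirror; --depth=1 also saves pages linked
# from the timeline itself:
archivebox schedule --every=week --depth=1 'https://nitter.net/exampleuser'
```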
Archive a subreddit and linked discussions once a week:
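A sketch using the subreddit's RSS feed (the subreddit name is a placeholder); `--depth=1` follows links out of the feed to the discussions themselves:

```shell
# Weekly crawl of a subreddit feed plus the pages it links to:
archivebox schedule --every=week --depth=1 'https://www.reddit.com/r/example/.rss'
```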