It's probably worth kicking together a small REPL to trigger (re)processing for a URL.
If I've published content at abcd, the cronjob will process it when it appears in my RSS feed.
But... if I later come back and update that content, any new links won't get archived because the RSS feed won't change (nor would we really want it to).
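As a rough sketch of what a manual (re)processing tool would need to do: fetch the page, extract its outbound links, and submit each for preservation. The names below (`LinkExtractor`, `extract_links`) are hypothetical illustrations, not the project's actual code.

```python
# Hypothetical sketch: collect the links on a page so they can be
# re-submitted for archiving outside the RSS-driven cronjob.
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    """Return all anchor hrefs found in html, as absolute URLs."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links
```

Each returned URL would then be passed to whatever the existing pipeline uses to trigger preservation.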
Activity
13-Aug-24 12:04
assigned to @btasker
22-Aug-24 07:18
mentioned in commit 6b6dcc47f3d244089defbea76e38f9fd9752fa0f
Message
feat: add CLI to manually preserve a URL and sublinks (utilities/auto-blog-link-preserver#20)
There are some niceties which need to be added before release though
22-Aug-24 07:30
This is a basic validation and needs at least the following added:
I think it should also be moved under a subcommand. That way, rather than having lots of scripts dotted about, we'll have a single script that supports different operations.
23-Aug-24 07:22
mentioned in commit 1c2fd76e2ad52de9f12d13ed221fe2b94294f301
Message
feat: move addition to a subcommand (utilities/auto-blog-link-preserver#20)
Invocation is
./preserve.py submit https://example.com
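A minimal subcommand layout matching that invocation could be built with `argparse` along these lines (a sketch only, assuming nothing about the real `preserve.py` internals; the handler is a placeholder):

```python
# Sketch of a subcommand structure matching `./preserve.py submit <url>`.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog="preserve.py")
    # requiring a subcommand leaves room for more operations later
    subparsers = parser.add_subparsers(dest="command", required=True)

    submit = subparsers.add_parser("submit", help="preserve a URL and its sublinks")
    submit.add_argument("url", help="the page to (re)process")
    return parser


# Example: parse the invocation from the comment above
args = build_parser().parse_args(["submit", "https://example.com"])
```

New operations can then be added as further `add_parser` calls without introducing more standalone scripts.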