The crawler should read a list of predefined domains from config and crawl pages on those domains. It should store an index file entry for each page it crawls.
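As a rough illustration of that flow only (not the project's actual code: the YAML config format, the `domains` key and every function name below are assumptions), something along these lines would read the domain list from config and fetch a page from each domain:

```python
# Illustrative sketch only - config format and names are assumptions,
# not details taken from this issue.
import requests
import yaml


def load_domains(path="config.yml"):
    """Read the list of predefined domains the crawler is allowed to visit."""
    with open(path) as f:
        return yaml.safe_load(f).get("domains", [])


def crawl(domains):
    """Fetch the root page of each configured domain, collecting the bodies
    so that index entries can later be calculated and stored for them."""
    pages = {}
    for domain in domains:
        url = f"https://{domain}/"
        resp = requests.get(url, timeout=10)
        if resp.ok:
            pages[url] = resp.text
    return pages


if __name__ == "__main__":
    print(crawl(load_domains()))
```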
Activity
28-Dec-23 12:11
assigned to @btasker
28-Dec-23 12:11
mentioned in issue #1
28-Dec-23 13:29
mentioned in commit 68b856ea787c19ad0cfc1bafd0b3a0e4fd959803

Commit: 68b856ea787c19ad0cfc1bafd0b3a0e4fd959803
Author: B Tasker
Date: 2023-12-28T13:06:14.000+00:00

Message

Start creating a crawler (utilities/file_location_listing#2)

This fetches a single page and calculates an index file entry for it.
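The commit doesn't describe what an index file entry contains, so purely as a hedged sketch of fetching one page and calculating an entry for it (the entry fields, the checksum choice and the helper names are all guesses, not the actual implementation):

```python
# Illustrative sketch only - the index entry format is an assumption.
import hashlib

import requests


def fetch_page(url):
    """Fetch a single page and return its body text (or None on failure)."""
    resp = requests.get(url, timeout=10)
    return resp.text if resp.ok else None


def build_index_entry(url, body):
    """Calculate an index file entry for a fetched page."""
    return {
        "url": url,
        "checksum": hashlib.sha256(body.encode("utf-8")).hexdigest(),
        "length": len(body),
    }


if __name__ == "__main__":
    url = "https://example.com/"
    body = fetch_page(url)
    if body:
        print(build_index_entry(url, body))
```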
28-Dec-23 16:39
mentioned in issue #3