Linux Mint Xfce 20.2.
I have a few questions about using WebHTTrack. The log file shows that the internet connection dropped a few times, so a few pages were not copied.
Code: Select all
Error: "Connect Time Out" (-2) after 2 retries at link www.assoc-amazon.com/robots.txt (from web-site.php)
12:45:38 Warning: Retry after error -2 (Connect Time Out) at link
Code: Select all
resume a partial download
I don't know whether I have to let the program run for a few hours again just for the three pages that were not copied, or whether there is a way to resume a partial download.
Does WebHTTrack completely skip the pages and files it cannot get and keep going? Does it retry immediately, or later on? Can I set it to retry more than once or twice?
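For what it's worth, the command-line httrack that WebHTTrack wraps seems to cover both cases; this is only a sketch based on the options listed in `man httrack`, so the exact flags are worth double-checking on your system:

Code: Select all
# Resume an interrupted mirror using the existing cache,
# re-fetching only what is missing (run from the project directory).
httrack --continue

# Update an existing mirror, raising the retry count from the
# default to 5 and allowing a 60-second connect timeout.
# --retries=N : retries on timeout or non-fatal errors
# --timeout=N : seconds before "Connect Time Out"
httrack --update --retries=5 --timeout=60

If the CLI options work, the corresponding fields in the WebHTTrack GUI (under the mirror options) should accept the same values, since the GUI is a front end to the same engine.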
Thanks.