FS#43434 - Pacman - Negative Speed in progress bar
Attached to Project:
Pacman
Opened by Gagandeep Singh (Code) - Tuesday, 13 January 2015, 16:09 GMT
Last edited by Allan McRae (Allan) - Saturday, 14 January 2017, 08:00 GMT
Details
Summary and Info: Please see the attached screenshot.
Steps to Reproduce: Please see the attached screenshot.
This task depends upon
Closed by Allan McRae (Allan)
Saturday, 14 January 2017, 08:00 GMT
Reason for closing: Fixed
Additional comments about closing: git commit e83e868a7786
In the first screenshot you can clearly see that when the first mirror fails (the speed drops to 1 B/s), pacman switches to the next mirror, and before it reaches full speed it shows a negative speed. This happened because several other programs running on my system were using all the bandwidth and pacman got almost none; the result is what the screenshot shows.
If the first mirror fails, pacman starts over with the second and so on, but on every new attempt with a new server the negative download speed appears, from mirror #2 onward.
It does not happen with the first mirror, only from the second onward, perhaps because the first mirror is where pacman starts the download while the second is where it resumes.
It also happens if I list the same mirror several times: from the second entry onward the transfer starts with the negative speed as described.
When the download is resumed, after the negative part, pacman assumes that the remaining size is 100% of the package; it therefore resumes from the proper byte offset but displays it as 0%.
Let me give an example:
Downloading razor-qt (never mind the package, this is just an example):
it stops at 50%, i.e. 5 kB of 10 kB
pacman then tries the next mirror
the negative download speed appears
the negative value starts from about -5 kB
every refresh shows a smaller magnitude (-4 kB, -1 kB, 234 B)
then it starts again from 0% with 0 bytes downloaded
at 50% it has downloaded 2.5 kB
at 100% it has downloaded 5 kB
the package is then complete, since the first part before the failure was 5 kB (50%) and the second part was 5 kB (reported as 100%)
the checksums are correct
It looks like pacman counts 0-100% as the size still to be downloaded from the current mirror to complete the package, instead of the size of the full package regardless of what remains, as in every version prior to 4.1 (showing this is a regression).
Looking at the code, I think the negative values slip in through the rate_last averaging loop.
I'm attaching a patch that would limit the rate to 0.0 before the value is copied to rate_last.
The rate calculation is:
rate = (double)(xfered - xfered_last) / (timediff / 1000.0);
To be negative, xfered must be less than xfered_last. We need to figure out where this is happening.
Any chance you can add some debug statements to print the values so we can track this down?
error: failed retrieving file '<file>' from <mirror> : Operation too slow. Less than 1 bytes/sec transferred the last 10 seconds
xfered_last: 14482997; xfered: 296538;
file_xfered: 296538; list_xfered: 0;
timediff: 200
Apparently, resuming a download and continuing from offset 0 is what creates the negative difference.
EDIT: looking a bit more into this, collecting a few facts:
* there is a call mode that resets cb_dl_progress, using file_xfered=0 and file_total=-1.
* there is a totaldownload mode, which does not allow itself to be reset.
Should there be another static variable that stores the starting total value for when a totaldownload download is reset?
EDIT 2: this whole deal with static variables just doesn't cut it in the long run. I'm not sure what alternatives there are right now, but scoping like this will just keep giving grief.
A simple solution would be to reset the downloaded amount when switching mirrors. A proper fix would be to figure out this download-resumption issue...
The dynamics of calling _alpm_download() are not handled at all by _alpm_dload_payload_reset() (be_sync.c:242 does it, but sync.c:950 and following do not).
We might add the suggested line above, or integrate this handling into _alpm_dload_payload_reset() properly.
payload->initial_size += payload->prevprogress;
Adding those two lines seems a proper fix to me.