Message-ID: <CAH2r5mv6KN+3_wjfAjawA8yfqGefA2fa_HkbU7YMovGg_jkYEQ@mail.gmail.com>
Date: Fri, 11 Jan 2019 00:13:10 -0600
From: Steve French <smfrench@...il.com>
To: LKML <linux-kernel@...r.kernel.org>
Subject: Fwd: scp bug due to progress indicator when copying from remote to
local on Linux
---------- Forwarded message ---------
From: Steve French <smfrench@...il.com>
Date: Fri, Jan 11, 2019 at 12:11 AM
Subject: scp bug due to progress indicator when copying from remote to
local on Linux
To: CIFS <linux-cifs@...r.kernel.org>, linux-fsdevel
<linux-fsdevel@...r.kernel.org>
Cc: CIFS <linux-cifs@...r.kernel.org>, Pavel Shilovsky <piastryyy@...il.com>
While discussing an interesting scp problem with Pavel recently, we
found a fairly obvious bug in scp when it is run with a progress
indicator (the default when the source file is remote).

scp triggers SIGALRM, probably from update_progress_meter() in
progressmeter.c, when executed as e.g. "scp localhost:somelargefile /mnt",
i.e. with an ssh source path but a local target path, whenever the
flush of a large amount of cached data to disk (which seems to be
triggered by the ftruncate call in scp.c) takes more than a few
seconds (which can be common, depending on disk or network speed).
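
To illustrate the mechanics (a standalone sketch of my own, not scp's
actual code, and the path and size below are just placeholders), a
periodic SIGALRM installed without SA_RESTART will interrupt a
blocking ftruncate with EINTR whenever the call takes longer than the
timer interval:

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

/* Empty handler: its only effect is to interrupt blocking syscalls,
   roughly like a once-a-second progress-meter alarm. */
static void on_alarm(int sig) { (void)sig; }

int main(int argc, char **argv)
{
	/* Placeholders: point this at a file on a mount (e.g. cifs)
	   where writeback is slow, and pick a size in MB. */
	const char *path = argc > 1 ? argv[1] : "/mnt/somelargefile";
	long long mb = argc > 2 ? atoll(argv[2]) : 1024;
	static char buf[1024 * 1024];
	struct sigaction sa;
	struct itimerval it = { { 1, 0 }, { 1, 0 } };
	long long i;
	int fd;

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Fill the page cache with dirty data. */
	for (i = 0; i < mb; i++)
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			return 1;
		}

	/* Now arm a 1-second periodic SIGALRM, roughly like the
	   progress meter does, before truncating. */
	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_alarm;          /* note: no SA_RESTART */
	sigaction(SIGALRM, &sa, NULL);
	setitimer(ITIMER_REAL, &it, NULL); /* SIGALRM every second */

	/* If the truncate has to wait more than a second for the
	   cached data to be flushed, it can fail with EINTR instead
	   of being restarted. */
	if (ftruncate(fd, (off_t)mb * sizeof(buf)) < 0)
		fprintf(stderr, "ftruncate: %s\n", strerror(errno));
	close(fd);
	return 0;
}

If the flush takes long enough, the ftruncate here should report
"Interrupted system call", which looks like the same failure scp hits.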
Interestingly, this can easily be avoided by running in quiet mode
("-q"), which disables the progress meter in scp. But it seems very
broken that scp of a large file can 'fail' when the progress meter is
enabled (unless caching were disabled on the file system), simply
because ftruncate is briefly delayed while it triggers a sequential
flush of a large amount of cached data to disk.
Any thoughts on whether scp is actively maintained, and on the best
approach to fixing progressmeter.c so it doesn't break on Linux?
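
One way to make scp itself more robust (just a sketch on my part, not
a patch against the OpenSSH tree, and the helper name is made up)
would be to retry the interrupted call instead of treating EINTR as
fatal:

/* Hypothetical helper: retry ftruncate() when it is interrupted
   by the progress-meter alarm. */
static int ftruncate_eintr(int fd, off_t len)
{
	int r;

	do {
		r = ftruncate(fd, len);
	} while (r == -1 && errno == EINTR);
	return r;
}

An alternative might be to install the SIGALRM handler with
SA_RESTART, though I have not checked whether anything else in scp
relies on the alarm interrupting blocking calls.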
--
Thanks,
Steve