Message-ID: <560D505C.6000403@gmail.com>
Date: Thu, 1 Oct 2015 11:25:16 -0400
From: Austin S Hemmelgarn <ahferroin7@...il.com>
To: "sascha a." <sascha.arthur@...il.com>
Cc: linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org
Subject: Re: NFS / FuseFS Kernel Hangs Bug
On 2015-10-01 10:45, sascha a. wrote:
> Hello,
>
> Okay, I was wrong about FUSE and NFS, thanks for the hint.
>
> About the problem:
> Without digging deep into the kernel sources, your explanation is
> more or less what I was thinking is happening.
> Anyway, the reason I reported the problem is that during these 120
> seconds (until the kernel resolves the issue by killing (?) the
> process) the system is unusable.
>
> What I mean by that:
> It's not even possible to ssh into the server, even though /root and
> /home are local and should not be affected by the slow NFS servers.
> It also seems that during this period a lot of network connections drop/freeze(?).
>
> You're completely right when you say there's no other way/it's by design
> to wait for the NFS response. But from my point of view this 'wait' is
> happening on the wrong level. If I'm not wrong, the current
> implementation blocks/hangs tasks in kernel space, or at least blocks
> the scheduler during this period.
If it's a single-core system and the kernel is configured with
PREEMPT_NONE, I could see that being possible, but I think Debian
(assuming you're using Debian, because the /proc/version info you
posted in the original e-mail indicated the kernel was a Debian build)
builds with PREEMPT_VOLUNTARY by default. I have little knowledge of
the internal workings of the kernel's NFS client implementation, so I'm
Cc'ing the associated mailing list.
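If you want to check which preemption model your kernel was actually
built with, the shipped kernel config should tell you (assuming your
distribution installs it under /boot, as Debian does; some kernels
expose it at /proc/config.gz instead):

    grep PREEMPT /boot/config-$(uname -r)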
If you don't need guaranteed data safety, you might try adding 'async'
to the share options on the NFS server in /etc/exports. Keep in mind
though that using this carries a significant risk of data loss if the
NFS server crashes (from the client's perspective, it will look as
though calls to write() that returned success never happened), so you
should only do this if you are 100% certain that you can afford the
potential data loss. Also keep in mind that this violates the NFS
specification, so some third-party software may lose its mind when used
with it.
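For example, a hypothetical export line (the path and client
specification here are placeholders, not taken from your setup):

    /srv/share    192.168.1.0/24(rw,async,no_subtree_check)

After editing /etc/exports, running 'exportfs -ra' on the server will
re-export the shares with the new options.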
If you're just using NFSv3, you may also try using UDP instead of TCP
(this actually doesn't hurt reliability much on a reliable network).
I've seen doing so more than double performance and cut server load in
half in some cases, because maintaining a TCP connection is fairly
network- and processor-intensive; TCP was designed for reliable
communication across unreliable networks.
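For example, taking the mount command from your original mail and just
swapping the transport (same server and mount point assumed):

    mount -t nfs -o vers=3,nfsvers=3,hard,intr,udp server /dest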
> 2015-10-01 16:24 GMT+02:00 Austin S Hemmelgarn <ahferroin7@...il.com>:
>> On 2015-10-01 09:06, sascha a. wrote:
>>>
>>> Hello,
>>>
>>>
>>> I want to report a bug with NFS / FuseFS.
>>>
>>> There's trouble with mounting an NFS FS with FuseFS if the NFS server
>>> is responding slowly.
>>>
>>> The problem occurs if you mount an NFS FS with the FuseFS driver, for
>>> example with this command:
>>>
>>> mount -t nfs -o vers=3,nfsvers=3,hard,intr,tcp server /dest
>>>
>>> Working on this NFS mount works like a charm as long as the NFS
>>> server is not under heavy load. If it gets under HEAVY load, from time
>>> to time the kernel hangs (which should, in my opinion, never ever
>>> occur).
>>
>> OK, before I start on an explanation of why what is happening is happening,
>> I should note that unless you're using some special FUSE driver instead of
>> the regular NFS tools, you're not using FUSE to mount the NFS share, you're
>> using a regular kernel driver.
>>
>> Now, on to the explanation:
>> This behavior is expected and unavoidable for any network filesystem under
>> the described conditions. Sync (or any other command that causes access to
>> the filesystem that isn't served by the local cache) requires sending a
>> command to the server. Sync in particular is _synchronous_ (and it should
>> be, otherwise you break the implied data safety from using it), which means
>> that it will wait until it gets a reply from the server before it returns,
>> which means that if the server is heavily loaded (or just ridiculously
>> slow), it will be a while before it returns. On top of this, depending on
>> how the server is caching data, it may take a long time to return even on a
>> really fast server with no other load.
>>
>> The stacktrace you posted indicates simply that the kernel noticed that
>> 'sync' was in an I/O sleep state (the 'D state' it refers to) for more than
>> 120 seconds, which is the default detection timeout for this.
>>
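For reference: that 120-second threshold is the hung task detector's
default, and on kernels built with CONFIG_DETECT_HUNG_TASK it can be
read or changed at runtime via sysctl, e.g.:

    sysctl kernel.hung_task_timeout_secs

Setting it to 0 silences the warning; it does not change how long the
I/O actually blocks.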