Message-ID: <eb70e71ba0b1caafb96e136d83aa9c097f2a6930.camel@nokia.com>
Date:   Mon, 19 Aug 2019 12:25:52 +0000
From:   "Rantala, Tommi T. (Nokia - FI/Espoo)" <tommi.t.rantala@...ia.com>
To:     "linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
        "anna.schumaker@...app.com" <anna.schumaker@...app.com>,
        "trond.myklebust@...merspace.com" <trond.myklebust@...merspace.com>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: nfs4 server stops responding

Hello,

I have two VMs. The first one exports some directories:
# cat /etc/exports
/mnt 192.168.1.0/24(ro,fsid=0,no_subtree_check,sync)
/mnt/export
192.168.1.0/24(rw,no_root_squash,sync,no_wdelay,no_subtree_check)
[...]
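
To reproduce, the exports can be (re)loaded on the server with plain
exportfs, nothing special:
# exportfs -ra
# exportfs -v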

And the NFS mount in the second VM:
# grep nfs /proc/mounts 
server:/export /mnt/export nfs4
rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,
acregmin=1,acregmax=1,acdirmin=1,acdirmax=1,hard,nordirplus,
proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.11,
local_lock=none,addr=192.168.1.10 0 0
[...]
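
The client mount command is roughly this (reconstructed from the
options above, so treat it as a sketch):
# mount -t nfs -o vers=4.2,acregmin=1,acregmax=1,acdirmin=1,acdirmax=1,nordirplus server:/export /mnt/export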

If I keep a file descriptor open for several minutes in the second VM,
for example by running this:
# sleep 10m >/mnt/export/test
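
The sleep itself shouldn't matter; the redirection just keeps the file
descriptor open for the duration. An equivalent way of holding the file
open would be e.g. (untested variant, only to illustrate the point):
# exec 3>/mnt/export/test; sleep 600; exec 3>&-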

The result is that the NFS mount stops responding: the sleep process
never finishes but is stuck "forever" in (killable) D state, and any I/O
attempted by other processes under /mnt/export never finishes either.
It's always reproducible with this sleep command.
To recover the mountpoint I need to reboot the second VM.

The kernel version is 5.3.0-rc4 in both VMs.
It's also reproducible with 4.14.x and 4.19.x.

# ps aux|grep sleep
root      2524  0.0  0.0   5900   688 pts/0    D    14:04   0:00 sleep 5m
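
Other stuck tasks and their wait channels can be listed with something
along these lines (the ps/awk filter is just an example):
# ps -eo pid,stat,wchan:32,args | awk 'NR==1 || $2 ~ /^D/'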

# grep -C100 nfs /proc/*/stack
/proc/2524/stack:[<0>] nfs4_do_close+0x87d/0xb20 [nfsv4]
/proc/2524/stack:[<0>] __put_nfs_open_context+0x297/0x4f0 [nfs]
/proc/2524/stack:[<0>] nfs_file_release+0xbe/0xf0 [nfs]
/proc/2524/stack-[<0>] __fput+0x1df/0x690
/proc/2524/stack-[<0>] task_work_run+0x123/0x1b0
/proc/2524/stack-[<0>] exit_to_usermode_loop+0x121/0x140
/proc/2524/stack-[<0>] do_syscall_64+0x2d1/0x370
/proc/2524/stack-[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
--
/proc/561/stack-[<0>] __rpc_execute+0x692/0xb10 [sunrpc]
/proc/561/stack-[<0>] rpc_run_task+0x45f/0x5d0 [sunrpc]
/proc/561/stack:[<0>] nfs4_call_sync_sequence+0x12a/0x210 [nfsv4]
/proc/561/stack:[<0>] _nfs4_proc_getattr+0x19a/0x200 [nfsv4]
/proc/561/stack:[<0>] nfs4_proc_getattr+0xda/0x230 [nfsv4]
/proc/561/stack:[<0>] __nfs_revalidate_inode+0x2ed/0x7a0 [nfs]
/proc/561/stack:[<0>] nfs_do_access+0x605/0xd00 [nfs]
/proc/561/stack:[<0>] nfs_permission+0x500/0x5e0 [nfs]
/proc/561/stack-[<0>] inode_permission+0x2dd/0x3f0
/proc/561/stack-[<0>] link_path_walk.part.60+0x681/0xe40
/proc/561/stack-[<0>] path_lookupat.isra.63+0x1af/0x850
/proc/561/stack-[<0>] filename_lookup.part.79+0x165/0x360
/proc/561/stack-[<0>] vfs_statx+0xb9/0x140
/proc/561/stack-[<0>] __do_sys_newstat+0x77/0xd0
/proc/561/stack-[<0>] do_syscall_64+0x9a/0x370
/proc/561/stack-[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
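
I can collect rpc/nfs debug traces around the hang if that would help,
e.g. with:
# rpcdebug -m rpc -s all
# rpcdebug -m nfs -s all
# ... reproduce ...
# rpcdebug -m rpc -c all
# rpcdebug -m nfs -c all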


NFS complaints are sometimes seen in dmesg on the second VM:

[  386.362897] nfs: server xyz not responding, still trying
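
A packet capture of the NFS traffic during the hang is also easy to
grab, e.g.:
# tcpdump -s 0 -w /tmp/nfs.pcap host 192.168.1.10 and port 2049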

Any ideas what's going wrong here...?

-Tommi
