Message-ID: <20190109023832.GA12389@nautica>
Date: Wed, 9 Jan 2019 03:38:32 +0100
From: Dominique Martinet <asmadeus@...ewreck.org>
To: Hou Tao <houtao1@...wei.com>
Cc: v9fs-developer@...ts.sourceforge.net, lucho@...kov.net,
ericvh@...il.com, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, xingaopeng@...wei.com
Subject: Re: [PATCH] 9p: use inode->i_lock to protect i_size_write()
Hou Tao wrote on Wed, Jan 09, 2019:
> Use inode->i_lock to protect i_size_write(), else i_size_read() in
> generic_fillattr() may loop infinitely when multiple processes invoke
> v9fs_vfs_getattr() or v9fs_vfs_getattr_dotl() simultaneously under
> 32-bit SMP environment, and a soft lockup will be triggered as shown below:
Hmm, I'm not familiar with the read/write seqcount code for 32-bit, but I
don't understand how locking here helps besides slowing things down (if the
value is constantly being updated, the reading thread now has a better
chance of getting scheduled in between two updates than it had before, and
that "solves" your soft lockup).
Instead, a better fix would be to update v9fs_stat2inode to first read
the inode size and only call i_size_write() if it changed - I'd bet this
also fixes the problem, and it looks better to me than locking.
(Can also probably reuse stat->length instead of the following
i_size_read for i_blocks...)
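Something along these lines, i.e. an untested sketch of the idea using the
stat->length that v9fs_stat2inode already has in hand (not a formal patch):

	/* only touch i_size when it actually changed, so concurrent
	 * 32-bit i_size_read() callers don't keep retrying the
	 * seqcount section for a no-op update */
	if (stat->length != i_size_read(inode))
		i_size_write(inode, stat->length);

	/* not the real number of blocks, but 512-byte ones */
	inode->i_blocks = (stat->length + 512 - 1) >> 9;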
On the other hand, it might make sense to also lock the inode for
stat2inode because we're dealing with partially updated inodes at times;
but if we do this I'd rather put the locking in v9fs_stat2inode itself,
not outside of it, so we catch all the places where it's used. Then
again, the readers don't lock, so I'm not sure it makes much sense.
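If we did go that way, I'm thinking of something like this (sketch only,
just to show where the lock would sit, with the existing field updates
kept as-is inside):

	void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
			     struct super_block *sb)
	{
		spin_lock(&inode->i_lock);

		/* ... all the existing field updates ... */
		if (stat->length != i_size_read(inode))
			i_size_write(inode, stat->length);

		spin_unlock(&inode->i_lock);
	}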
There's also a window during which the inode's nlink is dropped down to
1 then set again appropriately if the extension is present; that's
rather ugly and we probably should only reset it to 1 if the attribute
wasn't set before... That can be another patch and/or I'll do it
eventually if you don't.
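The shape I have in mind is roughly this (hand-waved sketch; the
HARDLINKCOUNT parsing is folded into a hypothetical helper to keep it
short):

	unsigned int nlink = 0;

	/* only fall back to nlink == 1 when the extension doesn't give
	 * us a link count, instead of resetting to 1 first and fixing
	 * it up afterwards */
	if (v9fs_proto_dotu(v9ses) && stat->extension[0] != '\0')
		nlink = parse_hardlink_count(stat->extension); /* hypothetical */

	set_nlink(inode, nlink ? nlink : 1);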
I hope what I said makes sense.
Thanks,
--
Dominique Martinet | Asmadeus