Message-ID: <237a9064-78bb-1fe9-4293-1409a705d2d1@I-love.SAKURA.ne.jp>
Date: Tue, 2 May 2023 19:13:43 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Alexander Viro <viro@...iv.linux.org.uk>
Cc: akpm@...ux-foundation.org, hughd@...gle.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
syzkaller-bugs@...glegroups.com,
syzbot <syzbot+702361cf7e3d95758761@...kaller.appspotmail.com>,
Dmitry Vyukov <dvyukov@...gle.com>
Subject: Re: [syzbot] [mm?] KCSAN: data-race in generic_fillattr / shmem_mknod (2)
On 2023/05/01 23:05, Tetsuo Handa wrote:
>> Also, there was a similar report on updating i_{ctime,mtime} to current_time()
>> which means that i_size is not the only field that is causing data race.
>> https://syzkaller.appspot.com/bug?id=067d40ab9ab23a6fa0a8156857ed54e295062a29
>
> Do we want to wrap i_{ctime,mtime} using data_race() as well?
>
I think we need to use inode_lock_shared()/inode_unlock_shared() when calling
generic_fillattr(), because i_{ctime,mtime} (128 bits each) are too large to be
copied atomically.
Is it safe to call inode_lock_shared()/inode_unlock_shared() from generic_fillattr()?
Is some filesystem already holding the inode lock before calling generic_fillattr()
(in which case taking it again there would deadlock)?
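
As a concrete (but untested) sketch of what I mean, assuming we take the lock
around the generic_fillattr() call in shmem_getattr() rather than inside
generic_fillattr() itself (signatures are from a recent tree and may differ):

static int shmem_getattr(struct mnt_idmap *idmap,
			 const struct path *path, struct kstat *stat,
			 u32 request_mask, unsigned int query_flags)
{
	struct inode *inode = path->dentry->d_inode;

	/* ... existing shmem-specific handling of stat fields ... */

	/*
	 * Take the inode lock shared so that writers updating
	 * i_{ctime,mtime} under the exclusive lock (e.g. the
	 * directory timestamp updates done by shmem_mknod())
	 * cannot race with the non-atomic timestamp copies in
	 * generic_fillattr().
	 */
	inode_lock_shared(inode);
	generic_fillattr(idmap, inode, stat);
	inode_unlock_shared(inode);

	return 0;
}

This would serialize against paths that modify the inode under the exclusive
inode lock (directory modification such as mknod is done with the parent
directory locked), while still allowing concurrent stat() callers. But it only
helps if every writer of these fields actually holds the lock, hence the
questions above.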