Message-ID: <20170811104250.GV25347@redhat.com>
Date: Fri, 11 Aug 2017 12:42:50 +0200
From: Andrea Arcangeli <aarcange@...hat.com>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: mhocko@...nel.org, akpm@...ux-foundation.org, kirill@...temov.name,
oleg@...hat.com, wenwei.tww@...baba-inc.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm, oom: fix potential data corruption when
oom_reaper races with writer
On Fri, Aug 11, 2017 at 12:22:56PM +0200, Andrea Arcangeli wrote:
> disk block? This would happen on ext4 as well if mounted with -o
> journal=data instead of -o journal=ordered in fact, perhaps you simply

Oops, above I meant journal=writeback; journal=data is even stronger
than journal=ordered of course.

And I shall clarify further that old disk content can only show up
legitimately on journal=writeback after a hard reboot, a crash or in
general an unclean unmount. Even if there's no journaling at all
(e.g. ext2/vfat), old disk content cannot show up at any given time,
no matter what, as long as there's no unclean unmount that requires a
journal replay.
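
(Purely as an illustrative aside, a hypothetical sketch just to make
the above concrete: these journaling modes are the ext4 "data=" mount
options, so a test filesystem can be mounted in a given mode
explicitly with mount(2); the device and mountpoint names below are
made up.)

/*
 * Hypothetical sketch: mount a test ext4 fs with an explicit
 * journaling mode.  "/dev/sdb1" and "/mnt/test" are made-up names.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/*
	 * data=ordered (the default): data blocks reach disk before
	 * the metadata that references them is committed, so stale
	 * disk content cannot be exposed even across a crash.
	 *
	 * data=writeback: data may reach disk after the metadata, so
	 * stale block content can show up, but only after a crash or
	 * other unclean unmount followed by a journal replay.
	 */
	if (mount("/dev/sdb1", "/mnt/test", "ext4", 0, "data=writeback")) {
		perror("mount");
		return 1;
	}
	return 0;
}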

This theory of a completely unrelated fs bug showing you disk content
as a result of the OOM-reaper-induced SIGBUS interrupting a
copy_from_user at its very start is purely motivated by the fact
that, like Michal, I didn't see much explanation on the VM side that
could cause those non-zero, non-0xff values to show up in the buffer
of the write syscall. You can try to change the filesystem and see if
it happens again, to rule it out. If it always happens regardless of
the filesystem used, then of course it's likely not a fs bug. You've
got an entire, aligned 4k fs block showing up with that data.
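
To make that concrete, here is a hypothetical userspace checker (not
part of any reproducer in this thread; it assumes the test writer
filled the file with a single known pattern byte) that could be run
against files produced on the different filesystems:

/*
 * Hypothetical checker: scan a file in 4k blocks and flag any block
 * whose content is neither the pattern assumed to have been written,
 * nor all zeroes, nor all 0xff.  Running it on files produced on
 * different filesystems would help rule the fs in or out.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLK	4096
#define PATTERN	0xaa	/* assumed: the byte the test writer used */

int main(int argc, char **argv)
{
	unsigned char buf[BLK];
	off_t off = 0;
	ssize_t got, i;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	while ((got = read(fd, buf, BLK)) > 0) {
		int zero = 1, ff = 1, pat = 1;

		for (i = 0; i < got; i++) {
			zero &= buf[i] == 0x00;
			ff   &= buf[i] == 0xff;
			pat  &= buf[i] == PATTERN;
		}
		/*
		 * A block that is neither the written pattern, nor
		 * zeroes, nor 0xff would be the kind of unexplained
		 * content discussed above.
		 */
		if (!zero && !ff && !pat)
			printf("suspect 4k block at offset %lld\n",
			       (long long)off);
		off += got;
	}
	close(fd);
	return 0;
}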