Message-ID: <20200218151813.3yzzb2hzlmtbf5xg@box>
Date: Tue, 18 Feb 2020 18:18:13 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Qian Cai <cai@....pw>
Cc: Marco Elver <elver@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
syzbot <syzbot+c034966b0b02f94f7f34@...kaller.appspotmail.com>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>
Subject: Re: [PATCH -next] fork: annotate a data race in vm_area_dup()
On Tue, Feb 18, 2020 at 10:00:35AM -0500, Qian Cai wrote:
> On Tue, 2020-02-18 at 15:09 +0100, Marco Elver wrote:
> > On Tue, 18 Feb 2020 at 13:40, Qian Cai <cai@....pw> wrote:
> > >
> > >
> > >
> > > > On Feb 18, 2020, at 5:29 AM, Kirill A. Shutemov <kirill@...temov.name> wrote:
> > > >
> > > > I think I've got this:
> > > >
> > > > vm_area_dup() blindly copies all fields of the original VMA to the new
> > > > one. This includes copying vm_area_struct::shared.rb, which is normally
> > > > protected by i_mmap_lock. But this is fine because the copied value will
> > > > be overwritten by the following __vma_link_file() under proper protection.
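> > > >
> > > > For reference, the copy in question is roughly this (kernel/fork.c,
> > > > from memory; details may differ per tree):
> > > >
> > > >   struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> > > >   {
> > > >           struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
> > > >
> > > >           if (new) {
> > > >                   /* whole-struct copy, including shared.rb; the patch
> > > >                    * under discussion wraps this in data_race() */
> > > >                   *new = *orig;
> > > >                   INIT_LIST_HEAD(&new->anon_vma_chain);
> > > >           }
> > > >           return new;
> > > >   }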
> > >
> > > Right, multiple processes could share the same file-based address space, where those VMAs have been linked into address_space::i_mmap via vm_area_struct::shared.rb. Thus, the reader could see its shared.rb linkage pointers being updated by other processes.
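> > >
> > > For reference, the re-link is done under the i_mmap lock, roughly like
> > > this in mm/mmap.c (simplified; map_count accounting etc. omitted):
> > >
> > >   static void vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
> > >                        struct vm_area_struct *prev, struct rb_node **rb_link,
> > >                        struct rb_node *rb_parent)
> > >   {
> > >           struct address_space *mapping = NULL;
> > >
> > >           if (vma->vm_file) {
> > >                   mapping = vma->vm_file->f_mapping;
> > >                   i_mmap_lock_write(mapping);
> > >           }
> > >
> > >           __vma_link(mm, vma, prev, rb_link, rb_parent);
> > >           /* re-inserts the VMA into mapping->i_mmap, rewriting shared.rb */
> > >           __vma_link_file(vma);
> > >
> > >           if (mapping)
> > >                   i_mmap_unlock_write(mapping);
> > >   }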
> > >
> > > >
> > > > So the fix is correct, but the justification is lacking.
> > > >
> > > > Also, I would like a more fine-grained annotation: marking the whole
> > > > 200-byte copy with data_race() may hide other issues.
> > >
> > > That is the harder part; I don't think we have anything for that today. Marco, any suggestions? ASSERT_IGNORE_FIELD()?
> >
> > There is no nice interface I can think of. All options will just cause
> > more problems, inconsistencies, or annoyances.
> >
> > Ideally, to not introduce more types of macros and to keep things
> > consistent, ASSERT_EXCLUSIVE_FIELDS_EXCEPT(var, ...) may be what you're
> > after: "check that there are no concurrent writers to the struct, except
> > for the provided fields, which are ignored".
> >
> > This option doesn't quite work, unless you just restrict it to 1 field
> > (we can't use ranges, because e.g. vm_area_struct has
> > __randomize_layout). The next time around, you'll want 2 fields, and
> > it won't work. Also, do we know that 'shared.rb' is the only thing we
> > want to ignore?
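> >
> > Hypothetical usage would be something like (just a sketch; this macro
> > does not exist today):
> >
> >   /* hypothetical: complain about concurrent writers to any field of
> >    * *orig except shared.rb, then do the benign racy copy */
> >   ASSERT_EXCLUSIVE_FIELDS_EXCEPT(*orig, shared.rb);
> >   *new = data_race(*orig);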
> >
> > If you wanted something similar to ASSERT_EXCLUSIVE_BITS, it'd have to
> > be ASSERT_EXCLUSIVE_FIELDS(var, ...); however, this is quite annoying
> > for structs with many fields, as you'd have to list all of them. It's
> > similar to what you can already do today (but which I also don't
> > recommend, because it's unmaintainable):
> >
> > ASSERT_EXCLUSIVE_WRITER(orig->vm_start);
> > ASSERT_EXCLUSIVE_WRITER(orig->vm_end);
> > ASSERT_EXCLUSIVE_WRITER(orig->vm_next);
> > ASSERT_EXCLUSIVE_WRITER(orig->vm_prev);
> > ... and so on ...
> > *new = data_race(*orig);
> >
> > Also note that vm_area_struct has __randomize_layout, which makes
> > using ranges impossible. All in all, I don't see a terribly nice
> > option.
> >
> > If, however, you knew that there are only 1 or 2 fields that you want
> > to make sure are not modified concurrently, ASSERT_EXCLUSIVE_WRITER +
> > data_race() would probably work well (or even ASSERT_EXCLUSIVE_ACCESS
> > if you want to make sure there are neither writers nor _readers_).
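> >
> > E.g., if only a field or two really mattered, a sketch might look like
> > this (just an illustration, not from the actual patch):
> >
> >   /* only these fields are asserted; the rest of the copy is accepted as racy */
> >   ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
> >   ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
> >   *new = data_race(*orig);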
>
> I am testing an idea that just does:
>
> lockdep_assert_held_write(&orig->vm_mm->mmap_sem);
> *new = data_race(*orig);
>
> The idea is that as long as the exclusive mmap_sem is held in all paths
> (auditing indicated so), no writer should be able to mess up our vm_area_struct,
> except for the "shared.rb" field, where the race is harmless.
Well, some fields are protected by page_table_lock and can be written to
without the exclusive mmap_sem. Probably even without any mmap_sem: a pinned
mm_struct + page_table_lock should be enough.
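
One example of that pattern, simplified from __anon_vma_prepare() in
mm/rmap.c (details may vary by tree): vma->anon_vma is written under
page_table_lock while mmap_sem is typically only held for read:

	/* caller typically holds mmap_sem for read, not write */
	anon_vma_lock_write(anon_vma);
	spin_lock(&mm->page_table_lock);
	if (likely(!vma->anon_vma)) {
		/* vm_area_struct field written without exclusive mmap_sem */
		vma->anon_vma = anon_vma;
		anon_vma_chain_link(vma, avc, anon_vma);
	}
	spin_unlock(&mm->page_table_lock);
	anon_vma_unlock_write(anon_vma);
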
--
Kirill A. Shutemov