[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <1581345072.7365.30.camel@lca.pw>
Date: Mon, 10 Feb 2020 09:31:12 -0500
From: Qian Cai <cai@....pw>
To: Marco Elver <elver@...gle.com>
Cc: John Hubbard <jhubbard@...dia.com>, Jan Kara <jack@...e.cz>,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>, ira.weiny@...el.com,
Dan Williams <dan.j.williams@...el.com>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
kasan-dev <kasan-dev@...glegroups.com>
Subject: Re: [PATCH] mm: fix a data race in put_page()
On Mon, 2020-02-10 at 15:12 +0100, Marco Elver wrote:
> On Mon, 10 Feb 2020 at 14:55, Qian Cai <cai@....pw> wrote:
> >
> > On Mon, 2020-02-10 at 14:38 +0100, Marco Elver wrote:
> > > On Mon, 10 Feb 2020 at 14:36, Qian Cai <cai@....pw> wrote:
> > > >
> > > > On Mon, 2020-02-10 at 13:58 +0100, Marco Elver wrote:
> > > > > On Mon, 10 Feb 2020 at 13:16, Qian Cai <cai@....pw> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > On Feb 10, 2020, at 2:48 AM, Marco Elver <elver@...gle.com> wrote:
> > > > > > >
> > > > > > > Here is an alternative:
> > > > > > >
> > > > > > > Let's say KCSAN gives you this:
> > > > > > > /* ... Assert that the bits set in mask are not written
> > > > > > > concurrently; they may still be read concurrently.
> > > > > > > The access that immediately follows is assumed to access those
> > > > > > > bits and safe w.r.t. data races.
> > > > > > >
> > > > > > > For example, this may be used when certain bits of @flags may
> > > > > > > only be modified when holding the appropriate lock,
> > > > > > > but other bits may still be modified locklessly.
> > > > > > > ...
> > > > > > > */
> > > > > > > #define ASSERT_EXCLUSIVE_BITS(flags, mask) ....
> > > > > > >
> > > > > > > Then we can write page_zonenum as follows:
> > > > > > >
> > > > > > > static inline enum zone_type page_zonenum(const struct page *page)
> > > > > > > {
> > > > > > > + ASSERT_EXCLUSIVE_BITS(page->flags, ZONES_MASK << ZONES_PGSHIFT);
> > > > > > > return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
> > > > > > > }
> > > > > > >
> > > > > > > This will accomplish the following:
> > > > > > > 1. The current code is not touched, and we do not have to verify that
> > > > > > > the change is correct without KCSAN.
> > > > > > > 2. We're not introducing a bunch of special macros to read bits in various ways.
> > > > > > > 3. KCSAN will assume that the access is safe, and no data race report
> > > > > > > is generated.
> > > > > > > 4. If somebody modifies ZONES bits concurrently, KCSAN will tell you
> > > > > > > about the race.
> > > > > > > 5. We're documenting the code.
> > > > > > >
> > > > > > > Anything I missed?
> > > > > >
> > > > > > I don’t know. Having to write the same line twice does not feel me any better than data_race() with commenting occasionally.
> > > > >
> > > > > Point 4 above: While data_race() will ignore cause KCSAN to not report
> > > > > the data race, now you might be missing a real bug: if somebody
> > > > > concurrently modifies the bits accessed, you want to know about it!
> > > > > Either way, it's up to you to add the ASSERT_EXCLUSIVE_BITS, but just
> > > > > remember that if you decide to silence it with data_race(), you need
> > > > > to be sure there are no concurrent writers to those bits.
> > > >
> > > > Right, in this case, there is no concurrent writers to those bits, so I'll add a
> > > > comment should be sufficient. However, I'll keep ASSERT_EXCLUSIVE_BITS() in mind
> > > > for other places.
> > >
> > > Right now there are no concurrent writers to those bits. But somebody
> > > might introduce a bug that will write them, even though they shouldn't
> > > have. With ASSERT_EXCLUSIVE_BITS() you can catch that. Once I have the
> > > patches for this out, I would consider adding it here for this reason.
> >
> > Surely, we could add many of those to catch theoretical issues. I can think of
> > more like ASSERT_HARMLESS_COUNTERS() because the worry about one day someone
> > might change the code to use counters from printing out information to making
> > important MM heuristic decisions. Then, we might end up with those too many
> > macros situation again. The list goes on, ASSERT_COMPARE_ZERO_NOLOOP(),
> > ASSERT_SINGLE_BIT() etc.
>
> I'm sorry, but the above don't assert any quantifiable properties in the code.
>
> What we want is to be able to catch bugs that violate the *current*
> properties of the code *today*. A very real property of the code
> *today* is that nobody should modify zonenum without taking a lock. If
> you mark the access here, there is no tool that can help you. I'm
> trying to change that.
>
> The fact that we have bits that can be modified locklessly and some
> that can't is an inconvenience, but can be solved.
>
> Makes sense?
OK, go ahead adding it if you really feel like. I'd hope this is not the
Pandora's box where people will eventually find more way to assert quantifiable
properties in the code only to address theoretical issues...
>
> Thanks,
> -- Marco
>
> > On the other hand, maybe to take a more pragmatic approach that if there are
> > strong evidences that developers could easily make mistakes in a certain place,
> > then we could add a new macro, so the next time Joe developer wants to a new
> > macro, he/she has to provide the same strong justifications?
> >
> > >
> > > > >
> > > > > There is no way to automatically infer all over the kernel which bits
> > > > > we care about, and the most reliable is to be explicit about it. I
> > > > > don't see a problem with it per se.
> > > > >
> > > > > Thanks,
> > > > > -- Marco
Powered by blists - more mailing lists