Message-ID: <CAPcyv4iqA84WbE8pE=Yi15Q0T8JUXnPZss27HqhQTP08F9sTrQ@mail.gmail.com>
Date:   Fri, 30 Nov 2018 12:05:13 -0800
From:   Dan Williams <dan.j.williams@...el.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     linux-nvdimm <linux-nvdimm@...ts.01.org>, Jan Kara <jack@...e.cz>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] dax: Fix Xarray conversion of dax_unlock_mapping_entry()

On Fri, Nov 30, 2018 at 11:50 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Fri, Nov 30, 2018 at 09:01:07AM -0800, Dan Williams wrote:
> > On Fri, Nov 30, 2018 at 8:33 AM Dan Williams <dan.j.williams@...el.com> wrote:
> > >
> > > On Fri, Nov 30, 2018 at 8:24 AM Matthew Wilcox <willy@...radead.org> wrote:
> > > >
> > > > On Fri, Nov 30, 2018 at 07:54:49AM -0800, Dan Williams wrote:
> > > > > Looks good to me, although can we make that cookie an actual type? I
> > > > > think it's mostly ok to pass around (void *) for 'entry' inside of
> > > > > fs/dax.c, but once an entry leaves that file I'd like it to have an
> > > > > explicit type to catch people that might accidentally pass a (struct
> > > > > page *) to the unlock routine.
> > > >
> > > > That's a really good idea.  Something like this?
> > > >
> > > > typedef struct {
> > > >         void *v;
> > > > } dax_entry_t;
> > >
> > > Yes, please.
>
> Oh.  The caller needs to interpret it to see if the entry was successfully
> locked, so it needs to be an integer type (or we need a comparison
> function ... bleh).
>
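
A minimal sketch of the integer-cookie shape being discussed here; the
typedef and the "!cookie means not locked" convention are assumptions on
my part, since the include/linux/dax.h hunk is not shown in the patch
quoted below:

        /* Sketch only: an integer-typed cookie lets callers test for
         * failure directly, with 0 meaning "could not be locked".
         */
        typedef unsigned long dax_entry_t;

        dax_entry_t dax_lock_mapping_entry(struct page *page);
        void dax_unlock_mapping_entry(struct page *page, dax_entry_t cookie);

        /* Caller side: */
        dax_entry_t cookie = dax_lock_mapping_entry(page);
        if (!cookie)
                return;         /* entry could not be locked */
        /* ... inspect the page ... */
        dax_unlock_mapping_entry(page, cookie);
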
> > > > I could see us making good use of that within dax.c.
> >
> > I'm now thinking that this is a nice improvement for 4.21. For 4.20-rc
> > let's do the localized fix.
>
> I think both patches are equally risky.  I admit this patch crosses a
> file boundary, but the other patch changes dax_make_entry(), which is
> used by several functions that aren't part of this path, whereas this
> patch only changes functions used in the path that is known to be buggy.

I'm almost buying this argument... but I'd feel better knowing that
all dax_make_entry() usages are safe against this bug pattern. I
didn't audit the other occurrences of dax_make_entry() for this bug;
did you build some confidence there?
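
For reference, the pattern in question, paraphrased from the quoted
commit message rather than from an audit of fs/dax.c (the exact 4.20-rc
code may differ):

        /* dax_unlock_mapping_entry() before this patch, paraphrased:
         * the entry is rebuilt from the page instead of reusing the
         * value that was locked.
         */
        entry = xas_load(&xas);
        entry = dax_make_entry(page_to_pfn_t(page), dax_is_pmd_entry(entry));
        dax_unlock_entry(&xas, entry);
        /*
         * If 'page' is not the first page covered by a PMD entry, the
         * pfn encoded here differs from the one stored in the XArray,
         * so the value written back on unlock is wrong -- this is the
         * "PMD-aligned" assumption the commit message refers to.
         */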

>
> This patch has the advantage of getting us closer to where we want to be
> sooner.

One comment below...

>
>
> From 1135b8d08f23ab4f5b28261535a817f3de9297c9 Mon Sep 17 00:00:00 2001
> From: Matthew Wilcox <willy@...radead.org>
> Date: Fri, 30 Nov 2018 11:05:06 -0500
> Subject: [PATCH] dax: Change lock/unlock API
>
> Return the unlock cookie from dax_lock_mapping_entry() and
> pass it to dax_unlock_mapping_entry().  This fixes a bug where
> dax_unlock_mapping_entry() was assuming that the page was PMD-aligned
> if the entry was a PMD entry.
>
> Debugged-by: Dan Williams <dan.j.williams@...el.com>
> Fixes: 9f32d221301c ("dax: Convert dax_lock_mapping_entry to XArray")
> Signed-off-by: Matthew Wilcox <willy@...radead.org>
> ---
>  fs/dax.c            | 21 ++++++++-------------
>  include/linux/dax.h | 15 +++++++++------
>  mm/memory-failure.c |  6 ++++--
>  3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 9bcce89ea18e..d2c04e802978 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -351,20 +351,20 @@ static struct page *dax_busy_page(void *entry)
>   * @page: The page whose entry we want to lock
>   *
>   * Context: Process context.
> - * Return: %true if the entry was locked or does not need to be locked.
> + * Return: A cookie to pass to dax_unlock_mapping_entry() or 0 if the
> + * entry could not be locked.
>   */
> -bool dax_lock_mapping_entry(struct page *page)
> +dax_entry_t dax_lock_mapping_entry(struct page *page)
>  {
>         XA_STATE(xas, NULL, 0);
>         void *entry;
> -       bool locked;
>
>         /* Ensure page->mapping isn't freed while we look at it */
>         rcu_read_lock();
>         for (;;) {
>                 struct address_space *mapping = READ_ONCE(page->mapping);
>
> -               locked = false;
> +               entry = NULL;
>                 if (!dax_mapping(mapping))
>                         break;
>
> @@ -375,7 +375,7 @@ bool dax_lock_mapping_entry(struct page *page)
>                  * otherwise we would not have a valid pfn_to_page()
>                  * translation.
>                  */
> -               locked = true;
> +               entry = (void *)~0UL;
>                 if (S_ISCHR(mapping->host->i_mode))
>                         break;
>
> @@ -400,23 +400,18 @@ bool dax_lock_mapping_entry(struct page *page)
>                 break;
>         }
>         rcu_read_unlock();
> -       return locked;
> +       return (dax_entry_t)entry;
>  }
>
> -void dax_unlock_mapping_entry(struct page *page)
> +void dax_unlock_mapping_entry(struct page *page, dax_entry_t entry)

Let's not require the page to be passed back; it can be derived:

    page = pfn_to_page(dax_to_pfn((void *)entry));

That's a bit more symmetric, and consistent with other locking schemes
that return a cookie.
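
A minimal sketch of that cookie-only shape, assuming the static helpers
from the quoted patch (dax_to_pfn(), dax_unlock_entry()) and glossing
over the special (void *)~0UL "no lock needed" cookie used for the
S_ISCHR case:

        /* Sketch: unlock purely from the cookie; the page, and hence
         * the mapping and index, are derived rather than passed back.
         */
        void dax_unlock_mapping_entry(dax_entry_t cookie)
        {
                void *entry = (void *)cookie;
                struct page *page = pfn_to_page(dax_to_pfn(entry));
                struct address_space *mapping = page->mapping;
                XA_STATE(xas, &mapping->i_pages, page->index);

                /* A (void *)~0UL cookie would have to be filtered out
                 * before dereferencing it as an entry.
                 */
                dax_unlock_entry(&xas, entry);
        }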
