Date:   Mon, 17 Aug 2020 22:02:05 +0200
From:   Bartosz Golaszewski <brgl@...ev.pl>
To:     Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Cc:     Jonathan Cameron <jic23@...nel.org>,
        Hartmut Knaack <knaack.h@....de>,
        Lars-Peter Clausen <lars@...afoo.de>,
        Peter Meerwald-Stadler <pmeerw@...erw.net>,
        Michal Simek <michal.simek@...inx.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Guenter Roeck <linux@...ck-us.net>,
        linux-iio <linux-iio@...r.kernel.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Bartosz Golaszewski <bgolaszewski@...libre.com>
Subject: Re: [PATCH v7 1/3] devres: provide devm_krealloc()

On Mon, Aug 17, 2020 at 7:43 PM Andy Shevchenko
<andriy.shevchenko@...ux.intel.com> wrote:
>
> On Mon, Aug 17, 2020 at 07:05:33PM +0200, Bartosz Golaszewski wrote:
> > From: Bartosz Golaszewski <bgolaszewski@...libre.com>
> >
> > Implement the managed variant of krealloc(). This function works with
> > all memory allocated by devm_kmalloc() (or devres functions using it
> > implicitly like devm_kmemdup(), devm_kstrdup() etc.).
> >
> > Managed realloc'ed chunks can be manually released with devm_kfree().
>
> Thanks for the update! My comments / questions below.
>
> ...
>
> > +static struct devres *to_devres(void *data)
> > +{
> > +     return (struct devres *)((u8 *)data - ALIGN(sizeof(struct devres),
> > +                                                 ARCH_KMALLOC_MINALIGN));
>
> Do you really need both explicit castings?
>

Yeah, we can probably drop the (struct devres *) here.

> > +}
>
> ...
>
> > +     total_old_size = ksize(to_devres(ptr));
>
> But how can you guarantee this pointer:
>  - belongs to devres,

We can only check whether a chunk is dynamically allocated at all by
calling ksize() - it returns 0 if it isn't, and I'll add a check for
that in the next iteration. Whether it's actually a *managed* chunk is
verified later, after taking the lock.

>  - hasn't been freed while you run ksize()?
>

At some point you need to draw a line. In the end: how do you
guarantee that *any* devres buffer hasn't been freed while you're
using it? In my comment on the previous version of this patch I
clarified that we need to protect all modifications of the devres
linked list: we must not realloc a chunk that contains the list links
without taking the spinlock, but we also must not call alloc()
functions with GFP_KERNEL while holding the spinlock. The issue we
could run into is someone modifying the linked list by adding or
removing *other* managed resources - not modifying this one.

The way this function works now guarantees that. Other than that,
it's up to the users not to free memory they're actively using.

> ...
>
> > +     new_dr = alloc_dr(devm_kmalloc_release,
> > +                       total_new_size, gfp, dev_to_node(dev));
>
> Can you move some parameters to the previous line?
>

Why though? It's fine this way.

> > +     if (!new_dr)
> > +             return NULL;
>
> ...
>
> > +     spin_lock_irqsave(&dev->devres_lock, flags);
> > +
> > +     old_dr = find_dr(dev, devm_kmalloc_release, devm_kmalloc_match, ptr);
> > +     if (!old_dr) {
> > +             spin_unlock_irqrestore(&dev->devres_lock, flags);
> > +             devres_free(new_dr);
> > +             WARN(1, "Memory chunk not managed or managed by a different device.");
> > +             return NULL;
> > +     }
> > +
> > +     replace_dr(dev, &old_dr->node, &new_dr->node);
> > +
> > +     spin_unlock_irqrestore(&dev->devres_lock, flags);
> > +
> > +     memcpy(new_dr->data, old_dr->data, devres_data_size(total_old_size));
>
> But new_dr may be gone concurrently at this point, no? It means memcpy() should
> be done under the spin lock.
>

Just as I explained above: we're protecting the linked list, not the
resource itself.

Bartosz
