Date:   Sun, 2 Aug 2020 13:42:00 +0300
From:   Andy Shevchenko <>
To:     Bartosz Golaszewski <>
Cc:     Jonathan Cameron <>,
        Hartmut Knaack <>,
        Lars-Peter Clausen <>,
        Peter Meerwald-Stadler <>,
        Michal Simek <>,
        Greg Kroah-Hartman <>,
        Guenter Roeck <>,
        Andy Shevchenko <>,
        linux-iio <>,
        linux-arm Mailing List <>,
        Linux Kernel Mailing List <>,
        Bartosz Golaszewski <>
Subject: Re: [PATCH v6 1/3] devres: provide devm_krealloc()

On Sun, Aug 2, 2020 at 11:37 AM Bartosz Golaszewski <> wrote:
> From: Bartosz Golaszewski <>
> Implement the managed variant of krealloc(). This function works with
> all memory allocated by devm_kmalloc() (or devres functions using it
> implicitly like devm_kmemdup(), devm_kstrdup() etc.).
> Managed realloc'ed chunks can be manually released with devm_kfree().

Some thoughts below. You may ignore nit-picks, of course :-)


> + * Managed krealloc(). Resizes the memory chunk allocated with devm_kmalloc().
> + * Behaves similarly to regular krealloc(): if @ptr is NULL or ZERO_SIZE_PTR,

> + * it's the equivalent of devm_kmalloc(). If new_size is zero, it frees the

equivalent for

> + * previously allocated memory and returns ZERO_SIZE_PTR. This function doesn't
> + * change the order in which the release callback for the re-alloc'ed devres
> + * will be called (except when falling back to devm_kmalloc() or when freeing
> + * resources when new_size is zero). The contents of the memory are preserved
> + * up to the lesser of new and old sizes.

This might deserve a note about pointers to RO data, but see below.


> +       if (WARN_ON(is_kernel_rodata((unsigned long)ptr)))
> +               /*
> +                * We cannot reliably realloc a const string returned by
> +                * devm_kstrdup_const().
> +                */
> +               return NULL;

I was thinking about this bit... Shouldn't we rather issue a simple
dev_warn() and return the existing pointer?
For example, in some cases we might want to have resources coming
either from the heap or from a constant. Then, if under some
circumstances we would like to extend that memory (only for the
non-constant cases), we would need to manage this ourselves. It seems
that devm_kstrdup_const() returns the initial pointer for rodata, so
getting NULL back is kinda inconvenient (and actually dev_warn() might
also be quite noisy, however I would give a message to the user,
because it's something worth checking).
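To illustrate what I mean, a rough (untested) sketch, reusing the names
from the patch:

```c
	if (is_kernel_rodata((unsigned long)ptr)) {
		/*
		 * We cannot reliably realloc a const string returned by
		 * devm_kstrdup_const(), but handing back the original
		 * pointer keeps callers working in the mixed
		 * heap-or-constant case instead of forcing them to
		 * special-case rodata themselves.
		 */
		dev_warn(dev, "refusing to realloc const memory\n");
		return (void *)ptr;
	}
```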


> +       spin_lock_irqsave(&dev->devres_lock, flags);
> +       old_dr = find_dr(dev, devm_kmalloc_release, devm_kmalloc_match, ptr);
> +       spin_unlock_irqrestore(&dev->devres_lock, flags);

> +       if (!old_dr) {

I would keep this under the spin lock, because of the below.

> +               WARN(1, "Memory chunk not managed or managed by a different device.");
> +               return NULL;
> +       }

> +       old_head = old_dr->node.entry;

This would be still better to be under spin lock.

> +       new_dr = krealloc(old_dr, total_size, gfp);
> +       if (!new_dr)
> +               return NULL;

And perhaps spin lock taken already here.

> +       if (new_dr != old_dr) {
> +               spin_lock_irqsave(&dev->devres_lock, flags);
> +               list_replace(&old_head, &new_dr->node.entry);
> +               spin_unlock_irqrestore(&dev->devres_lock, flags);
> +       }

Yes, I understand that covering more code with the spin lock does not
fix any potential race, but at least it minimizes the scope of the code
that is not under it, so we can see exactly what is problematic.

I probably will think more about a better approach to avoid potential races.

With Best Regards,
Andy Shevchenko
