Message-ID: <a6a7d0a7-d8a9-7253-7c9b-40b206e8516a@linaro.org>
Date:   Thu, 8 Jun 2017 07:26:15 +0100
From:   Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
To:     Heiner Kallweit <hkallweit1@...il.com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] nvmem: core: add managed version of nvmem_register



On 07/06/17 22:55, Heiner Kallweit wrote:
> Am 07.06.2017 um 18:19 schrieb Srinivas Kandagatla:
>>
>> On 04/06/17 12:06, Heiner Kallweit wrote:
>>> Add a device-managed version of nvmem_register.
>>>
>>> Signed-off-by: Heiner Kallweit <hkallweit1@...il.com>
>>> ---
>>>   Documentation/nvmem/nvmem.txt  |  1 +
>>>   drivers/nvmem/core.c           | 35 +++++++++++++++++++++++++++++++++++
>>>   include/linux/nvmem-provider.h |  7 +++++++
>>>   3 files changed, 43 insertions(+)
>>>
>>
>> Thanks for the patch, one comment.
>>> diff --git a/Documentation/nvmem/nvmem.txt b/Documentation/nvmem/nvmem.txt
>>> index dbd40d87..b4ff7862 100644
>>> --- a/Documentation/nvmem/nvmem.txt
>>> +++ b/Documentation/nvmem/nvmem.txt
>>> @@ -37,6 +37,7 @@ and write the non-volatile memory.
>>>   A NVMEM provider can register with NVMEM core by supplying relevant
>>>   nvmem configuration to nvmem_register(), on success core would return a valid
>>>   nvmem_device pointer.
>>> +devm_nvmem_register() is a device-managed version of nvmem_register.
>>>
>>>   nvmem_unregister(nvmem) is used to unregister a previously registered provider.
>>>
>>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>>> index 783eb431..55db219f 100644
>>> --- a/drivers/nvmem/core.c
>>> +++ b/drivers/nvmem/core.c
>>> @@ -531,6 +531,41 @@ int nvmem_unregister(struct nvmem_device *nvmem)
>>>   }
>>>   EXPORT_SYMBOL_GPL(nvmem_unregister);
>>>
>>> +static void devm_nvmem_release(struct device *dev, void *res)
>>> +{
>>> +    nvmem_unregister(*(struct nvmem_device **)res);
>>
>> nvmem_unregister() can fail, how are you going to deal with this error cases?
>>
> As stated in my answer to your other review comment:
> currently no caller of nvmem_unregister checks the return code.
Currently all nvmem provider drivers check the return code of unregister 
in the remove path.
> I see checking the refcount more as a debug feature, and I think making

No, I don't think it's a debug feature! Without this check we would end 
up dereferencing a freed pointer.
> nvmem_unregister return void plus a WARN() if refcount != 0 would be better.


WARN() would not be enough to stop the system from crashing if the 
provider unregisters while there are still active users.
> 
> Rgds, Heiner
> 
>>
>>> +}
>>> +
>>> +/**
>>> + * devm_nvmem_register() - managed version of nvmem_register
>>> + *
>>> + * @config: nvmem device configuration with which nvmem device is created.
>>> + *
>>> + * Return: Will be an ERR_PTR() on error or a valid pointer to nvmem_device
>>> + * on success.
>>> + */
>>> +
>>> +struct nvmem_device *devm_nvmem_register(const struct nvmem_config *config)
>>
>> For consistency reasons, devm versions of APIs should always take dev as the first argument.
>>
>>> +{
>>> +    struct nvmem_device *nv, **dr;
>>> +
>>> +    dr = devres_alloc(devm_nvmem_release, sizeof(*dr), GFP_KERNEL);
>>> +    if (!dr)
>>> +        return ERR_PTR(-ENOMEM);
>>> +
>>> +    nv = nvmem_register(config);
>>> +    if (IS_ERR(nv)) {
>>> +        devres_free(dr);
>>> +        return nv;
>>> +    }
>>> +
>>> +    *dr = nv;
>>> +    devres_add(config->dev, dr);
>>> +
>>> +    return nv;
>>> +}
>>> +EXPORT_SYMBOL_GPL(devm_nvmem_register);
>>> +
>>
> 
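For illustration, with dev taken explicitly as the first argument the 
registration path could look like this (a sketch against the patch 
above, not a final version):

```c
/* Sketch only: same body as in the patch, but with the explicit
 * struct device argument suggested in the review, so devres ownership
 * does not depend on config->dev. */
struct nvmem_device *devm_nvmem_register(struct device *dev,
					 const struct nvmem_config *config)
{
	struct nvmem_device *nvmem, **ptr;

	ptr = devres_alloc(devm_nvmem_release, sizeof(*ptr), GFP_KERNEL);
	if (!ptr)
		return ERR_PTR(-ENOMEM);

	nvmem = nvmem_register(config);
	if (IS_ERR(nvmem)) {
		devres_free(ptr);
		return nvmem;
	}

	/* Tie the nvmem device's lifetime to dev, not config->dev. */
	*ptr = nvmem;
	devres_add(dev, ptr);

	return nvmem;
}
```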
