Message-ID: <0527135c-35f5-bc63-edb3-81cb03eb03f6@gmail.com>
Date:   Tue, 21 Dec 2021 13:24:00 +0100
From:   Rafał Miłecki <zajec5@...il.com>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Srinivas Kandagatla <srinivas.kandagatla@...aro.org>,
        "Rafael J . Wysocki" <rafael@...nel.org>,
        Jonathan Corbet <corbet@....net>,
        Daniel Vetter <daniel.vetter@...ll.ch>,
        Dan Williams <dan.j.williams@...el.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Krzysztof Wilczyński <kw@...ux.com>,
        Heiner Kallweit <hkallweit1@...il.com>,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        Rafał Miłecki <rafal@...ecki.pl>
Subject: Re: [PATCH 2/2] nvmem: expose NVMEM cells in sysfs

On 21.12.2021 08:13, Greg Kroah-Hartman wrote:
> On Tue, Dec 21, 2021 at 07:53:32AM +0100, Rafał Miłecki wrote:
>> On 21.12.2021 07:45, Greg Kroah-Hartman wrote:
>>> On Tue, Dec 21, 2021 at 07:39:24AM +0100, Rafał Miłecki wrote:
>>>> On 21.12.2021 07:33, Greg Kroah-Hartman wrote:
>>>>> On Mon, Dec 20, 2021 at 09:39:43PM +0100, Rafał Miłecki wrote:
>>>>>> Hi Greg,
>>>>>>
>>>>>> On 20.12.2021 09:00, Greg Kroah-Hartman wrote:
>>>>>>> On Mon, Dec 20, 2021 at 07:47:30AM +0100, Rafał Miłecki wrote:
>>>>>>>>      static void nvmem_cell_entry_add(struct nvmem_cell_entry *cell)
>>>>>>>>      {
>>>>>>>> +	struct device *dev = &cell->nvmem->dev;
>>>>>>>> +	int err;
>>>>>>>> +
>>>>>>>>      	mutex_lock(&nvmem_mutex);
>>>>>>>>      	list_add_tail(&cell->node, &cell->nvmem->cells);
>>>>>>>>      	mutex_unlock(&nvmem_mutex);
>>>>>>>> +
>>>>>>>> +	sysfs_attr_init(&cell->battr.attr);
>>>>>>>> +	cell->battr.attr.name = cell->name;
>>>>>>>> +	cell->battr.attr.mode = 0400;
>>>>>>>> +	cell->battr.read = nvmem_cell_attr_read;
>>>>>>>> +	err = sysfs_add_bin_file_to_group(&dev->kobj, &cell->battr,
>>>>>>>> +					  nvmem_cells_group.name);
>>>>>>>
>>>>>>> Why not just use the is_bin_visible attribute instead to determine if
>>>>>>> the attribute should be shown or not, instead of having to add it
>>>>>>> after-the-fact, which will race with userspace and lose?
>>>>>>
>>>>>> I'm sorry I really don't see how you suggest to get it done.
>>>>>>
>>>>>> I can use .is_bin_visible() callback indeed to respect nvmem->root_only.
>>>>>
>>>>> Great.
>>>>>
>>>>>> I don't understand the adding-after-the-fact part. How is .is_bin_visible()
>>>>>> related to adding attributes for newly created cells?
>>>>>
>>>>> You are adding a sysfs attribute to a device that is already registered
>>>>> in the driver core, and so the creation of that attribute is never seen
>>>>> by userspace.  The attribute needs to be attached to the device _BEFORE_
>>>>> it is registered.
>>>>>
>>>>> Also, huge hint: if a driver has to make a sysfs_*() call, something is
>>>>> wrong.
>>>>>
>>>>>> Do you mean I can
>>>>>> avoid calling sysfs_add_bin_file_to_group()?
>>>>>
>>>>> Yes.
>>>>>
>>>>>> Do you recall any existing example of such solution?
>>>>>
>>>>> Loads.
>>>>>
>>>>> Just add this attribute group to your driver as a default attribute
>>>>> group and the driver core will create it for you if needed.
>>>>>
>>>>> Or if you always need it, no need to mess with is_bin_visible() at all.
>>>>> I can't really understand what you are trying to do here at all.
>>>>
>>>> Thanks a lot! In nvmem_register() first there is a call to the
>>>> device_register() and only later cells get added. I suppose I just have
>>>> to rework nvmem_register() order so that:
>>>> 1. Cells are collected earlier. For each cell I allocate group attribute
>>>
>>> No, add all of the attributes to the device at the beginning before you
>>> register it, there's no need to allocate anything.
>>
>> If you mean static structures, I can't do that, because cells are almost
>> never static. They are not known in advance. nvmem allows cells to
>> be:
>> 1. Specified in OF
>> 2. Submitted as list while registering a NVMEM device
>>
>> So every cell gets its own structure allocated dynamically. My plan is
>> to put a bin_attribute in that struct and then create a group collecting
>> all those cells.
> 
> A device has a driver associated with it, and that driver has default
> groups associated with it.  Use that, I am not saying to use static
> structures, that is not how the driver model works at all.

I'm at a loss dealing with attributes.

I tried building a list of attributes dynamically but that of course
fails:

drivers/nvmem/core.c: In function ‘nvmem_register’:
drivers/nvmem/core.c:930:31: error: assignment of member ‘bin_attrs’ in read-only object
   930 |   nvmem_cells_group.bin_attrs = nvmem->cells_bin_attrs;
       |                               ^


What I'm trying to achieve is having
/sys/bus/nvmem/devices/*/cells/*
with each file being an attribute.


Please kindly point me to a single example of a "struct attribute_group"
that has a variable list of attributes, with each attribute having a
runtime-set name.

Almost all cases in kernel look like:
static const struct attribute_group foo_attr_group = {
	.attrs = foo_attrs,
};
with "foo_attrs" being a list of attributes with *predefined* names.

Every example of dynamic (runtime-created) attributes I see in the kernel
(e.g. drivers/base/node.c) uses sysfs_*().
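
[Editor's note: one way around the "assignment of member in read-only
object" error above is to allocate the group itself per device instead of
patching a static const one. A hedged sketch, loosely following the
names used in this thread (nvmem->cells, cell->battr, ncells and the
groups array are assumptions, not the final upstream code):]

```c
struct attribute_group *group;
struct bin_attribute **battrs;
struct nvmem_cell_entry *cell;
int i = 0;

/* Per-device group: writable, unlike a static const global. */
group = kzalloc(sizeof(*group), GFP_KERNEL);
battrs = kcalloc(ncells + 1, sizeof(*battrs), GFP_KERNEL);
if (!group || !battrs)
	return -ENOMEM;

list_for_each_entry(cell, &nvmem->cells, node) {
	sysfs_attr_init(&cell->battr.attr);
	cell->battr.attr.name = cell->name;	/* runtime-set name */
	cell->battr.attr.mode = 0400;
	cell->battr.read = nvmem_cell_attr_read;
	battrs[i++] = &cell->battr;
}

group->name = "cells";
group->bin_attrs = battrs;	/* no const, so this assignment compiles */

/* Hand the group to the core before device_register(), e.g. via a
 * NULL-terminated array assigned to nvmem->dev.groups; no sysfs_*()
 * calls are needed afterwards. */
```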
