Message-ID: <10c4663b-dd65-a545-786d-10aed6e6e5e9@fi.rohmeurope.com>
Date: Tue, 11 Oct 2022 06:56:17 +0000
From: "Vaittinen, Matti" <Matti.Vaittinen@...rohmeurope.com>
To: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
CC: Matti Vaittinen <mazziesaccount@...il.com>,
Jonathan Cameron <jic23@...nel.org>,
Lars-Peter Clausen <lars@...afoo.de>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Jagath Jog J <jagathjog1996@...il.com>,
Nikita Yushchenko <nikita.yoush@...entembedded.com>,
Cosmin Tanislav <demonsingur@...il.com>,
"linux-iio@...r.kernel.org" <linux-iio@...r.kernel.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 4/5] iio: accel: Support Kionix/ROHM KX022A
accelerometer
On 10/10/22 17:09, Andy Shevchenko wrote:
> On Mon, Oct 10, 2022 at 01:20:23PM +0000, Vaittinen, Matti wrote:
>> On 10/10/22 14:58, Andy Shevchenko wrote:
>>> On Mon, Oct 10, 2022 at 12:12:34PM +0300, Matti Vaittinen wrote:
>>>> On 10/6/22 21:32, Andy Shevchenko wrote:
>>>>> On Thu, Oct 06, 2022 at 05:38:14PM +0300, Matti Vaittinen wrote:
>
> (Note, everything I have dropped without comment can be treated
> as "yes, I agree on your points". Same applies to the early or
> the future replies.)
>
> ...
>
>>>>> Also what will be the module name (see other Kconfig for the pattern)?
>>>>
>>>> Here the pattern is IMHO improved by cutting the useless text. Even the
>>>> cross-reference from SPI to I2C and I2C to SPI driver which was in v1 is
>>>> more useful. Module autoloading for SPI should just work. Adding the module
>>>> name here seems just a silly habit - which I see many people are breaking.
>>>> If module name is to be needed, then it should be checked from the Makefile
>>>> and not trusted that Kconfig text stays up-to-date...
>>>
>>> I think the module name is a good thing to have for a user who might be
>>> not knowing about Makefile:s and how to derive the module name from that
>>> (sometimes it's not so obvious).
>>
>> My point is that average users do not really even need to know the
>> module names. For most simple drivers the module name is just the base
>> of the filename. I really doubt there are that many people who
>>
>> a) do need to know module name
>
> Everyone? When I need to debug the issue (assisted debug or doing myself) it is
> often good to know:
>
> 1) is the driver built as a module;
> 2) is that module ever loaded;
> 3) how to find initial module parameters if any
> (see /sys/module/$NAME/parameters).
>
Fair enough. The next question is: how often do you launch the kernel
configuration tool and try to find the correct config option when you
need to know the module name? My gut feeling is that average users do
not seek this information using config tools. (In my opinion) the
config is just the wrong place for this information. When you need to
debug a module you should not be using the kernel config tool to find
debugging guidance. The config tool is for kernel configuration - and
at that phase the module name does not matter. At that phase you want
to know what support the option gives you and whether you should use
it on your system. The name of a module is completely irrelevant.
>> b) read config menu entries but don't know how to find a module name
>> from Makefile
>
> Isn't it orthogonal? Average user may easily choose everything in the kernel
> configuration via `make menuconfig` because it _is_ user interface / tool.
> Makefile simply is not for the users.
Average users use distribution kernels. Some users may compile their
own kernel - but use the config from a previous kernel. Some users use
"more exotic" boards - and use the defconfigs for those. It is really
just a handful of people who read/change a config option for an
accelerometer, PMIC, RTC, GPIO controller or any other such single HW
component. And I am pretty sure that those who do also understand the
syntax of a Makefile.

And still, the moment of doing the kernel configuration is not the
moment when you need the module name.
>> c) want to know module name and can't pair the name from lsmod to module
>
> It's reversed mapping we need.
>
>> d) rather look up the config entry than browse the contents of
>> lib/modules/...
>
> So, I have a module called 'foo-bar.ko' and lsmod shows me 'baz-hell'.
> How should I know that mapping to begin with?
>
>> It'd be nice to know if anyone reading this mail has actually ever used
>> the module name information placed in Kconfig entry for _anything else_
>> except for making the Kconfig entry to look a bit more fatty...
>
> That said, I haven't found any good justification from your side to avoid
> including the module name, while I see the advantages of otherwise.
The disadvantage is systematically polluting the config entries with
information that is irrelevant to configuration. We should focus more
on whether the config entry explains to the user if he should or
should not enable the piece of config on his system. IMNAHEO (In My
Not Always Humble Enough Opinion) all the rest belongs in some other
documentation.
> ...
>
>>>>>> +struct kx022a_data {
>>>>>> + int irq;
>>>>>> + int inc_reg;
>>>>>> + int ien_reg;
>>>>>
>>>>>> + struct regmap *regmap;
>>>>>
>>>>> Putting this as a first member might improve code by making pointer arithmetics
>>>>> better. Have you checked possibilities with bloat-o-meter?
>>>>
>>>> No. Amount of combinations is just too huge for me to try randomly
>>>> reordering structs and running bloat-o-meter. Can you please explain me why
>>>> you think reordering this would have visible impact and if this is something
>>>> that is universally true or if such an optimization is architecture
>>>> dependent?
>>>
>>> Usually regmap pointer is used more often than, for instance, the 'irq' member,
>>> and I believe that moving 'irq' member deeper will be harmless, while moving
>>> regmap might be winning.
>>
>> So you suggest us to try optimizing the code by placing a pointer to
>> more frequently used member(s) at the top of the struct? I didn't know
>> it would have any impact. I understand arranging the members of
>> frequently copied structs based on sizes so that we can avoid eating
>> space by padding - but I had never heard of things being improved by
>> ordering based on frequency of use. It'd be great to understand the
>> mechanism this optimization is based on?
>
> It's based on the pointer arithmetics. In some cases it might be a win, in some
> actually no advantage, that's why it's hard to say just by looking at the code.
>
> Most affected cases are for the embedded data structures, where container_of()
> is used on top,
Oh. Do you mean that in some cases, when the CPU needs some
information frequently, it should be at the top of a struct so that no
offset needs to be added when the information is accessed through a
pointer to the struct? E.g. adding an offset to the pointer (pointer
arithmetic) is not needed when the first member of a struct is
accessed - resulting in maybe one instruction less. So having the most
frequently used data as the first member of a struct can be
beneficial? I assume that if an offset needs to be added, then it does
not really matter how big the offset is - although naturally the size
of the struct (and thus the offsets) matters when we consider how the
data fits in the cache.
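Just to illustrate my understanding (my own sketch, not code from the
patch - the names below are made up for the example):

struct regmap;	/* opaque here, only needed for the sketch */

struct sketch_data {
	struct regmap *regmap;	/* offset 0: access needs no displacement */
	int irq;		/* non-zero offset: access via pointer + offset */
};

static struct regmap *sketch_get_regmap(struct sketch_data *data)
{
	/* a plain load through 'data', no offset added */
	return data->regmap;
}

static int sketch_get_irq(struct sketch_data *data)
{
	/* a load from 'data' + offsetof(struct sketch_data, irq) */
	return data->irq;
}

So the potential win would only be for the member placed first; for
all the other members an offset is added anyway.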
Okay, a cool thing to keep in mind when optimizing some really
performance critical code in a fixed environment - or when dealing
with a set of instructions copied to many places. I am afraid that for
a driver intended to be used in different systems the results of such
manual optimization are not really that good. Also, bloat-o-meter
results on a kernel compiled with my specific optimization settings
and compiler would probably not be universally true. (At the moment I
develop the driver using a pretty ancient version of a 32-bit arm
compiler). (I don't really know bloat-o-meter, but my understanding is
that it compares the compiled binaries to find out the differences.)
> esp. if most used member of those structures are also appeared
> first. Often example is:
>
> struct foo {
> struct list_head node;
> ...
> };
>
> vs.
>
> struct foo {
> ...
> struct list_head node;
> };
>
> where you may even see by reading the code that list_for_each_entry()
> will be shorter.
>
> With pointers it's less visible, but still might have a difference.
>
> ...
>
>>>>>> +
>>>>>> + while (n-- > 0)
>>>>>
>>>>> while (n--) ? Or why the 0 is ignored?
>
> So, it appears that yours and mine are equivalents, but mine is shorter :-)
Yes. Besides, I actually like the looks of while (n--) more than
while (n-- > 0). I think the while (n-- > 0) originated from the code
where I stole the logic :)
>
>>>>>> + if (en)
>>>>>> + return regmap_set_bits(data->regmap, KX022A_REG_CNTL,
>>>>>> + KX022A_MASK_DRDY);
>>>>>
>>>>> I would put redundant 'else' here to have them on the same column, but
>>>>> it's pity we don't have regmap_assign_bits() API (or whatever name you
>>>>> can come up with) to hide this kind of code.
>>>>
>>>> Eh, you mean you would put else here even though we return from the if? And
>>>> then put another return in else, and no return outside the if/else?
>>>>
>>>> I definitely don't like the idea.
>>>>
>>>> We could probably use regmap_update_bits and ternary but in my opinion that
>>>> would be just much less obvious. I do like the use of set/clear bits which
>>>> also makes the meaning / "polarity" of bits really clear.
>>>
>>> The idea behind is simple (and note that I'm usually on the side of _removing_
>>> redundancy)
>>>
>>> if (en)
>>> return regmap_set_bits(data->regmap, KX022A_REG_CNTL,
>>> KX022A_MASK_DRDY);
>>> else
>>> return regmap_clear_bits(data->regmap, KX022A_REG_CNTL,
>>> ...
>>>
>>> Because the branches are semantically tighten to each other. But it's not
>>> a big deal to me.
>>
>> What I do not really like in above example is that we never reach the
>> end of function body.
>
> What do you mean by that? Compiler does warn or what?
I don't know if the compiler warns about it as I didn't test it. Now
that you mentioned it, I think I have seen such warnings from a
compiler or some static analyzer (Klocwork?) in the past though. What
I mean is that a construct like:

int foo()
{
	if (cond) {
		...
		return 1;
	} else {
		...
		return 2;
	}
}

makes mistakes like:

int foo()
{
	if (cond) {
		...
		return 1;
	} else {
		...
		return 2;
	}
	...
	return 0;	/* silently never reached */
}

easy to make. When one uses:

int foo()
{
	if (cond) {
		...
		return 1;
	}
	...
	return 2;
}

to begin with, there is zero room for such confusion.
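(A side note: if we had something like the regmap_assign_bits() you
mentioned, this whole question would go away. A rough sketch of what I
would imagine it to look like - this is not an existing regmap API,
and the name is just the one you suggested:)

/* Sketch only - built on top of <linux/regmap.h>, not an existing helper */
static int regmap_assign_bits(struct regmap *map, unsigned int reg,
			      unsigned int bits, bool set)
{
	/* a single place decides whether to set or clear the bits */
	if (set)
		return regmap_set_bits(map, reg, bits);

	return regmap_clear_bits(map, reg, bits);
}

The call site would then be just:

	return regmap_assign_bits(data->regmap, KX022A_REG_CNTL,
				  KX022A_MASK_DRDY, en);

which keeps the set/clear "polarity" visible while avoiding the whole
if/else-return debate at the call site.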
>
>> It is against the expectation - and adding the
>> else has no real meaning when if returns. I actually think that
>> checkpatch could even warn about that construct.
>
> No way we ever accept such a thing for checkpatch because it's subjective
Eh. Are you telling me that there is no subjective stuff in checkpatch? ;)
> (very dependant on the code piece). OTOH the proposed pattern is used in
> many places and left like that in places where I cleaned up the 'else',
> to leave the semantic tights with the above lines).
>
>>>>>> + return regmap_clear_bits(data->regmap, KX022A_REG_CNTL,
>>>>>> + KX022A_MASK_DRDY);
>
> I see that we have a strong disagreement here. I leave it to maintainers.
Okay, let's hear what others have to say here.
Thanks for all the input so far.
Best Regards
-- Matti
--
Matti Vaittinen
Linux kernel developer at ROHM Semiconductors
Oulu Finland
~~ When things go utterly wrong vim users can always type :help! ~~