Message-ID: <a74acb94090555b96702de7a15f7dedf@codeaurora.org>
Date:   Thu, 19 Apr 2018 18:42:20 +0800
From:   yuankuiz@...eaurora.org
To:     Julia Lawall <julia.lawall@...6.fr>
Cc:     Joe Perches <joe@...ches.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Andy Whitcroft <apw@...onical.com>,
        Linux PM <linux-pm@...r.kernel.org>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Frederic Weisbecker <fweisbec@...il.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        paulmck@...ux.vnet.ibm.com, Ingo Molnar <mingo@...nel.org>,
        Len Brown <len.brown@...el.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-pm-owner@...r.kernel.org
Subject: Re: [PATCH] checkpatch: Add a --strict test for structs with bool
 member definitions

On 2018-04-19 02:48 PM, yuankuiz@...eaurora.org wrote:
> On 2018-04-19 01:16 PM, Julia Lawall wrote:
>> On Wed, 18 Apr 2018, Joe Perches wrote:
>> 
>>> On Thu, 2018-04-19 at 06:40 +0200, Julia Lawall wrote:
>>> >
>>> > On Wed, 18 Apr 2018, Joe Perches wrote:
>>> >
>>> > > On Tue, 2018-04-17 at 17:07 +0800, yuankuiz@...eaurora.org wrote:
>>> > > > Hi julia,
>>> > > >
>>> > > > On 2018-04-15 05:19 AM, Julia Lawall wrote:
>>> > > > > On Wed, 11 Apr 2018, Joe Perches wrote:
>>> > > > >
>>> > > > > > On Thu, 2018-04-12 at 08:22 +0200, Julia Lawall wrote:
>>> > > > > > > On Wed, 11 Apr 2018, Joe Perches wrote:
>>> > > > > > > > On Wed, 2018-04-11 at 09:29 -0700, Andrew Morton wrote:
>>> > > > > > > > > We already have some 500 bools-in-structs
>>> > > > > > > >
>>> > > > > > > > I got at least triple that only in include/
>>> > > > > > > > so I expect there are probably an order
>>> > > > > > > > of magnitude more than 500 in the kernel.
>>> > > > > > > >
>>> > > > > > > > I suppose some cocci script could count the
>>> > > > > > > > actual number of instances.  A regex can not.
>>> > > > > > >
>>> > > > > > > I got 12667.
>>> > > > > >
>>> > > > > > Could you please post the cocci script?
>>> > > > > >
>>> > > > > > > I'm not sure I understand the issue.  Will using a bitfield help if there
>>> > > > > > > are no other bitfields in the structure?
>>> > > > > >
>>> > > > > > IMO, not really.
>>> > > > > >
>>> > > > > > The primary issue is described by Linus here:
>>> > > > > > https://lkml.org/lkml/2017/11/21/384
>>> > > > > >
>>> > > > > > I personally do not find a significant issue with
>>> > > > > > uncontrolled sizes of bool in kernel structs as
>>> > > > > > all of the kernel structs are transitory and not
>>> > > > > > written out to storage.
>>> > > > > >
>>> > > > > > I suppose bool bitfields are also OK, apart from
>>> > > > > > the RMW (read-modify-write) they require.
>>> > > > > >
>>> > > > > > Using unsigned int :1 bitfield instead of bool :1
>>> > > > > > has the downside of truncation, so the uint has
>>> > > > > > to be set with !! instead of a simple assignment.
>>> > > > >
>>> > > > > At least with gcc 5.4.0, a number of structures become larger with
>>> > > > > unsigned int :1.  bool:1 seems to mostly solve this problem.  The
>>> > > > > structure ichx_desc, defined in drivers/gpio/gpio-ich.c, seems to
>>> > > > > become larger with both approaches.
>>> > > >
>>> > > > [ZJ] Hopefully, this could make it better in your environment.
>>> > > >       IMHO, this is just meant as a double check.
>>> > >
>>> > > I doubt this is actually better or smaller code.
>>> > >
>>> > > Check the actual object code using objdump and the
>>> > > struct alignment using pahole.
>>> >
>>> > I didn't have a chance to try it, but it looks quite likely to result in a
>>> > smaller data structure based on the other examples that I looked at.
>>> 
>>> I _really_ doubt there is any difference in size between the
>>> below on any architecture.
>>> 
>>> struct foo {
>>> 	int bar;
>>> 	bool baz:1;
>>> 	int qux;
>>> };
>>> 
>>> and
>>> 
>>> struct foo {
>>> 	int bar;
>>> 	bool baz;
>>> 	int qux;
>>> };
>>> 
>>> Where there would be a difference in size is
>>> 
>>> struct foo {
>>> 	int bar;
>>> 	bool baz1:1;
>>> 	bool baz2:1;
>>> 	int qux;
>>> };
>>> 
>>> and
>>> 
>>> struct foo {
>>> 	int bar;
>>> 	bool baz1;
>>> 	bool baz2;
>>> 	int qux;
>>> };
[ZJ] Even though the two bool:1 members are grouped in the third
      example, 4 bytes of padding are still added, because int has the
      largest alignment requirement in the struct.
      At least with gcc, all four structs come out the same size on
      both x86 and arm (12 bytes).
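      A quick sketch to reproduce this (member names follow Joe's
      examples; with gcc on x86 and arm it prints 12 for all four
      sizes, and it also shows the !! truncation point from above):

#include <stdbool.h>
#include <stdio.h>

struct s1 { int bar; bool baz:1;  int qux; };
struct s2 { int bar; bool baz;    int qux; };
struct s3 { int bar; bool baz1:1; bool baz2:1; int qux; };
struct s4 { int bar; bool baz1;   bool baz2;   int qux; };

int main(void)
{
	struct { unsigned int flag:1; } t;
	int v = 2;

	/* All four print 12: the bool member(s), plus padding, occupy
	 * one 4-byte slot that int alignment forces anyway. */
	printf("%zu %zu %zu %zu\n", sizeof(struct s1), sizeof(struct s2),
	       sizeof(struct s3), sizeof(struct s4));

	/* unsigned int :1 keeps only bit 0, so a plain assignment of 2
	 * stores 0; !! normalizes the value to 0/1 first. */
	t.flag = v;
	printf("plain assign: %u\n", t.flag);	/* 0 -- truncated */
	t.flag = !!v;
	printf("with !!:      %u\n", t.flag);	/* 1 -- as intended */
	return 0;
}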
>> 
>> In the situation of the example there are two bools together in the 
>> middle
>> of the structure and one at the end.  Somehow, even converting to 
>> bool:1
>> increases the size.  But it seems plausible that putting all three 
>> bools
>> together and converting them all to :1 would reduce the size.  I don't
>> know.  The size increase (more than 8 bytes) seems out of proportion 
>> for 3
>> bools.
> [ZJ] Typically, the additional saving is due to differences in padding.
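[ZJ] For illustration, a minimal sketch of how padding alone can
      account for 8 bytes on a 64-bit target (member names are made up):

struct scattered {	/* 32 bytes on 64-bit x86 */
	bool a;		/* 1 byte + 7 bytes of padding (pointer alignment) */
	void *p;	/* 8 bytes */
	bool b;		/* 1 byte + 7 bytes of padding */
	void *q;	/* 8 bytes */
};

struct grouped {	/* 24 bytes on 64-bit x86 */
	void *p;	/* 8 bytes */
	void *q;	/* 8 bytes */
	bool a;		/* 1 byte */
	bool b;		/* 1 byte + 6 bytes of tail padding */
};

      Moving the two bools next to each other removes 8 bytes of
      padding without any bitfield at all.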
>> 
>> I was able to check around 3000 structures that were not declared with 
>> any
>> attributes, that don't declare named types internally, and that are
>> compiled for x86.  Around 10% become smaller when using bool:1, 
>> typically
>> by at most 8 bytes.
[ZJ] In my example, a function pointer such as int (*)() takes 8 bytes
      on 64-bit x86, which matches that 8-byte figure, while it takes
      only 4 bytes on 32-bit arm.
      Typically, my previous example struct can shrink to 32 bytes on
      x86 (compared to 40 bytes for the original version) and,
      similarly, to 20 bytes on arm (compared to 24 bytes for the
      original version).
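      As Joe suggested earlier, the exact layout and the holes can be
      double-checked with pahole on an object file compiled with -g.
      The pointer-size difference itself is easy to confirm (this
      prints 8 on 64-bit x86 and 4 on 32-bit arm):

#include <stdio.h>

int main(void)
{
	/* size of a function pointer such as int (*)(void) */
	printf("%zu\n", sizeof(int (*)(void)));
	return 0;
}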
>> 
>> julia
>> 
>>> 
>>> 
