Open Source and information security mailing list archives
 
Message-ID: <CAHk-=whn+F+vFgCR0a04atTQuBmXhq0oU1Y1f0YqMiUFHj28JQ@mail.gmail.com>
Date:   Tue, 26 Feb 2019 09:30:30 -0800
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     Matthew Wilcox <willy@...radead.org>,
        "Chen, Rong A" <rong.a.chen@...el.com>, "lkp@...org" <lkp@...org>,
        LKML <linux-kernel@...r.kernel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Waiman Long <longman@...hat.com>,
        Tim C Chen <tim.c.chen@...el.com>
Subject: Re: [LKP] [page cache] eb797a8ee0: vm-scalability.throughput -16.5% regression

On Tue, Feb 26, 2019 at 12:17 AM Huang, Ying <ying.huang@...el.com> wrote:
>
> As for the fix: should we care about the cache line alignment of struct
> inode?  Or is its size considered more important, because there may be a
> huge number of struct inode instances in the system?

Thanks for the great analysis.

I suspect we _would_ like to make sure inodes are as small as
possible, since they are everywhere. Also, they are usually embedded
in other structures (e.g. "struct inode" is embedded into "struct
ext4_inode_info"), and unless we force alignment (and thus possibly
lots of padding), the actual alignment of 'struct inode' will vary
depending on filesystem.

So I would suggest we *not* do cacheline alignment, because it will
result in random padding.

But it sounds like maybe the solution is to make sure that the
different fields of the inode can and should be packed differently?

So one thing to look at is to see what fields in 'struct inode' might
be best moved together, to minimize cache accesses.

And in particular, if this is *only* an issue of "struct
rw_semaphore", maybe we should look at the layout of *that*. In
particular, I'm getting the feeling that we should put the "owner"
field right next to the "count" field, because the normal
non-contended path only touches those two fields.

Right now those two fields are pretty far from each other in 'struct
rw_semaphore', which then makes the "oops they got allocated in
different cachelines" much more likely.

So even if 'struct inode' layout itself isn't changed, maybe just
optimizing the layout of 'struct rw_semaphore' a bit for the common
case might fix it all up.

Waiman, I didn't check: does your rewrite perhaps already fix this?

                   Linus
