Date:   Tue, 15 Jun 2021 14:52:49 -0400
From:   Peter Xu <peterx@...hat.com>
To:     Feng Tang <feng.tang@...el.com>
Cc:     Jason Gunthorpe <jgg@...dia.com>, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        kernel test robot <oliver.sang@...el.com>,
        John Hubbard <jhubbard@...dia.com>,
        linux-kernel@...r.kernel.org, lkp@...ts.01.org
Subject: Re: [PATCH] mm: relocate 'write_protect_seq' in struct mm_struct

On Tue, Jun 15, 2021 at 09:11:03AM +0800, Feng Tang wrote:
> On Mon, Jun 14, 2021 at 11:27:39AM +0800, Feng Tang wrote:
> > > 
> > > It seems Ok to me, but didn't we earlier add the has_pinned which
> > > would have changed the layout too? Are we chasing performance delta's
> > > nobody cares about?
> > 
> > Good point! I checked my email folder for 0day's reports, and haven't
> > found a report related to Peter's commit 008cfe4418b3 ("mm: Introduce
> > mm_struct.has_pinned") which adds the 'has_pinned' field.
> > 
> > Will run the same test for it and report back.
> 
> I ran the same will-it-scale/mmap1 case for Peter's commit 008cfe4418b3
> and its parent commit, and there is no obvious performance diff:
> 
> a1bffa48745afbb5 008cfe4418b3dbda2ff820cdd7b 
> ---------------- --------------------------- 
> 
>     344353            -0.4%     342929        will-it-scale.48.threads
>       7173            -0.4%       7144        will-it-scale.per_thread_ops
> 
> And from the pahole info for the 2 kernels, the 'has_pinned' field added
> by Peter's commit is placed into an existing 4-byte hole, so all other
> following fields keep their offsets unchanged. Peter may have done it
> purposely considering the alignment, so no performance change is expected.

Thanks for verifying this.  I didn't do it on purpose at least for the initial
version, but I do remember some comment to fill up that hole, so it may have
got moved around.

Also note that if nothing goes wrong, has_pinned will be gone in the next
release with commit 3c0c4cda6d48 ("mm: gup: pack has_pinned in MMF_HAS_PINNED",
2021-05-26); it's in -mm-next but hasn't reached the main branch yet.  So then
I think the 4-byte hole should come back to us again, with no perf loss either.

What I'm thinking is whether we should move some important (and especially
CONFIG_*-independent) fields to the top of the whole struct definition, make
sure their layout is optimal for the common workload, and keep them fixed
there.  Then there'd be less or no possibility that some new field regresses a
common workload by accident.  Not sure whether it makes sense to do so.

Thanks,

-- 
Peter Xu
