Date:   Tue, 15 Jun 2021 09:11:03 +0800
From:   Feng Tang <feng.tang@...el.com>
To:     Jason Gunthorpe <jgg@...dia.com>, Peter Xu <peterx@...hat.com>
Cc:     linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        kernel test robot <oliver.sang@...el.com>,
        John Hubbard <jhubbard@...dia.com>,
        linux-kernel@...r.kernel.org, lkp@...ts.01.org,
        Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH] mm: relocate 'write_protect_seq' in struct mm_struct

On Mon, Jun 14, 2021 at 11:27:39AM +0800, Feng Tang wrote:
> > 
> > It seems Ok to me, but didn't we earlier add the has_pinned which
> > would have changed the layout too? Are we chasing performance delta's
> > nobody cares about?
> 
> Good point! I checked my email folder for 0day's reports, and haven't
> found a report related to Peter's commit 008cfe4418b3 ("mm: Introduce
> mm_struct.has_pinned") which adds the 'has_pinned' field.
> 
> Will run the same test for it and report back.

I ran the same will-it-scale/mmap1 case for Peter's commit 008cfe4418b3
and its parent commit, and there is no obvious performance difference:

a1bffa48745afbb5 008cfe4418b3dbda2ff820cdd7b 
---------------- --------------------------- 

    344353            -0.4%     342929        will-it-scale.48.threads
      7173            -0.4%       7144        will-it-scale.per_thread_ops

And from the pahole info for the two kernels, the 'has_pinned' field
added by Peter's commit is placed into an existing 4-byte hole, so all
the following fields keep their offsets and alignment unchanged. Peter
may have done this on purpose with the alignment in mind, so no
performance change is expected.

Pahole info for kernel before 008cfe4418b3:

struct mm_struct {
	...
	/* --- cacheline 1 boundary (64 bytes) --- */
	long unsigned int  task_size;            /*    64     8 */
	long unsigned int  highest_vm_end;       /*    72     8 */
	pgd_t *            pgd;                  /*    80     8 */
	atomic_t           membarrier_state;     /*    88     4 */
	atomic_t           mm_users;             /*    92     4 */
	atomic_t           mm_count;             /*    96     4 */

	/* XXX 4 bytes hole, try to pack */

	atomic_long_t      pgtables_bytes;       /*   104     8 */
	int                map_count;            /*   112     4 */
	spinlock_t         page_table_lock;      /*   116     4 */
	struct rw_semaphore mmap_lock;           /*   120    40 */
	/* --- cacheline 2 boundary (128 bytes) was 32 bytes ago --- */

Pahole info for kernel with 008cfe4418b3:

struct mm_struct {
	...
	/* --- cacheline 1 boundary (64 bytes) --- */
	long unsigned int  task_size;            /*    64     8 */
	long unsigned int  highest_vm_end;       /*    72     8 */
	pgd_t *            pgd;                  /*    80     8 */
	atomic_t           membarrier_state;     /*    88     4 */
	atomic_t           mm_users;             /*    92     4 */
	atomic_t           mm_count;             /*    96     4 */
	atomic_t           has_pinned;           /*   100     4 */
	atomic_long_t      pgtables_bytes;       /*   104     8 */
	int                map_count;            /*   112     4 */
	spinlock_t         page_table_lock;      /*   116     4 */
	struct rw_semaphore mmap_lock;           /*   120    40 */
	/* --- cacheline 2 boundary (128 bytes) was 32 bytes ago --- */
	
Thanks,
Feng
