Message-ID: <CAJuCfpEU-D_G3N1aduOprR0YmV+jP+4un78XMs4Qj41_V+_6Ug@mail.gmail.com>
Date: Wed, 15 Jan 2025 08:39:01 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: akpm@...ux-foundation.org, willy@...radead.org, liam.howlett@...cle.com, 
	lorenzo.stoakes@...cle.com, david.laight.linux@...il.com, mhocko@...e.com, 
	vbabka@...e.cz, hannes@...xchg.org, mjguzik@...il.com, oliver.sang@...el.com, 
	mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com, 
	oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org, brauner@...nel.org, 
	dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com, 
	lokeshgidra@...gle.com, minchan@...gle.com, jannh@...gle.com, 
	shakeel.butt@...ux.dev, souravpanda@...gle.com, pasha.tatashin@...een.com, 
	klarasmodin@...il.com, richard.weiyang@...il.com, corbet@....net, 
	linux-doc@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	kernel-team@...roid.com
Subject: Re: [PATCH v9 12/17] mm: move lesser used vma_area_struct members
 into the last cacheline

On Wed, Jan 15, 2025 at 2:51 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Jan 10, 2025 at 08:25:59PM -0800, Suren Baghdasaryan wrote:
> > Move several vm_area_struct members that are rarely or never used
> > during page fault handling into the last cacheline to better pack
> > vm_area_struct. As a result, vm_area_struct fits into 3 cachelines
> > instead of 4. New typical vm_area_struct layout:
> >
> > struct vm_area_struct {
> >     union {
> >         struct {
> >             long unsigned int vm_start;              /*     0     8 */
> >             long unsigned int vm_end;                /*     8     8 */
> >         };                                           /*     0    16 */
> >         freeptr_t          vm_freeptr;               /*     0     8 */
> >     };                                               /*     0    16 */
> >     struct mm_struct *         vm_mm;                /*    16     8 */
> >     pgprot_t                   vm_page_prot;         /*    24     8 */
> >     union {
> >         const vm_flags_t   vm_flags;                 /*    32     8 */
> >         vm_flags_t         __vm_flags;               /*    32     8 */
> >     };                                               /*    32     8 */
> >     unsigned int               vm_lock_seq;          /*    40     4 */
>
> Does it not make sense to move this seq field near the refcnt?

In an earlier version, when vm_lock was not a refcount yet, I tried
that, and moving vm_lock_seq introduced a regression in the pft (page
fault) test. We have that early vm_lock_seq check at the beginning of
vma_start_read(), and if it fails we bail out early without taking the
lock. I think that might be why keeping vm_lock_seq in the first
cacheline is beneficial. But I'll try moving it again now that we have
vm_refcnt instead of the lock and see whether pft still shows a
regression.
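
For reference, here is a minimal sketch of that early-bailout pattern
(a simplified userspace mock with hypothetical names, not the actual
kernel implementation):

	#include <stdatomic.h>
	#include <stdbool.h>

	/* Mock stand-ins for mm_struct and vm_area_struct. */
	struct mm_mock {
		atomic_uint mm_lock_seq;	/* bumped on mmap write-lock */
	};

	struct vma_mock {
		unsigned int vm_lock_seq;	/* first cacheline, per the layout above */
		struct mm_mock *vm_mm;
		atomic_uint vm_refcnt;		/* starts the last cacheline */
	};

	static bool vma_start_read_sketch(struct vma_mock *vma)
	{
		/*
		 * Early check: a sequence match means the VMA is (or may
		 * be) write-locked, so bail out before ever touching
		 * vm_refcnt. Because vm_lock_seq sits in the first
		 * cacheline next to vm_start/vm_end, this check touches
		 * no extra cacheline on the page fault path.
		 */
		if (vma->vm_lock_seq ==
		    atomic_load_explicit(&vma->vm_mm->mm_lock_seq,
					 memory_order_relaxed))
			return false;

		/*
		 * Otherwise take a reference; a racing writer is caught
		 * by re-validation afterwards (omitted here).
		 */
		atomic_fetch_add_explicit(&vma->vm_refcnt, 1,
					  memory_order_acquire);
		return true;
	}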

>
> >     /* XXX 4 bytes hole, try to pack */
> >
> >     struct list_head           anon_vma_chain;       /*    48    16 */
> >     /* --- cacheline 1 boundary (64 bytes) --- */
> >     struct anon_vma *          anon_vma;             /*    64     8 */
> >     const struct vm_operations_struct  * vm_ops;     /*    72     8 */
> >     long unsigned int          vm_pgoff;             /*    80     8 */
> >     struct file *              vm_file;              /*    88     8 */
> >     void *                     vm_private_data;      /*    96     8 */
> >     atomic_long_t              swap_readahead_info;  /*   104     8 */
> >     struct mempolicy *         vm_policy;            /*   112     8 */
> >     struct vma_numab_state *   numab_state;          /*   120     8 */
> >     /* --- cacheline 2 boundary (128 bytes) --- */
> >     refcount_t          vm_refcnt (__aligned__(64)); /*   128     4 */
> >
> >     /* XXX 4 bytes hole, try to pack */
> >
> >     struct {
> >         struct rb_node     rb (__aligned__(8));      /*   136    24 */
> >         long unsigned int  rb_subtree_last;          /*   160     8 */
> >     } __attribute__((__aligned__(8))) shared;        /*   136    32 */
> >     struct anon_vma_name *     anon_name;            /*   168     8 */
> >     struct vm_userfaultfd_ctx  vm_userfaultfd_ctx;   /*   176     8 */
> >
> >     /* size: 192, cachelines: 3, members: 18 */
> >     /* sum members: 176, holes: 2, sum holes: 8 */
> >     /* padding: 8 */
> >     /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
> > } __attribute__((__aligned__(64)));
>
>
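
For reference, struct layout dumps like the one quoted above are
produced with pahole on a kernel build with debug info, e.g.:

	$ pahole -C vm_area_struct vmlinux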
