Message-ID: <14a04071-4650-4e81-b8b5-ab4dd330fe73@suse.cz>
Date: Fri, 21 Feb 2025 16:20:23 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Suren Baghdasaryan <surenb@...gle.com>, akpm@...ux-foundation.org
Cc: peterz@...radead.org, willy@...radead.org, liam.howlett@...cle.com,
lorenzo.stoakes@...cle.com, david.laight.linux@...il.com, mhocko@...e.com,
hannes@...xchg.org, mjguzik@...il.com, oliver.sang@...el.com,
mgorman@...hsingularity.net, david@...hat.com, peterx@...hat.com,
oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org, brauner@...nel.org,
dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com,
lokeshgidra@...gle.com, minchan@...gle.com, jannh@...gle.com,
shakeel.butt@...ux.dev, souravpanda@...gle.com, pasha.tatashin@...een.com,
klarasmodin@...il.com, richard.weiyang@...il.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v10 13/18] mm: move lesser used vm_area_struct members
into the last cacheline
On 2/13/25 23:46, Suren Baghdasaryan wrote:
> Move several vm_area_struct members that are rarely or never used
> during page fault handling into the last cacheline to better pack
> vm_area_struct. As a result, vm_area_struct fits into 3 cachelines
> instead of 4. New typical vm_area_struct layout:
>
> struct vm_area_struct {
> union {
> struct {
> long unsigned int vm_start; /* 0 8 */
> long unsigned int vm_end; /* 8 8 */
> }; /* 0 16 */
> freeptr_t vm_freeptr; /* 0 8 */
> }; /* 0 16 */
> struct mm_struct * vm_mm; /* 16 8 */
> pgprot_t vm_page_prot; /* 24 8 */
> union {
> const vm_flags_t vm_flags; /* 32 8 */
> vm_flags_t __vm_flags; /* 32 8 */
> }; /* 32 8 */
> unsigned int vm_lock_seq; /* 40 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct list_head anon_vma_chain; /* 48 16 */
> /* --- cacheline 1 boundary (64 bytes) --- */
> struct anon_vma * anon_vma; /* 64 8 */
> const struct vm_operations_struct * vm_ops; /* 72 8 */
> long unsigned int vm_pgoff; /* 80 8 */
> struct file * vm_file; /* 88 8 */
> void * vm_private_data; /* 96 8 */
> atomic_long_t swap_readahead_info; /* 104 8 */
> struct mempolicy * vm_policy; /* 112 8 */
> struct vma_numab_state * numab_state; /* 120 8 */
> /* --- cacheline 2 boundary (128 bytes) --- */
> refcount_t vm_refcnt (__aligned__(64)); /* 128 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct {
> struct rb_node rb (__aligned__(8)); /* 136 24 */
> long unsigned int rb_subtree_last; /* 160 8 */
> } __attribute__((__aligned__(8))) shared; /* 136 32 */
> struct anon_vma_name * anon_name; /* 168 8 */
> struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /* 176 8 */
>
> /* size: 192, cachelines: 3, members: 18 */
> /* sum members: 176, holes: 2, sum holes: 8 */
> /* padding: 8 */
> /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
> } __attribute__((__aligned__(64)));
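
The forced __aligned__(64) on vm_refcnt is what pins it to the start of
the third cacheline, away from the fields read during page fault
handling. A reduced stand-in (hypothetical names and filler sizes, not
the real definition) shows the mechanism:

	#include <assert.h>
	#include <stddef.h>

	struct demo_vma {
		unsigned long vm_start;		/* hot members ... */
		char hot_fill[120];		/* rest of cachelines 0-1 */
		/* forced alignment starts a new cacheline at offset 128 */
		int refcnt __attribute__((__aligned__(64)));
		char cold_fill[56];		/* rarely used tail members */
	};

	static_assert(offsetof(struct demo_vma, refcnt) == 128,
		      "refcnt starts cacheline 2");
	static_assert(sizeof(struct demo_vma) == 192, "3 cachelines total");
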
>
> Memory consumption per 1000 VMAs becomes 48 pages:
>
> slabinfo after vm_area_struct changes:
> <name> ... <objsize> <objperslab> <pagesperslab> : ...
> vm_area_struct ... 192 42 2 : ...
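
The 48-pages figure checks out against the slabinfo line above: an
order-1 slab (2 pages = 8192 bytes) holds 8192 / 192 = 42 objects, so
1000 VMAs need ceil(1000 / 42) = 24 slabs, i.e. 24 * 2 = 48 pages.
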
>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>