Message-ID: <20240327143548.744070-1-david@redhat.com>
Date: Wed, 27 Mar 2024 15:35:48 +0100
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>
Subject: [PATCH v1] mm: optimize CONFIG_PER_VMA_LOCK member placement in vm_area_struct

Currently, we end up wasting some memory in each vm_area_struct. Pahole
states that:
[...]
int vm_lock_seq; /* 40 4 */
/* XXX 4 bytes hole, try to pack */
struct vma_lock * vm_lock; /* 48 8 */
bool detached; /* 56 1 */
/* XXX 7 bytes hole, try to pack */
[...]

Let's reduce the holes and memory wastage by moving the bool:
[...]
bool detached; /* 40 1 */
/* XXX 3 bytes hole, try to pack */
int vm_lock_seq; /* 44 4 */
struct vma_lock * vm_lock; /* 48 8 */
[...]
Effectively shrinking the vm_area_struct with CONFIG_PER_VMA_LOCK by
8 bytes.

Likely, we could place "detached" in the lowest bit of vm_lock, but at
least on 64-bit that won't really make a difference, so keep it simple.
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>
Signed-off-by: David Hildenbrand <david@...hat.com>
---
include/linux/mm_types.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4ae4684d1add..f56739dece7a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -671,6 +671,9 @@ struct vm_area_struct {
};
#ifdef CONFIG_PER_VMA_LOCK
+ /* Flag to indicate areas detached from the mm->mm_mt tree */
+ bool detached;
+
/*
* Can only be written (using WRITE_ONCE()) while holding both:
* - mmap_lock (in write mode)
@@ -687,9 +690,6 @@ struct vm_area_struct {
*/
int vm_lock_seq;
struct vma_lock *vm_lock;
-
- /* Flag to indicate areas detached from the mm->mm_mt tree */
- bool detached;
#endif
/*
--
2.43.2