Message-ID: <9305bdaf-9455-4c26-befb-471466f952ab@lucifer.local>
Date: Wed, 6 Aug 2025 20:07:24 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: "Adrian Huang (Lenovo)" <adrianhuang0701@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, Liam.Howlett@...cle.com,
Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
Feng Tang <feng.79.tang@...il.com>, ahuang12@...ovo.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] mm: Correct misleading comment on mmap_lock field
in mm_struct
On Wed, Aug 06, 2025 at 10:59:06PM +0800, Adrian Huang (Lenovo) wrote:
> The comment previously described the offset of mmap_lock as 0x120 (hex),
> which is misleading. The correct offset is 56 bytes (decimal) from the
> last cache line boundary. Using '0x120' could confuse readers trying to
> understand why the count and owner fields reside in separate cachelines.
>
> This change also removes an unnecessary space for improved formatting.
>
> Signed-off-by: Adrian Huang (Lenovo) <adrianhuang0701@...il.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
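
As an aside for anyone following along: below is a minimal userspace sketch
of the arithmetic the comment describes, assuming 64-byte cachelines and
using a two-word stand-in for the hot fields of struct rw_semaphore, so it
is only illustrative and not the kernel's actual struct layout.

/*
 * Why a lock that starts 56 bytes past a 64-byte cacheline boundary ends
 * up with its first two 8-byte words on different cachelines.
 */
#include <stdio.h>
#include <stddef.h>

#define CACHELINE_SIZE		64
#define LOCK_OFFSET_IN_LINE	56	/* the "56 bytes" from the comment */

struct fake_lock {
	long count;	/* hot word #1, offset 0 within the lock */
	long owner;	/* hot word #2, offset 8 within the lock */
};

int main(void)
{
	size_t count_off = LOCK_OFFSET_IN_LINE + offsetof(struct fake_lock, count);
	size_t owner_off = LOCK_OFFSET_IN_LINE + offsetof(struct fake_lock, owner);

	/* 56 / 64 == 0 and 64 / 64 == 1: the two hot words straddle a boundary. */
	printf("count lands in cacheline %zu\n", count_off / CACHELINE_SIZE);
	printf("owner lands in cacheline %zu\n", owner_off / CACHELINE_SIZE);
	return 0;
}

With the lock starting 56 bytes into a line, 'count' fills the last 8 bytes
of that line and 'owner' the first 8 bytes of the next, which is what the
comment means by the two hot fields sitting in different cachelines and so
reducing bouncing when the lock is contended.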
> ---
> Changes in v2: Per Lorenzo's suggestion, use "56 bytes" instead of 120.
>
> include/linux/mm_types.h | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 1ec273b06691..c9c3d0307f8c 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -1026,10 +1026,10 @@ struct mm_struct {
> * counters
> */
> /*
> - * With some kernel config, the current mmap_lock's offset
> - * inside 'mm_struct' is at 0x120, which is very optimal, as
> + * Typically the current mmap_lock's offset is 56 bytes from
> + * the last cacheline boundary, which is very optimal, as
> * its two hot fields 'count' and 'owner' sit in 2 different
> - * cachelines, and when mmap_lock is highly contended, both
> + * cachelines, and when mmap_lock is highly contended, both
> * of the 2 fields will be accessed frequently, current layout
> * will help to reduce cache bouncing.
> *
> --
> 2.34.1
>