Message-ID: <CAJuCfpFpPvBLgZNxwHuT-kLsvBABWyK9H6tFCmsTCtVpOxET6Q@mail.gmail.com>
Date: Wed, 16 Oct 2024 19:01:59 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrii Nakryiko <andrii@...nel.org>, linux-trace-kernel@...r.kernel.org,
linux-mm@...ck.org, peterz@...radead.org, oleg@...hat.com,
rostedt@...dmis.org, mhiramat@...nel.org, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, jolsa@...nel.org, paulmck@...nel.org,
willy@...radead.org, akpm@...ux-foundation.org, mjguzik@...il.com,
brauner@...nel.org, jannh@...gle.com, mhocko@...nel.org, vbabka@...e.cz,
hannes@...xchg.org, Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com
Subject: Re: [PATCH v3 tip/perf/core 2/4] mm: switch to 64-bit
mm_lock_seq/vm_lock_seq on 64-bit architectures
On Sun, Oct 13, 2024 at 12:56 AM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> On Thu, Oct 10, 2024 at 01:56:42PM GMT, Andrii Nakryiko wrote:
> > To increase mm->mm_lock_seq robustness, switch it from int to long, so
> > that it's a 64-bit counter on 64-bit systems and we can stop worrying
> > about it wrapping around in just ~4 billion iterations. Same goes for
> > VMA's matching vm_lock_seq, which is derived from mm_lock_seq.
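For context, the change under discussion amounts to roughly the
sketch below (surrounding members of the structs in
include/linux/mm_types.h elided; actual field placement differs):

    /* rough sketch of the int -> long switch; "long" is 64-bit on
     * 64-bit architectures and stays 32-bit on 32-bit ones */
    struct mm_struct {
            ...
    #ifdef CONFIG_PER_VMA_LOCK
            long mm_lock_seq;       /* was: int mm_lock_seq */
    #endif
            ...
    };

    struct vm_area_struct {
            ...
    #ifdef CONFIG_PER_VMA_LOCK
            long vm_lock_seq;       /* was: int vm_lock_seq */
    #endif
            ...
    };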
vm_lock_seq does not need to be long, but for consistency I guess that
makes sense. While at it, can you please change these seq counters to
be unsigned?
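I.e. something along these lines (just a sketch of what I mean, not
the actual patch):

    unsigned long mm_lock_seq;  /* unsigned: wraparound is
                                 * well-defined, no signed-overflow
                                 * undefined behavior */
    unsigned long vm_lock_seq;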
Also, did you check with pahole whether the vm_area_struct layout
change pushes some members into a different cacheline or creates new
gaps?
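Something like this (assuming a vmlinux built with debug info) would
show it:

    $ pahole -C vm_area_struct vmlinux

pahole prints each member's offset and size, marks cacheline
boundaries, and reports any holes/padding the type change introduces.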
> >
> > I didn't use __u64 outright to keep 32-bit architectures unaffected, but
> > if it seems important enough, I have nothing against using __u64.
> >
> > Suggested-by: Jann Horn <jannh@...gle.com>
> > Signed-off-by: Andrii Nakryiko <andrii@...nel.org>
>
> Reviewed-by: Shakeel Butt <shakeel.butt@...ux.dev>