Message-ID: <CAGudoHEFoM0BaA+CVfuVgR5MGE9mmsWhu-gMaE3-iRcwBvvcDQ@mail.gmail.com>
Date: Thu, 30 Jan 2025 12:01:05 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: brauner@...nel.org, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, "Eric W. Biederman" <ebiederm@...ssion.com>
Subject: Re: [PATCH v2] exit: perform randomness and pid work without tasklist_lock
On Tue, Jan 28, 2025 at 8:22 PM Oleg Nesterov <oleg@...hat.com> wrote:
>
> On 01/28, Mateusz Guzik wrote:
> >
> > On Tue, Jan 28, 2025 at 7:30 PM Oleg Nesterov <oleg@...hat.com> wrote:
> > >
> > no problem, will send a v3 provided there are no issues reported
> > concerning the pid stuff
>
> Great, thanks.
>
> BTW, I didn't look at the pid stuff yet, I _feel_ that this can be simplified
> too, but I am already sleeping, most probably I am wrong.
>
I looked at the pid code beyond the issue at hand.
The lock protecting it, pidmap_lock, uses irq disablement to guard
against tasklist_lock users coming from an interrupt.
AFAICS this can be legally arranged so that pidmap_lock is *never*
taken while tasklist_lock is held.
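
To spell out the ordering which forces the irq trip, here is the
deadlock the irq disablement guards against, as I understand it (a
hypothetical trace assuming pidmap_lock did *not* disable irqs; the
function names are the real call sites):

  CPU0 (alloc_pid)                  CPU1 (release_task)
  spin_lock(&pidmap_lock)           write_lock_irq(&tasklist_lock)
  <interrupt>                       free_pid()
    read_lock(&tasklist_lock)         spin_lock(&pidmap_lock)

CPU0's interrupt handler spins on the write-held tasklist_lock while
CPU1 spins on the pidmap_lock held by CPU0, so neither side can make
progress. Removing the nesting removes the need to disable irqs.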
So far the problematic ordering only stems from free_pid() calls (not
only on exit), all of which can be moved out.
This will reduce total tasklist_lock hold time *and* whack the irq
trip, speeding things up single-threaded.
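
A minimal sketch of the shape I mean (not actual patch code; the
to_free[] staging array is made up for illustration, while free_pid()
and the locks are the real ones):

void release_task(struct task_struct *p)
{
	struct pid *to_free[PIDTYPE_MAX];
	int nr = 0;

	write_lock_irq(&tasklist_lock);
	/*
	 * ... unhashing as before, except detach_pid() and friends
	 * stash the struct pid pointers into to_free[] instead of
	 * calling free_pid() under the lock ...
	 */
	write_unlock_irq(&tasklist_lock);

	/* pidmap_lock is never nested under tasklist_lock anymore */
	while (nr > 0)
		free_pid(to_free[--nr]);
}

With that in place, the spin_lock_irqsave() in free_pid() could be
demoted to a plain spin_lock().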
I'll hack it up when I get around to it, maybe this week.
Btw, with the current patch, when rolling with highly parallel thread
creation/destruction, it is pidmap_lock which becomes the main
bottleneck instead of tasklist_lock.
--
Mateusz Guzik <mjguzik gmail.com>