Message-ID: <20241219171344.GA26279@noisy.programming.kicks-ass.net>
Date: Thu, 19 Dec 2024 18:13:44 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: André Almeida <andrealmeid@...lia.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Darren Hart <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>, Arnd Bergmann <arnd@...db.de>,
sonicadvance1@...il.com, linux-kernel@...r.kernel.org,
kernel-dev@...lia.com, linux-api@...r.kernel.org,
Nathan Chancellor <nathan@...nel.org>,
Vinicius Peixoto <vpeixoto@...amp.dev>, fweimer@...hat.com,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH v3 0/3] futex: Create set_robust_list2
On Thu, Dec 19, 2024 at 11:28:27AM -0300, André Almeida wrote:
> Em 17/12/2024 17:31, Peter Zijlstra escreveu:
> > On Tue, Dec 17, 2024 at 02:49:55PM -0300, André Almeida wrote:
> > > This patch adds a new robust_list() syscall. The current syscall
> > > can't be expanded to cover the following use case, so a new one is
> > > needed. This new syscall allows users to set multiple robust lists per
> > > process and to have either 32bit or 64bit pointers in the list.
> >
> > Last time a whole list of short comings of the current robust scheme
> > were laid bare. I feel we should address all that if we're going to
> > create a new scheme.
> >
>
> Are you talking about [1] or is there something else?
>
> [1] https://lore.kernel.org/lkml/87jzdjxjj8.fsf@oldenburg3.str.redhat.com/
Correct, that thread.
So at the very least I think we should enforce natural alignment of the
robust entry -- this ensures the whole object is always on a single
page. This should then allow emulators (like QEMU) to convert things
back to native address space.
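Roughly something like the below in the list walk -- just a minimal
sketch, with a made-up robust_entry_aligned() helper and an is_64bit
flag standing in for however the new syscall ends up encoding the entry
width; not actual futex code:

	/*
	 * Sketch: reject robust list entries that are not naturally
	 * aligned, so an entry can never straddle a page boundary and
	 * an emulator can translate it via a single page.
	 */
	static inline bool robust_entry_aligned(unsigned long uaddr, bool is_64bit)
	{
		size_t size = is_64bit ? sizeof(u64) : sizeof(u32);

		return IS_ALIGNED(uaddr, size);
	}

	...
	if (!robust_entry_aligned(uentry, is_64bit))
		return -EINVAL;	/* or simply stop walking the list */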
Additionally, I think we can replace the LIST_LIMIT -- whose purpose is
to mitigate the danger of loops -- with the kernel simply destroying the
list while it iterates it. That way it cannot be caught in a loop, no
matter what userspace does.
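Very roughly, the exit-time walk could look something like this --
untested sketch, handle_entry() is a placeholder for the existing death
handling:

	/*
	 * Unlink each entry (point its ->next back at the list head)
	 * before handling it. Even if userspace builds a cycle, every
	 * revisited node now points at the head, so the walk terminates
	 * after at most one extra step per node -- no LIST_LIMIT needed.
	 */
	while (entry != &head->list) {
		struct robust_list __user *next;

		if (get_user(next, &entry->next))
			break;

		if (put_user(&head->list, &entry->next))
			break;

		handle_entry(entry);
		entry = next;
	}

A node inside a cycle can still get visited twice before the walk hits
the rewritten pointer, so the death handling would need to tolerate
that.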
That then leaves the whole munmap() race -- and I'm not really sure what
to do about that one. I did outline two options, but they're both quite
terrible.
The mmap()/munmap() code would need to serialize against list_op_pending
without incurring undue overhead in the common case.
Ideally we'd make the whole thing use RSEQ such that list_op_pending
becomes atomic vs preemption -- but I've not thought that through.