Message-ID: <ZOH62Zi/yao/oC8y@casper.infradead.org>
Date: Sun, 20 Aug 2023 12:36:57 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: remove unintentional voluntary preemption in
get_mmap_lock_carefully
On Sun, Aug 20, 2023 at 12:43:03PM +0200, Mateusz Guzik wrote:
> Should the trylock succeed (and thus blocking was avoided), the routine
> wants to ensure blocking would still have been legal. However, the
> method used ends up calling __cond_resched, injecting a voluntary
> preemption point with the freshly acquired lock held.
>
> One can hack around it using __might_sleep instead of mere might_sleep,
> but since threads keep going off CPU here, I figured it is better to
> accommodate it.
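
(For context, the fast path in question looks roughly like this -- a
simplified sketch of get_mmap_lock_carefully() from mm/memory.c as of
c2508ec5a58d, paraphrased rather than quoted verbatim:)

	static inline bool get_mmap_lock_carefully(struct mm_struct *mm,
						   struct pt_regs *regs)
	{
		if (likely(mmap_read_trylock(mm))) {
			/*
			 * might_sleep() embeds a __cond_resched() call,
			 * so we may voluntarily reschedule right here
			 * with mmap_lock freshly held -- the behaviour
			 * the patch wants to remove.
			 */
			might_sleep();
			return true;
		}

		/* Trylock failed: only block if blocking is legal. */
		if (regs && !user_mode(regs)) {
			unsigned long ip = instruction_pointer(regs);
			if (!search_exception_tables(ip))
				return false;
		}

		return !mmap_read_lock_killable(mm);
	}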
Except now we search the exception tables every time we call it.
The now-deleted comment (from c2508ec5a58d) says that search is
expensive:
- /*
- * Kernel-mode access to the user address space should only occur
- * on well-defined single instructions listed in the exception
- * tables. But, an erroneous kernel fault occurring outside one of
- * those areas which also holds mmap_lock might deadlock attempting
- * to validate the fault against the address space.
- *
- * Only do the expensive exception table search when we might be at
- * risk of a deadlock. This happens if we
- * 1. Failed to acquire mmap_lock, and
- * 2. The access did not originate in userspace.
- */
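
(As I read the patch, the lookup now runs for every kernel-mode fault,
whether or not the trylock succeeds -- schematically, and as my
paraphrase rather than the actual diff:)

	/*
	 * Check that blocking would have been legal, without the
	 * __cond_resched() embedded in might_sleep().
	 */
	if (regs && !user_mode(regs) &&
	    !search_exception_tables(instruction_pointer(regs)))
		return false;

	if (likely(mmap_read_trylock(mm)))
		return true;

	return !mmap_read_lock_killable(mm);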
Now, this doesn't mean we're doing it on every page fault. We skip
all of this if we're able to handle the fault under the VMA lock,
so the effect is probably much smaller than it was. But I'm surprised
you didn't include any data quantifying the effect of this change!
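
(For reference, the arch fault handlers now attempt the fault under the
per-VMA lock first and only fall back to the path that reaches
get_mmap_lock_carefully() -- a simplified sketch of the x86 pattern,
eliding the retry and accounting details:)

	vma = lock_vma_under_rcu(mm, address);
	if (vma) {
		fault = handle_mm_fault(vma, address,
					flags | FAULT_FLAG_VMA_LOCK, regs);
		if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
			vma_end_read(vma);
		if (!(fault & VM_FAULT_RETRY))
			goto done;
		/* Fall back to taking mmap_lock. */
	}
	vma = lock_mm_and_find_vma(mm, address, regs);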