Message-ID: <20230820130004.knx42tyeshps4vdg@f>
Date: Sun, 20 Aug 2023 15:00:04 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Matthew Wilcox <willy@...radead.org>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: remove unintentional voluntary preemption in
get_mmap_lock_carefully
On Sun, Aug 20, 2023 at 02:47:41PM +0200, Linus Torvalds wrote:
> On Sun, 20 Aug 2023 at 14:41, Mateusz Guzik <mjguzik@...il.com> wrote:
> > My first patch looked like this:
>
> Well, that's disgusting and strange.
>
> > - might_sleep();
> > +#if defined(CONFIG_DEBUG_ATOMIC_SLEEP)
> > + __might_sleep(__FILE__, __LINE__);
> > +#endif
>
> Why would you have that strange #ifdef? __might_sleep() just goes away
> without that debug option anyway.
>
> But without that odd ifdef, I think it's fine.
>
Heh, I wrote the patch last night and I could swear it failed to compile
without the ifdef.
That said, I agree it looks more than disgusting, and I'm happy to confirm
it builds both ways. Updated patch below:
mm: remove unintentional voluntary preemption in get_mmap_lock_carefully
Should the trylock succeed (and thus blocking was avoided), the routine
wants to ensure blocking was still legal to do. However, might_sleep()
ends up calling __cond_resched() injecting a voluntary preemption point
with the freshly acquired lock.
Use __might_sleep() instead to only get the asserts.
Found while checking off-CPU time during kernel build (like so:
"offcputime-bpfcc -Ku"), sample backtrace:
finish_task_switch.isra.0
__schedule
__cond_resched
lock_mm_and_find_vma
do_user_addr_fault
exc_page_fault
asm_exc_page_fault
- sh (4502)
10
Signed-off-by: Mateusz Guzik <mjguzik@...il.com>
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 1ec1ef3418bf..d82316a8a48b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5259,7 +5259,7 @@ static inline bool get_mmap_lock_carefully(struct mm_struct *mm, struct pt_regs
{
/* Even if this succeeds, make it clear we *might* have slept */
if (likely(mmap_read_trylock(mm))) {
- might_sleep();
+ __might_sleep(__FILE__, __LINE__);
return true;
}
--
2.39.2