lists.openwall.net - Open Source and information security mailing list archives
Date: Wed, 22 Sep 2021 16:23:41 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Eric Dumazet <eric.dumazet@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
    linux-kernel <linux-kernel@...r.kernel.org>,
    linux-mm <linux-mm@...ck.org>,
    Eric Dumazet <edumazet@...gle.com>,
    Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH] mm: do not acquire zone lock in is_free_buddy_page()

On Wed, 22 Sep 2021, Eric Dumazet wrote:

> From: Eric Dumazet <edumazet@...gle.com>
>
> Grabbing zone lock in is_free_buddy_page() gives a wrong sense of safety,
> and has potential performance implications when zone is experiencing
> lock contention.
>
> In any case, if a caller needs a stable result, it should grab zone
> lock before calling this function.
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Hugh Dickins <hughd@...gle.com>

Yes indeed, and you have already explained it well above: thanks.

Acked-by: Hugh Dickins <hughd@...gle.com>

> ---
>  mm/page_alloc.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e115e21524739341d409b28379942241ed403060..cd8a72372b047e55c4cde80fe6b7a428d7721027 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -9354,21 +9354,21 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
>  	}
>  #endif
>
> +/*
> + * This function returns a stable result only if called under zone lock.
> + */
>  bool is_free_buddy_page(struct page *page)
>  {
> -	struct zone *zone = page_zone(page);
>  	unsigned long pfn = page_to_pfn(page);
> -	unsigned long flags;
>  	unsigned int order;
>
> -	spin_lock_irqsave(&zone->lock, flags);
>  	for (order = 0; order < MAX_ORDER; order++) {
>  		struct page *page_head = page - (pfn & ((1 << order) - 1));
>
> -		if (PageBuddy(page_head) && buddy_order(page_head) >= order)
> +		if (PageBuddy(page_head) &&
> +		    buddy_order_unsafe(page_head) >= order)
>  			break;
>  	}
> -	spin_unlock_irqrestore(&zone->lock, flags);
>
>  	return order < MAX_ORDER;
>  }
> --
> 2.33.0.464.g1972c5931b-goog