Message-ID: <ZNu5uHhYI4QxR4au@google.com>
Date:   Tue, 15 Aug 2023 10:45:28 -0700
From:   Chris Li <chrisl@...nel.org>
To:     Kemeng Shi <shikemeng@...weicloud.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, baolin.wang@...ux.alibaba.com,
        mgorman@...hsingularity.net, david@...hat.com, willy@...radead.org
Subject: Re: [PATCH 1/2] mm/page_alloc: remove track of active PCP lists
 range in bulk free

Hi Kemeng,

Can you confirm this patch has no intended functional change?

I have had a patch sitting in my tree for a while that is related
to this count vs pcp->count issue. A BPF function hook can
potentially change pcp->count and make count go out of sync with
pcp->count, which causes an infinite loop.
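
For the record, here is a tiny userspace model of the hang I mean.
It is only a sketch, not the kernel code: NR_PCP_LISTS and the list
states are invented, and just the loop shape matches
free_pcppages_bulk(). The outer loop trusts count; if something
empties the lists without updating count, the inner do/while never
finds a non-empty list and never exits.

#include <stdbool.h>
#include <stdio.h>

#define NR_PCP_LISTS 4

int main(void)
{
	/* All lists already drained behind the caller's back. */
	bool empty[NR_PCP_LISTS] = { true, true, true, true };
	int count = 3;			/* stale copy of pcp->count */
	int pindex = -1;
	long spins = 0;

	while (count > 0) {
		do {			/* round-robin search */
			if (++pindex > NR_PCP_LISTS - 1)
				pindex = 0;
			if (++spins > 1000) {	/* cap so the demo halts */
				printf("stuck: count=%d, all lists empty\n",
				       count);
				return 1;
			}
		} while (empty[pindex]);
		count--;		/* never reached with a stale count */
	}
	return 0;
}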

Maybe I can send mine out alongside yours for discussion?
I don't mind having my patch combined with yours.

Your change looks fine to me. There is more that can be done
on the cleanup.

Chris

On Wed, Aug 09, 2023 at 06:07:53PM +0800, Kemeng Shi wrote:
> After commit fd56eef258a17 ("mm/page_alloc: simplify how many pages are
> selected per pcp list during bulk free"), we drain all pages in the
> selected pcp list, and we ensure that the passed count is < pcp->count.
> The search will therefore finish before wrapping around, so tracking
> the range of active PCP lists, which was intended for the wrap-around
> case, is no longer needed.
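
To illustrate the argument, a throwaway userspace model (list sizes
invented, not the kernel code): as long as count does not exceed the
total pages across the lists, a plain wrap-around scan always finds a
non-empty list, so the min_pindex/max_pindex range tracking is not
needed for termination.

#include <stdio.h>

#define NR_PCP_LISTS 4

int main(void)
{
	int pages[NR_PCP_LISTS] = { 0, 2, 0, 3 };	/* pcp->count == 5 */
	int count = 4;		/* caller ensures count < pcp->count */
	int pindex = -1;

	while (count > 0) {
		/* Simplified round-robin selection from the patch. */
		do {
			if (++pindex > NR_PCP_LISTS - 1)
				pindex = 0;
		} while (pages[pindex] == 0);

		/* Drain the whole selected list, as bulk free now does. */
		count -= pages[pindex];
		pages[pindex] = 0;
	}
	printf("done, count=%d\n", count);
	return 0;
}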

> 
> Signed-off-by: Kemeng Shi <shikemeng@...weicloud.com>
> ---
>  mm/page_alloc.c | 15 +++------------
>  1 file changed, 3 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 96b7c1a7d1f2..1ddcb2707d05 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1207,8 +1207,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  					int pindex)
>  {
>  	unsigned long flags;
> -	int min_pindex = 0;
> -	int max_pindex = NR_PCP_LISTS - 1;
>  	unsigned int order;
>  	bool isolated_pageblocks;
>  	struct page *page;
> @@ -1231,17 +1229,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  
>  		/* Remove pages from lists in a round-robin fashion. */
>  		do {
> -			if (++pindex > max_pindex)
> -				pindex = min_pindex;
> +			if (++pindex > NR_PCP_LISTS - 1)
> +				pindex = 0;
>  			list = &pcp->lists[pindex];
> -			if (!list_empty(list))
> -				break;
> -
> -			if (pindex == max_pindex)
> -				max_pindex--;
> -			if (pindex == min_pindex)
> -				min_pindex++;
> -		} while (1);
> +		} while (list_empty(list));
>  
>  		order = pindex_to_order(pindex);
>  		nr_pages = 1 << order;
> -- 
> 2.30.0
> 
