Message-ID: <1c2b999d-4924-25c5-cb8a-2be951c8c2a9@loongson.cn>
Date:   Thu, 17 Mar 2022 10:21:54 +0800
From:   wangjianxing <wangjianxing@...ngson.cn>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     will@...nel.org, aneesh.kumar@...ux.ibm.com,
        akpm@...ux-foundation.org, npiggin@...il.com,
        linux-arch@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] mm/mmu_gather: limit tlb batch count and add schedule
 point in tlb_batch_pages_flush

On 03/16/2022 04:57 PM, Peter Zijlstra wrote:
> This seems like a really complicated way of writing something like the
> below...
>
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index afb7185ffdc4..b382e86c1b47 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -47,8 +47,17 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>   	struct mmu_gather_batch *batch;
>   
>   	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
> -		free_pages_and_swap_cache(batch->pages, batch->nr);
> -		batch->nr = 0;
> +		struct page **pages = batch->pages;
> +
> +		do {
> +			unsigned int nr = min(512U, batch->nr);
> +
> +			free_pages_and_swap_cache(pages, nr);
> +			pages += nr;
> +			batch->nr -= nr;
> +
> +			cond_resched();
> +		} while (batch->nr);
>   	}
>   	tlb->active = &tlb->local;
>   }
Yeah, it looks nicer.

I will resubmit the patch.
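
For reference, the whole function with that change applied would look roughly like the below. This is only a sketch of my reading of your diff; the struct page ** local and the 512U literal are assumptions (to keep min() happy with batch->nr being unsigned), not something taken from a posted patch:

/*
 * Sketch: tlb_batch_pages_flush() with the chunked free applied.
 * Free at most 512 pages per pass, then give the scheduler a
 * chance to run before continuing with the rest of the batch.
 */
static void tlb_batch_pages_flush(struct mmu_gather *tlb)
{
	struct mmu_gather_batch *batch;

	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
		struct page **pages = batch->pages;

		do {
			unsigned int nr = min(512U, batch->nr);

			free_pages_and_swap_cache(pages, nr);
			pages += nr;
			batch->nr -= nr;

			cond_resched();
		} while (batch->nr);
	}
	tlb->active = &tlb->local;
}

Splitting the free into 512-page chunks bounds the time spent in each free_pages_and_swap_cache() call, and the cond_resched() between chunks adds the schedule point so very large batches no longer stall the CPU for long stretches.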
