Date:   Sat, 2 Mar 2019 13:48:20 +0000
From:   Peng Fan <peng.fan@....com>
To:     Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
        Christoph Lameter <cl@...ux.com>
CC:     Vlad Buslov <vladbu@...lanox.com>,
        "kernel-team@...com" <kernel-team@...com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 04/12] percpu: manage chunks based on contig_bits instead
 of free_bytes



> -----Original Message-----
> From: owner-linux-mm@...ck.org [mailto:owner-linux-mm@...ck.org] On
> Behalf Of Dennis Zhou
> Sent: February 28, 2019 10:19
> To: Dennis Zhou <dennis@...nel.org>; Tejun Heo <tj@...nel.org>; Christoph
> Lameter <cl@...ux.com>
> Cc: Vlad Buslov <vladbu@...lanox.com>; kernel-team@...com;
> linux-mm@...ck.org; linux-kernel@...r.kernel.org
> Subject: [PATCH 04/12] percpu: manage chunks based on contig_bits instead
> of free_bytes
> 
> When a chunk becomes fragmented, it can end up having a large number of
> small allocation areas free. The free_bytes sorting of chunks leads to
> unnecessary checking of chunks that cannot satisfy the allocation.
> Switch to contig_bits sorting to prevent scanning chunks that may not be able
> to service the allocation request.
> 
> Signed-off-by: Dennis Zhou <dennis@...nel.org>
> ---
>  mm/percpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/percpu.c b/mm/percpu.c
> index b40112b2fc59..c996bcffbb2a 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -234,7 +234,7 @@ static int pcpu_chunk_slot(const struct pcpu_chunk *chunk)
>  	if (chunk->free_bytes < PCPU_MIN_ALLOC_SIZE || chunk->contig_bits == 0)
>  		return 0;
> 
> -	return pcpu_size_to_slot(chunk->free_bytes);
> +	return pcpu_size_to_slot(chunk->contig_bits * PCPU_MIN_ALLOC_SIZE);
>  }
> 
>  /* set the pointer to a chunk in a page struct */
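To see why the one-line change matters: pcpu_chunk_slot() now keys a chunk by
its largest contiguous free run rather than by its total free bytes, so a badly
fragmented chunk drops into a low slot and is skipped for large requests. A
minimal userspace sketch of the slot arithmetic (not the kernel code; the
constants and the fls_() helper below are illustrative stand-ins):

/*
 * Minimal userspace sketch (not the kernel code) of the slot math.
 * PCPU_MIN_ALLOC_SIZE, PCPU_SLOT_BASE_SHIFT and fls_() are illustrative
 * stand-ins; the point is that a fragmented chunk with plenty of free
 * bytes but only tiny contiguous runs now lands in a low slot.
 */
#include <stdio.h>

#define PCPU_MIN_ALLOC_SIZE	4	/* bytes per allocation-map bit (assumed) */
#define PCPU_SLOT_BASE_SHIFT	5	/* illustrative value */

static int fls_(unsigned int x)		/* find last set bit, 1-based */
{
	int bit = 0;

	while (x) {
		bit++;
		x >>= 1;
	}
	return bit;
}

static int size_to_slot(int size)	/* rough model of pcpu_size_to_slot() */
{
	int slot = fls_(size) - PCPU_SLOT_BASE_SHIFT + 2;

	return slot > 1 ? slot : 1;
}

int main(void)
{
	int free_bytes = 8192;	/* lots of free space in total ... */
	int contig_bits = 4;	/* ... but the largest run is 4 map bits */

	printf("slot by free_bytes:  %d\n", size_to_slot(free_bytes));
	printf("slot by contig_bits: %d\n",
	       size_to_slot(contig_bits * PCPU_MIN_ALLOC_SIZE));
	return 0;
}

With these illustrative numbers the chunk sorts into slot 11 by free_bytes but
only slot 2 by contig_bits, so it no longer gets scanned for requests it cannot
serve.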

Reviewed-by: Peng Fan <peng.fan@....com>

Not relevant to this patch, but another possible percpu optimization would be
to use a per-chunk spin_lock instead of the global pcpu_lock.
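A rough, purely illustrative sketch of the idea (kernel-style fragment, not a
proposed patch; the struct and function names are hypothetical, and real code
would still need a coarser lock for moving chunks between slot lists):

/* Illustrative only -- hypothetical names, not a proposed patch. */
#include <linux/spinlock.h>

struct pcpu_chunk_sketch {
	spinlock_t	lock;		/* protects this chunk's allocation map */
	int		free_bytes;
	int		contig_bits;
	/* ... */
};

static void *pcpu_chunk_alloc_sketch(struct pcpu_chunk_sketch *chunk,
				     size_t size)
{
	void *ptr = NULL;

	spin_lock(&chunk->lock);	/* per-chunk, not the global pcpu_lock */
	/* find a fit in this chunk's map and mark it allocated ... */
	spin_unlock(&chunk->lock);

	return ptr;
}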

Thanks,
Peng.

> --
> 2.17.1
