Message-ID: <4AB813F3.8060102@kernel.org>
Date:	Tue, 22 Sep 2009 09:01:55 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Mel Gorman <mel@....ul.ie>
CC:	Nick Piggin <npiggin@...e.de>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Christoph Lameter <cl@...ux-foundation.org>,
	heiko.carstens@...ibm.com, sachinp@...ibm.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH 1/3] powerpc: Allocate per-cpu areas for node IDs for
 SLQB to use as per-node areas

Hello,

Mel Gorman wrote:
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 1f68160..a5f52d4 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -588,6 +588,26 @@ void __init setup_per_cpu_areas(void)
>  		paca[i].data_offset = ptr - __per_cpu_start;
>  		memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
>  	}
> +#ifdef CONFIG_SLQB
> +	/*
> +	 * SLQB abuses DEFINE_PER_CPU to set up a per-node area. This trick
> +	 * assumes that every node ID will have a CPU of that ID to match.
> +	 * On systems with memoryless nodes, this may not hold true. Hence,
> +	 * we take a second pass, initialising a "per-cpu" area for node IDs
> +	 * that SLQB can use.
> +	 */
> +	for_each_node_state(i, N_NORMAL_MEMORY) {
> +
> +		/* Skip node IDs for which a valid CPU ID already exists */
> +		if (paca[i].data_offset)
> +			continue;
> +
> +		ptr = alloc_bootmem_pages_node(NODE_DATA(cpu_to_node(i)), size);
> +
> +		paca[i].data_offset = ptr - __per_cpu_start;
> +		memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
> +	}
> +#endif /* CONFIG_SLQB */
>  }
>  #endif

Eh... I don't know.  This seems too hacky to me.  Why not just
allocate a pointer array of MAX_NUMNODES entries and allocate the
per-node memory there?  It will be slightly more expensive, but I
doubt it will be noticeable.  The only extra overhead is the cacheline
footprint of the extra array.
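
Roughly something like the sketch below, just to illustrate the idea;
slqb_node_data and alloc_slqb_node_areas are made-up names, not
anything that exists in the SLQB tree:

static void *slqb_node_data[MAX_NUMNODES] __read_mostly;

static void __init alloc_slqb_node_areas(size_t size)
{
	int node;

	/*
	 * One chunk per node that actually has memory, kept in an
	 * explicit array instead of being threaded through the
	 * per-cpu offsets in the paca.
	 */
	for_each_node_state(node, N_NORMAL_MEMORY)
		slqb_node_data[node] =
			alloc_bootmem_pages_node(NODE_DATA(node), size);
}

/* Lookup then costs one extra dependent load through the array. */
static inline void *slqb_per_node(int node)
{
	return slqb_node_data[node];
}

The array itself is tiny and effectively read-only after boot, so the
extra cacheline is about all it costs.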

Thanks.

-- 
tejun
