Date:	Thu, 10 Jan 2013 11:55:10 +0400
From:	Glauber Costa <glommer@...allels.com>
To:	Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
CC:	Tang Chen <tangchen@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	<rientjes@...gle.com>, <len.brown@...el.com>,
	<benh@...nel.crashing.org>, <paulus@...ba.org>, <cl@...ux.com>,
	<minchan.kim@...il.com>, <kosaki.motohiro@...fujitsu.com>,
	<isimatu.yasuaki@...fujitsu.com>, <wujianguo@...wei.com>,
	<wency@...fujitsu.com>, <hpa@...or.com>, <linfeng@...fujitsu.com>,
	<laijs@...fujitsu.com>, <mgorman@...e.de>, <yinghai@...nel.org>,
	<x86@...nel.org>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>, <linuxppc-dev@...ts.ozlabs.org>,
	<linux-acpi@...r.kernel.org>, <linux-s390@...r.kernel.org>,
	<linux-sh@...r.kernel.org>, <linux-ia64@...r.kernel.org>,
	<cmetcalf@...era.com>, <sparclinux@...r.kernel.org>
Subject: Re: [PATCH v6 00/15] memory-hotplug: hot-remove physical memory

On 01/10/2013 11:31 AM, Kamezawa Hiroyuki wrote:
> (2013/01/10 16:14), Glauber Costa wrote:
>> On 01/10/2013 06:17 AM, Tang Chen wrote:
>>>>> Note: if the memory provided by the memory device is used by the
>>>>> kernel, it
>>>>> can't be offlined. It is not a bug.
>>>>
>>>> Right.  But how often does this happen in testing?  In other words,
>>>> please provide an overall description of how well memory hot-remove is
>>>> presently operating.  Is it reliable?  What is the success rate in
>>>> real-world situations?
>>>
>>> We test the hot-remove functionality mostly with memory onlined as
>>> movable (movable_online), and memory used by the kernel is not
>>> allowed to be removed.
>>
>> Can you try doing this using cpusets configured to hardwall ?
>> It is my understanding that the object allocators will try hard not to
>> allocate anything outside the walls defined by the cpuset. That means
>> that if you have one hardwalled process per node, your kernel memory
>> will be spread evenly across the machine. With a big enough load,
>> kernel objects should eventually be present in all memory blocks.
>>
> 
> I'm sorry, I couldn't catch your point.
> Do you want to confirm whether cpuset can work well enough instead of
> ZONE_MOVABLE? Or do you want to confirm whether ZONE_MOVABLE will not
> work if it's used together with cpuset?
> 
> 
No, I am not proposing to use cpusets to tackle the problem. I am just
wondering whether you would still have high success rates with
hardwalled cpusets in use. This is just one example of a workload that
spreads kernel memory around quite heavily.

So this is just me trying to understand the limitations of the mechanism.
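To make the scenario concrete, a hardwalled per-node cpuset of the kind described above could be set up roughly like this (a hypothetical sketch assuming the cgroup-v1 cpuset controller and a machine whose node 0 owns CPUs 0-3; the name `node0job` is made up):

```shell
# Mount the cpuset controller (cgroup v1) if it isn't already mounted.
mkdir -p /dev/cpuset
mount -t cgroup -o cpuset cpuset /dev/cpuset

# Create a cpuset confined to NUMA node 0.
mkdir /dev/cpuset/node0job
echo 0   > /dev/cpuset/node0job/cpuset.mems          # memory only from node 0
echo 0-3 > /dev/cpuset/node0job/cpuset.cpus          # CPUs of node 0 (assumed layout)
echo 1   > /dev/cpuset/node0job/cpuset.mem_hardwall  # hardwall kernel allocations too

# Move the current shell into the cpuset; its kernel allocations
# (dentries, inodes, etc.) now stay on node 0.
echo $$ > /devv/cpuset/node0job/tasks
```

With one such cpuset per node and a busy job in each, unmovable kernel objects end up pinned in every node, which is the failure mode being asked about.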

>> Another question I have for you: have you considered calling
>> shrink_slab() to try to deplete the caches and thereby free at least
>> the slab memory in the nodes that can't be offlined? Is that relevant?
>>
> 
> At this stage, we don't plan to call shrink_slab(). We require
> nearly 100% success at offlining memory before removing a DIMM.
> That's my understanding.
> 
Of course, this is indisputable.
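For reference, the offline operation under discussion is driven from userspace through sysfs; a rough sketch (block numbers are machine-specific, so `memory32` here is purely illustrative):

```shell
# Online a memory block as movable so it can later be removed
# (the movable_online testing mentioned above).
echo online_movable > /sys/devices/system/memory/memory32/state

# Attempt to offline it; this fails with EBUSY if the kernel has
# unmovable allocations (e.g. slab objects) inside the block.
echo offline > /sys/devices/system/memory/memory32/state

# Check the result.
cat /sys/devices/system/memory/memory32/state
```

The "nearly 100% success" requirement means the second write must essentially never fail for blocks that were onlined as movable.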

> IMHO, I don't think shrink_slab() can kill all objects in a node even
> when they are cached objects. We need more study before doing that.
> 

Indeed, shrink_slab() can only reclaim cached objects. They are,
however, usually a very big part of kernel memory. I wonder, though,
whether in case of failure it is worth trying at least one shrink pass
before you give up.

It is not very different from what is in memory-failure.c, except that
we could do better and do a more targeted shrinking (support for that
is being worked on).
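As an aside, the closest userspace can get to such a shrink pass today is the global drop_caches knob, which invokes the slab shrinkers system-wide rather than per node, so it is only a blunt approximation of the targeted shrinking discussed here:

```shell
# Flush dirty data first so clean caches become reclaimable.
sync
# 2 = reclaim slab objects (dentries, inodes); 3 = slab + page cache.
echo 2 > /proc/sys/vm/drop_caches
```

Even after this, objects that are in active use (or pinned for other reasons) remain, which is exactly why shrink_slab() alone cannot guarantee an offline succeeds.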

