Message-ID: <4554466F.8010602@openvz.org>
Date:	Fri, 10 Nov 2006 12:29:19 +0300
From:	Pavel Emelianov <xemul@...nvz.org>
To:	balbir@...ibm.com
CC:	Pavel Emelianov <xemul@...nvz.org>, Linux MM <linux-mm@...ck.org>,
	dev@...nvz.org, ckrm-tech@...ts.sourceforge.net,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	haveblue@...ibm.com, rohitseth@...gle.com
Subject: Re: [RFC][PATCH 8/8] RSS controller support reclamation

Balbir Singh wrote:

[snip]

>> And what about a hard limit - how would you fail a page fault when
>> the limit is hit? SIGKILL/SEGV is not an option - in that case we
>> should run synchronous reclamation. This is done in the beancounter
>> patches v6 we sent recently.
>>
> 
> I thought about running synchronous reclamation, but did not follow
> that approach; I was not sure whether calling the reclaim routines from
> page fault context is a good thing to do. It's worth trying out, since

Each page fault can potentially trigger reclamation, since it
allocates the required page with the __GFP_IO | __GFP_FS bits
set. Synchronous reclamation in the page fault path is perfectly
normal.
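
To make this concrete, here is a minimal illustrative sketch (not from
any of the patches; the flag values are as in 2.6.x kernels of that
time) of why the fault path already tolerates blocking reclaim: user
pages are allocated with a mask that includes __GFP_IO and __GFP_FS,
so __alloc_pages() is free to enter direct reclaim synchronously:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Sketch only: an anonymous fault allocates roughly like this.
     * GFP_HIGHUSER = __GFP_WAIT | __GFP_IO | __GFP_FS |
     *                __GFP_HARDWALL | __GFP_HIGHMEM,
     * so the allocator may block in direct reclaim here. */
    static struct page *fault_alloc_page(void)
    {
            return alloc_page(GFP_HIGHUSER | __GFP_ZERO);
    }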

[snip]

>> Please correct me if I'm wrong, but does this reclamation work like
>> "run over all the zones' lists searching for pages whose controller
>> is sc->container"?
>>
> 
> Yeah, that's correct. The code can also reclaim memory from all over-the-limit

OK. What if I have a container with a 100-page limit on a 4Gb
(~ a million pages) machine and this group starts reclaiming
its pages? If the group uses its pages heavily, they will be
near the head of the LRU list, and the reclamation code would
have to scan through all (a million) pages before it finds the
proper ones. This is not optimal!
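
One way around the full scan, as a sketch with invented names (this
is not the actual beancounter or RSS controller code), is to link
every charged page on a per-container list, so reclaim walks at most
the pages the container actually owns:

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/mm.h>

    /* Hypothetical per-container LRU; all names are invented for
     * illustration.  Reclaim starts from the tail of cnt->lru, the
     * container's own least recently charged pages, instead of
     * scanning the global LRU. */
    struct rss_container {
            spinlock_t              lru_lock;
            struct list_head        lru;            /* charged pages */
            unsigned long           nr_pages;
            unsigned long           limit;
    };

    struct rss_page {                       /* per-page bookkeeping */
            struct list_head        lru;
            struct page             *page;
    };

    static void rss_charge(struct rss_container *cnt,
                           struct rss_page *rp)
    {
            spin_lock(&cnt->lru_lock);
            list_add(&rp->lru, &cnt->lru);  /* newest at the head */
            cnt->nr_pages++;
            spin_unlock(&cnt->lru_lock);
    }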

> containers (by passing SC_OVERLIMIT_ALL). The idea behind using such a scheme
> is to ensure that the global LRU list is not broken.

isolate_lru_pages() helps with this. As far as I remember, it
was introduced to reduce LRU lock contention and to preserve
the integrity of the LRU lists.

In the beancounter patches it is used to shrink a BC's pages.
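
For reference, a simplified sketch of the isolate_lru_pages() idea
(the real mm/vmscan.c version also takes page references and reports
how many entries it scanned): detach a batch of pages from the shared
LRU while holding the lock, so the caller can examine them afterwards
without blocking other LRU users:

    #include <linux/list.h>
    #include <linux/mm.h>

    /* Sketch: move up to nr_to_scan pages from the tail of src
     * (the least recently used end) onto the private list dst.
     * Must be called with the lru lock held. */
    static unsigned long isolate_batch(unsigned long nr_to_scan,
                                       struct list_head *src,
                                       struct list_head *dst)
    {
            unsigned long nr_taken = 0;

            while (nr_taken < nr_to_scan && !list_empty(src)) {
                    struct page *page = list_entry(src->prev,
                                                   struct page, lru);

                    list_move(&page->lru, dst); /* off the shared LRU */
                    nr_taken++;
            }
            return nr_taken;
    }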
