Date:	Mon, 14 Jun 2010 16:01:58 +0300
From:	Avi Kivity <avi@...hat.com>
To:	balbir@...ux.vnet.ibm.com
CC:	Dave Hansen <dave@...ux.vnet.ibm.com>, kvm <kvm@...r.kernel.org>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC/T/D][PATCH 2/2] Linux/Guest cooperative unmapped page cache
 control

On 06/14/2010 03:50 PM, Balbir Singh wrote:
>
>>
>>> let me try to reason a bit. First let me explain the
>>> problem
>>>
>>> Memory is a precious resource in a consolidated environment.
>>> We don't want to waste memory via page cache duplication
>>> (cache=writethrough and cache=writeback mode).
>>>
>>> Now here is what we are trying to do
>>>
>>> 1. A slab page will not be freed until the entire page is free (all
>>> of its objects have been kfree'd, so to speak). Normal reclaim will
>>> eventually free this page, but how soon depends on how frequently we
>>> scan the LRU list and when this page got added.
>>> 2. In the case of page cache (specifically unmapped page cache), there
>>> is duplication already, so why not go after unmapped page cache when
>>> the system is under memory pressure?
>>>
>>> In the case of 1, we don't force a dentry to be freed; rather, we
>>> reclaim an already-freed page in the slab cache ahead of forcing
>>> reclaim of mapped pages.
>>>        
>> Sounds like this should be done unconditionally, then.  An empty
>> slab page is worth less than an unmapped pagecache page at all
>> times, no?
>>
>>      
> In a consolidated environment, even at the cost of some CPU to run
> shrinkers, I think potentially yes.
>    

I don't understand.  If you're running the shrinkers then you're 
evicting live entries, which could cost you an I/O each.  That's 
expensive, consolidated or not.

If you're not running the shrinkers, why does it matter if you're 
consolidated or not?  Drop that page unconditionally.
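
(For readers not familiar with the interface in question: a shrinker is
a callback that a cache registers so the reclaim path can ask it to
discard live objects.  The sketch below uses the ~2.6.34-era API; the
callback signature changes in later kernels, and the cache helpers are
invented purely for illustration.)

#include <linux/mm.h>

/* Invented example cache, hooked into reclaim via a shrinker. */
static int my_cache_shrink(int nr_to_scan, gfp_t gfp_mask)
{
	if (nr_to_scan)
		prune_my_cache(nr_to_scan);	/* evicts *live* entries; each
						 * may cost an I/O to rebuild */
	return my_cache_count();		/* freeable objects remaining */
}

static struct shrinker my_cache_shrinker = {
	.shrink	= my_cache_shrink,
	.seeks	= DEFAULT_SEEKS,
};
/* register_shrinker(&my_cache_shrinker) at init,
   unregister_shrinker(&my_cache_shrinker) at teardown. */

An already-empty slab page, by contrast, holds no live objects, so
freeing it loses nothing and can never trigger an I/O later.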

>>> Does the problem statement make sense? If so, do you agree with 1 and
>>> 2? Is there major concern about subverting regular reclaim? Does
>>> subverting it make sense in the duplicated scenario?
>>>
>>>        
>> In the case of 2, how do you know there is duplication?  You know
>> the guest caches the page, but you have no information about the
>> host.  Since the page is cached in the guest, the host doesn't see
>> it referenced, and is likely to drop it.
>>      
> True, that is why the first patch is controlled via a boot parameter
> that the host can pass. For the second patch, I think we'll need
> something like a balloon <size> <cache?> command, with the cache
> argument being optional.
>    
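
(Aside: a guest-side boot parameter along the lines mentioned above
might look roughly like the sketch below.  The parameter name and the
flag are assumptions made for illustration; this is not the patch's
code.)

#include <linux/init.h>
#include <linux/cache.h>

static int unmapped_cache_control __read_mostly;

static int __init setup_unmapped_cache_control(char *str)
{
	unmapped_cache_control = 1;
	return 1;	/* option handled */
}
__setup("unmapped_page_cache_control", setup_unmapped_cache_control);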

Whether a page is duplicated on the host is a per-page property; it
cannot be a boot parameter.

If we drop unmapped pagecache pages, we need to be sure they can be 
backed by the host, and that depends on the amount of sharing.
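
(Concretely, the pages this thread calls "unmapped page cache" are
file-backed, clean pages that no process has mapped, so the guest can
drop them without doing any I/O.  A check along these lines, sketched
here with an invented helper name, would identify them:)

#include <linux/mm.h>
#include <linux/mm_inline.h>

static inline bool guest_droppable_cache_page(struct page *page)
{
	/* file backed, mapped by no process, clean, and not
	 * currently under writeback */
	return page_is_file_cache(page) &&
	       !page_mapped(page) &&
	       !PageDirty(page) &&
	       !PageWriteback(page);
}

Whether re-reading such a page later is cheap depends on exactly the
host-side backing question above.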

Overall, I don't see how a user can tune this.  If I were a guest admin, 
I'd play it safe by not assuming the host will back me, and disabling 
the feature.

To get something like this to work, we need to reward cooperating guests 
somehow.

>> If there is no duplication, then you may have dropped a
>> recently-used page and will likely cause a major fault soon.
>>      
> Yes, agreed.
>    

So how do we deal with this?



-- 
error compiling committee.c: too many arguments to function

