Date:	Sun, 20 Jun 2010 11:11:06 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>
CC:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC][PATCH 9/9] make kvm mmu shrinker more aggressive

On 06/18/2010 06:49 PM, Dave Hansen wrote:
> On Wed, 2010-06-16 at 08:25 -0700, Dave Hansen wrote:
>    
>> On Wed, 2010-06-16 at 12:24 +0300, Avi Kivity wrote:
>>      
>>> On 06/15/2010 04:55 PM, Dave Hansen wrote:
>>>        
>>>> In a previous patch, we removed the 'nr_to_scan' tracking.
>>>> It was not being used to track the number of objects
>>>> scanned, so we stopped using it entirely.  Here, we
>>>> start using it again.
>>>>
>>>> The theory here is simple: if we already have the refcount
>>>> and the kvm->mmu_lock, then we should do as much work as
>>>> possible under the lock.  The downside is that we're less
>>>> fair about the KVM instances from which we reclaim.  Each
>>>> call to mmu_shrink() will tend to "pick on" one instance,
>>>> after which it gets moved to the end of the list and left
>>>> alone for a while.
>>>>
>>>>          
>>> That also increases the latency hit, as well as a potential fault storm,
>>> on that instance.  Spreading out is less efficient, but smoother.
>>>        
>> This is probably something that we need to go back and actually measure.
>> My suspicion is that, when memory fills up and this shrinker is getting
>> called a lot, it will be naturally fair.  That list gets shuffled around
>> enough, and mmu_shrink() called often enough that no VMs get picked on
>> too unfairly.
>>
>> I'll go back and see if I can quantify this a bit, though.
>>      
> The shrink _query_ (mmu_shrink() with nr_to_scan=0) code is called
> really, really often.  Like 5,000-10,000 times a second during lots of
> VM pressure.  But it's almost never called on to actually shrink
> anything.
>
> Over the 20 minutes or so that I tested, I saw about 700k calls to
> mmu_shrink().  But only 6 (yes, six) of them had a non-zero
> nr_to_scan.  I'm not sure whether this is because of the .seeks argument
> to the shrinker or what, but the slab code stays far, far away from
> making mmu_shrink() do much real work.
>    

Certainly seems so from vmscan.c.
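
The relevant logic, slightly simplified (a sketch from the vmscan.c of
this era; scanned and lru_pages are shrink_slab()'s arguments, the
bookkeeping is elided):

	list_for_each_entry(shrinker, &shrinker_list, list) {
		unsigned long long delta;
		/* nr_to_scan == 0 is a pure query: report object count */
		unsigned long max_pass = (*shrinker->shrink)(0, gfp_mask);

		/* pending work, scaled down by .seeks and by how small
		   this cache is relative to the page LRUs */
		delta = (4 * scanned) / shrinker->seeks;
		delta *= max_pass;
		do_div(delta, lru_pages + 1);
		shrinker->nr += delta;

		/* real shrinking happens only once SHRINK_BATCH (128)
		   units of work have accumulated */
		while (shrinker->nr >= SHRINK_BATCH) {
			(*shrinker->shrink)(SHRINK_BATCH, gfp_mask);
			shrinker->nr -= SHRINK_BATCH;
		}
	}

With mmu_shrinker registering .seeks = 10 * DEFAULT_SEEKS and the mmu
page count tiny next to lru_pages, delta almost never accumulates to
SHRINK_BATCH, which fits your 700k queries against six real calls.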

> That changes a few things.  I bet all the contention we were seeing was
> just from nr_to_scan=0 calls and not from actual shrink operations.
> Perhaps we should just stop this set after patch 4.
>    

At the very least, we should re-measure things.
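
Something as simple as two counters in mmu_shrink() would separate the
query path from the work path when we correlate with lock contention
(a sketch; the counter names are made up):

	static atomic_t mmu_shrink_queries = ATOMIC_INIT(0); /* nr_to_scan == 0 */
	static atomic_t mmu_shrink_work = ATOMIC_INIT(0);    /* nr_to_scan > 0 */

	static int mmu_shrink(int nr_to_scan, gfp_t gfp_mask)
	{
		if (nr_to_scan)
			atomic_inc(&mmu_shrink_work);
		else
			atomic_inc(&mmu_shrink_queries);

		/* ... existing reclaim logic unchanged ... */
	}

Comparing those against /proc/lock_stat for mmu_lock (CONFIG_LOCK_STAT)
before and after patch 4 should tell us where the contention really
comes from.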

Even afterwards, we might reduce .seeks in return for making the 
shrinker cleverer and eliminating the cap on mmu pages.  But I'm afraid 
the interface between vmscan and the shrinker is too simplistic; 
sometimes we can trim pages without much cost (unreferenced pages), but 
some pages are really critical for performance.  To see real 
improvement, we might need our own scanner.
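
To make "our own scanner" concrete, I am thinking of something along
these lines (a rough sketch; sp_is_cold() is a made-up predicate
standing in for an accessed-bit test of the sptes in a shadow page,
the other names are the existing ones; caller holds kvm->mmu_lock):

	static int kvm_mmu_shrink_cold(struct kvm *kvm, int nr_to_free)
	{
		int freed = 0, scanned = 0;

		while (freed < nr_to_free && scanned++ < 4 * nr_to_free &&
		       !list_empty(&kvm->arch.active_mmu_pages)) {
			struct kvm_mmu_page *sp;

			/* oldest shadow pages sit at the tail */
			sp = container_of(kvm->arch.active_mmu_pages.prev,
					  struct kvm_mmu_page, link);
			if (sp_is_cold(sp)) {
				/* cheap: nobody referenced it recently */
				kvm_mmu_zap_page(kvm, sp);
				freed++;
			} else {
				/* hot: rotate to the head, leave it alone */
				list_move(&sp->link,
					  &kvm->arch.active_mmu_pages);
			}
		}
		return freed;
	}

That trims the unreferenced pages first and only touches the
performance-critical ones as a last resort, at the price of the extra
accessed-bit walks.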

-- 
error compiling committee.c: too many arguments to function

