Date:	Thu, 26 Sep 2013 22:30:49 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	Arjan van de Ven <arjan@...ux.intel.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Andi Kleen <andi@...stfloor.org>, mgorman@...e.de,
	dave@...1.net, hannes@...xchg.org, tony.luck@...el.com,
	matthew.garrett@...ula.com, riel@...hat.com,
	srinivas.pandruvada@...ux.intel.com, willy@...ux.intel.com,
	kamezawa.hiroyu@...fujitsu.com, lenb@...nel.org, rjw@...k.pl,
	gargankita@...il.com, paulmck@...ux.vnet.ibm.com,
	svaidy@...ux.vnet.ibm.com, isimatu.yasuaki@...fujitsu.com,
	santosh.shilimkar@...com, kosaki.motohiro@...il.com,
	linux-pm@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, maxime.coquelin@...ricsson.com,
	loic.pallardy@...ricsson.com, thomas.abraham@...aro.org,
	amit.kachhap@...aro.org
Subject: Re: [Results] [RFC PATCH v4 00/40] mm: Memory Power Management

On 09/26/2013 09:28 PM, Arjan van de Ven wrote:
> On 9/26/2013 6:42 AM, Srivatsa S. Bhat wrote:
>> On 09/26/2013 08:29 AM, Andrew Morton wrote:
>>> On Thu, 26 Sep 2013 03:50:16 +0200 Andi Kleen <andi@...stfloor.org>
>>> wrote:
>>>
>>>> On Wed, Sep 25, 2013 at 06:21:29PM -0700, Andrew Morton wrote:
>>>>> On Wed, 25 Sep 2013 18:15:21 -0700 Arjan van de Ven
>>>>> <arjan@...ux.intel.com> wrote:
>>>>>
>>>>>> On 9/25/2013 4:47 PM, Andi Kleen wrote:
>>>>>>>> Also, the changelogs don't appear to discuss one obvious
>>>>>>>> downside: the
>>>>>>>> latency incurred in bringing a bank out of one of the low-power
>>>>>>>> states
>>>>>>>> and back into full operation.  Please do discuss and quantify
>>>>>>>> that to
>>>>>>>> the best of your knowledge.
>>>>>>>
>>>>>>> On Sandy Bridge the memory wakeup overhead is really small.
>>>>>>> It's on by default in most setups today.
>>>>>>
>>>>>> btw note that those kind of memory power savings are
>>>>>> content-preserving,
>>>>>> so likely a whole chunk of these patches is not actually needed on
>>>>>> SNB
>>>>>> (or anything else Intel sells or sold)
>>>>>
>>>>> (head spinning a bit).  Could you please expand on this rather a lot?
>>>>
>>>> As far as I understand there is a range of aggressiveness. You could
>>>> just group memory a bit better (assuming you can sufficiently predict
>>>> the future or have some interface to let someone tell you about it).
>>>>
>>>> Or you can actually move memory around later to get as low footprint
>>>> as possible.
>>>>
>>>> This patchkit seems to do both, with the latter parts being on the
>>>> aggressive side (move things around).
>>>>
>>>> If you had non-content-preserving memory saving, you would
>>>> need to be aggressive, as you couldn't afford any mistakes.
>>>>
>>>> If you had very slow wakeup you also couldn't afford mistakes,
>>>> as those could cost a lot of time.
>>>>
>>>> On SandyBridge it is not slow and it's content-preserving, so some
>>>> mistakes are OK.
>>>>
>>>> But being aggressive (i.e. moving things around) may still help you
>>>> save more power -- I guess only benchmarks can tell. It's a
>>>> trade-off between the potential gain and the potential worst-case
>>>> performance regression. It may also depend on the workload.
>>>>
>>>> At least right now the numbers seem to be positive.
>>>
>>> OK.  But why are "a whole chunk of these patches not actually needed
>>> on SNB
>>> (or anything else Intel sells or sold)"?  What's the difference between
>>> Intel products and whatever-it-is-this-patchset-was-designed-for?
>>>
>>
>> Arjan, are you referring to the fact that Intel/SNB systems can exploit
>> memory self-refresh only when the entire system goes idle? Is that why
>> this
>> patchset won't turn out to be that useful on those platforms?
> 
> no we can use other things (CKE and co) all the time.
> 

Ah, OK.

> just that we found that statistical grouping gave 95%+ of the benefit,
> without the cost of aggressively going all the way to 100% grouping
> 

And how do you do that statistical grouping? Don't you need patches similar
to those in this patchset? Or are you saying that the existing vanilla
kernel itself does statistical grouping somehow?

Also, I didn't fully understand how NUMA policy would help in this case.
If you want to group memory allocations/references into fewer memory regions
_within_ a node, will NUMA policy really help? For example, in this patchset,
everything (all the allocation/reference shaping) is done _within_ the
NUMA boundary, assuming that the memory regions are subsets of a NUMA node.

Regards,
Srivatsa S. Bhat

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
