Message-ID: <517028F1.6000002@sr71.net>
Date:	Thu, 18 Apr 2013 10:10:09 -0700
From:	Dave Hansen <dave@...1.net>
To:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
CC:	akpm@...ux-foundation.org, mgorman@...e.de,
	matthew.garrett@...ula.com, rientjes@...gle.com, riel@...hat.com,
	arjan@...ux.intel.com, srinivas.pandruvada@...ux.intel.com,
	maxime.coquelin@...ricsson.com, loic.pallardy@...ricsson.com,
	kamezawa.hiroyu@...fujitsu.com, lenb@...nel.org, rjw@...k.pl,
	gargankita@...il.com, paulmck@...ux.vnet.ibm.com,
	amit.kachhap@...aro.org, svaidy@...ux.vnet.ibm.com,
	andi@...stfloor.org, wujianguo@...wei.com, kmpark@...radead.org,
	thomas.abraham@...aro.org, santosh.shilimkar@...com,
	linux-pm@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 00/15][Sorted-buddy] mm: Memory Power Management

On 04/09/2013 02:45 PM, Srivatsa S. Bhat wrote:
> 2. Performance overhead is expected to be low: Since we retain the simplicity
>    of the algorithm in the page allocation path, page allocation can
>    potentially remain as fast as it would be without memory regions. The
>    overhead is pushed to the page-freeing paths which are not that critical.

Numbers, please.  The problem with pushing the overhead to frees is that
frees, believe it or not, average out to the same count as allocs.  Think
of a kernel compile, or a large dd: both churn through a lot of memory,
and both do an awful lot of allocs _and_ frees.  We need to know both the
overhead on a system that does *no* memory power management, and the
overhead on a system which is carved up into regions and actually using
this code.

> Kernbench results didn't show any noticeable performance degradation with
> this patchset as compared to vanilla 3.9-rc5.

Surely this code isn't magical and there's overhead _somewhere_, and
such overhead can be quantified _somehow_.  Have you made an effort to
find those cases, even with microbenchmarks?
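
One way to start quantifying it, as a rough sketch: time a loop that
allocates a pile of page cache and then frees it.  The file sizes and pass
counts below are made up for illustration; run it on both a vanilla kernel
and one with this patch set, and diff the per-pass numbers:

```shell
#!/bin/sh
# Rough alloc/free churn microbenchmark (illustrative values only).
# Each pass writes a file (allocating page cache) and then deletes it,
# which exercises the page-freeing path -- exactly where the patch
# description says the overhead has been pushed.
FILE=${FILE:-/tmp/alloc-churn.dat}
PASSES=${PASSES:-3}
MB=${MB:-128}

i=1
while [ "$i" -le "$PASSES" ]; do
    start=$(date +%s%N)                # GNU date: epoch time in nanoseconds
    dd if=/dev/zero of="$FILE" bs=1M count="$MB" conv=fsync 2>/dev/null
    rm -f "$FILE"                      # frees all the just-cached pages
    end=$(date +%s%N)
    echo "pass $i: $(( (end - start) / 1000000 )) ms"
    i=$((i + 1))
done
```

A dd like this shows the symmetry mentioned above: everything it allocates
comes straight back through the free path when the file is deleted.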

I still also want to see some hard numbers on:
> However, memory consumes a significant amount of power, potentially upto
> more than a third of total system power on server systems.
and
> It had been demonstrated on a Samsung Exynos board
> (with 2 GB RAM) that upto 6 percent of total system power can be saved by
> making the Linux kernel MM subsystem power-aware[4]. 

That was *NOT* measured with this code, and that result is nearly two
years old by now.  What can *this* *patch* do?

I think there are three scenarios to look at.  Let's say you have an 8GB
system with 1GB regions:
1. Normal unpatched kernel, booted with  mem=1G...8G (in 1GB increments
   perhaps) running some benchmark which sees performance scale with
   the amount of memory present in the system.
2. Kernel patched with this set, running the same test, but with single
   memory regions.
3. Kernel patched with this set.  But, instead of using mem=, you run
   it while trying to evacuate an amount of memory equivalent to what
   you removed using mem= in scenario 1.
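
A dry-run sketch of that matrix, just to be concrete about what gets run
where (this only prints the steps; actually executing them means rebooting
into each configuration; "kernbench" and the scenario-3 evacuation step are
assumptions -- substitute whatever benchmark and region-evacuation
interface you actually use):

```shell
#!/bin/sh
# Print the 3-scenario x 8-size test matrix for an 8GB system with 1GB
# regions.  Dry run only: each line describes one boot + benchmark cycle.
print_matrix() {
    for gb in 1 2 3 4 5 6 7 8; do
        echo "scenario 1: vanilla kernel, boot mem=${gb}G, run kernbench"
        echo "scenario 2: patched kernel, boot mem=${gb}G, run kernbench"
        echo "scenario 3: patched kernel, evacuate $((8 - gb))G of regions, run kernbench"
    done
}
print_matrix
```

Plotting benchmark results against ${gb} for all three curves would show
both the overhead (scenario 2 vs. 1) and the effectiveness (scenario 3
vs. 1) in one graph.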

That will tell us both what the overhead is, and how effective it is.
I'd much rather see actual numbers and a description of the test than
some hand waving that it "didn't show any noticeable performance
degradation".

The amount of code here isn't huge.  But, it sucks that it's bloating
the already quite plump page_alloc.c.
