Message-ID: <20100309102345.GG8653@laptop>
Date:	Tue, 9 Mar 2010 21:23:45 +1100
From:	Nick Piggin <npiggin@...e.de>
To:	Mel Gorman <mel@....ul.ie>
Cc:	linux-mm@...ck.org,
	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>,
	Chris Mason <chris.mason@...cle.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] page-allocator: Check zone pressure when batch of
 pages are freed

On Tue, Mar 09, 2010 at 10:08:35AM +0000, Mel Gorman wrote:
> On Tue, Mar 09, 2010 at 08:53:42PM +1100, Nick Piggin wrote:
> > Cool, you found this doesn't hurt performance too much?
> > 
> 
> Nothing outside the noise was measured. I didn't profile it to be
> absolutely sure but I expect it's ok.

OK. Moving the waitqueue cacheline out of the fast-path footprint
and gating wakeups behind a flag might be a good idea?
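The idiom being suggested here can be sketched as follows. This is not the patch code, just a userspace illustration with made-up names (`zone_sketch`, `pressure_waiters`, `should_wake_waiters`); the in-kernel version would use zone flags and a `wait_queue_head_t`:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical layout: keep a "waiters present" flag on a cacheline
 * the free fast path already dirties, so the (cold) waitqueue
 * cacheline is only touched when someone is actually sleeping. */
struct zone_sketch {
	/* hot: already written by the free path */
	atomic_long free_pages;
	atomic_bool pressure_waiters;	/* set when a task sleeps on pressure */

	/* cold: only touched once pressure_waiters is set */
	/* wait_queue_head_t pressure_wq; */
};

static bool should_wake_waiters(struct zone_sketch *z, long low_wmark)
{
	/* Fast path: a single flag test, no waitqueue cacheline access. */
	if (!atomic_load_explicit(&z->pressure_waiters, memory_order_acquire))
		return false;
	/* Slow path: only wake if pressure has actually been relieved. */
	return atomic_load(&z->free_pages) > low_wmark;
}
```

When no one is waiting, the free path pays only one read of a flag that shares a line with data it touches anyway.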

 
> > Can't you remove the check from the reclaim code now? (The check
> > here should give a more timely wait anyway)
> > 
> 
> I'll try and see what the timing and total IO figures look like.

Well, reclaim goes through free_pages_bulk anyway, doesn't it? So
I don't see why you would need to run any test.

 
> > This is good because it should eliminate almost all cases of extra
> > waiting. I wonder if you've also thought of doing the check in the
> > allocation path too as we were discussing? (this would give a better
> > FIFO behaviour under memory pressure but I could easily agree it is not
> > worth the cost)
> > 
> 
> I *could* make the check but as I noted in the leader, there isn't
> really a good test case that determines if these changes are "good" or
> "bad". Removing congestion_wait() seems like an obvious win but other
> modifications that alter how and when processes wait are less obvious.

Fair enough. But we could be sure it increases fairness, which is a
good thing. So then we'd just have to check it against performance.
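The fairness argument can be made concrete with a sketch of the allocation-side check under discussion (again hypothetical names, not patch code): if the zone is below its watermark and other tasks are already queued, a new allocator joins the tail of the queue instead of racing them for freshly freed pages, giving roughly FIFO service under pressure at the cost of one extra test in the allocation path.

```c
#include <stdbool.h>

/* Hypothetical per-zone state for illustration only. */
struct zone_state {
	long free_pages;
	long low_wmark;
	int  nr_waiters;	/* tasks already sleeping on this zone */
};

/* Returns true if the caller should sleep behind earlier waiters
 * rather than attempt the allocation immediately. */
static bool alloc_should_queue(const struct zone_state *z)
{
	if (z->free_pages > z->low_wmark)
		return false;		/* no pressure: normal fast path */
	return z->nr_waiters > 0;	/* under pressure: wait our turn */
}
```

This is what makes the behaviour FIFO: freed pages go to the longest-waiting task first instead of to whichever allocator races in next.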

Your patches seem like a good idea regardless of this issue, don't get
me wrong.
