Date:	Mon, 30 Nov 2009 19:18:18 +0900 (JST)
From:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To:	Mel Gorman <mel@....ul.ie>
Cc:	kosaki.motohiro@...fujitsu.com,
	Corrado Zoccolo <czoccolo@...il.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Frans Pop <elendil@...net.nl>, Jiri Kosina <jkosina@...e.cz>,
	Sven Geggus <lists@...hsschwanzdomain.de>,
	Karol Lewandowski <karol.k.lewandowski@...il.com>,
	Tobias Oetiker <tobi@...iker.ch>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Rik van Riel <riel@...hat.com>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Stephan von Krawczynski <skraw@...net.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [PATCH-RFC] cfq: Disable low_latency by default for 2.6.32

> On Fri, Nov 27, 2009 at 02:58:26PM +0900, KOSAKI Motohiro wrote:
> > > > <SNIP>
> > > > low_latency was tested on other scenarios:
> > > > http://lkml.indiana.edu/hypermail/linux/kernel/0910.0/01410.html
> > > > http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-11/msg04855.html
> > > > where it improved actual and perceived performance, so disabling it
> > > > completely may not be good.
> > > > 
> > > 
> > > It may not indeed.
> > > 
> > > In case you mean a partial disabling of cfq_latency, I'm trying the
> > > following patch. The intention is to disable the low_latency logic if
> > > kswapd is at work and presumably needs clean pages. Alternative
> > > suggestions welcome.
> > 
> > I like treating vmscan writeout as special, because
> >   - vmscan runs in various process contexts, but it doesn't write that
> >     process's own pages. IOW, it doesn't really match CFQ's I/O fairness logic.
> >   - plus, the above means vmscan writeout doesn't need good I/O latency.
> 
> While it might not need good latency as such, it does need pages to be
> clean because direct reclaim has trouble cleaning pages on its own
> behalf.

Well,
if direct reclaim needs lumpy reclaim, you are right.

In the non-lumpy case, vmscan typically starts the pageout and moves the page
to the list tail; the cleaned page will then be used by another task.

---------------------------------------------------------------------------------------
static unsigned long shrink_page_list(struct list_head *page_list,
                                      struct list_head *freed_pages_list,
                                      struct scan_control *sc,
                                      enum pageout_io sync_writeback)
{
(snip)
                        switch (pageout(page, mapping, sync_writeback)) {
                        case PAGE_KEEP:
                                goto keep_locked;
                        case PAGE_ACTIVATE:
                                goto activate_locked;
                        case PAGE_SUCCESS:
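                                /*
                                 * Async writeback was started; don't wait for
                                 * it here.  The page is kept and goes back on
                                 * the LRU, to be reclaimed on a later pass
                                 * once it is clean.
                                 */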
                                if (PageWriteback(page) || PageDirty(page))
                                        goto keep;                                     ///////  HERE
---------------------------------------------------------------------------------------



> >   - vmscan maintains a page-granularity LRU list. That means vmscan generates
> >     awfully seeky I/O and assumes the block layer has buffered many I/O requests.
> >   - plus, the above means vmscan writeout needs good I/O throughput; otherwise
> >     the system might hang.
> > 
> > However, I don't think kswapd_awake is a good choice, because
> >   - zone reclaim runs before kswapd wakes up. IOW, this patch doesn't help HPC
> >     machines. BTW, some Core i7 boxes (at least, Intel's reference box) also use
> >     zone reclaim.
> 
> Good point.
> 
> >   - On a large (many memory node) machine, one of the many kswapd threads is
> >     always running.
> > 
> 
> Also true.
> 
> > 
> > Instead, is PF_MEMALLOC a good idea?
> 
> It doesn't work out either because a process with PF_MEMALLOC is in
> direct reclaim and like kswapd, it may not be able to clean the pages at
> all, let alone in a small period of time.

please forget this idea ;)
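
(Just for reference, the kind of reclaim-context check we were discussing would
look roughly like the sketch below. This is an illustration only, not the actual
patch under discussion and not real kernel code; the helper name is made up. And
as Mel explains above, testing PF_MEMALLOC alone doesn't actually solve the
problem.)

---------------------------------------------------------------------------------------
/* Illustration only; needs <linux/sched.h> for current and the PF_* flags. */
static inline int io_from_reclaim_context(void)
{
        /*
         * Tag I/O submitted from reclaim context (kswapd or direct
         * reclaim), so that an I/O scheduler could treat it as
         * throughput-oriented rather than latency-oriented.
         */
        return !!(current->flags & (PF_KSWAPD | PF_MEMALLOC));
}
---------------------------------------------------------------------------------------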


