Message-ID: <20130703084715.GF1875@suse.de>
Date:	Wed, 3 Jul 2013 09:47:15 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	Dave Chinner <david@...morbit.com>,
	Rob van der Heij <rvdheij@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Yannick Brosseau <yannick.brosseau@...il.com>,
	stable@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	"lttng-dev@...ts.lttng.org" <lttng-dev@...ts.lttng.org>
Subject: Re: [-stable 3.8.1 performance regression] madvise
 POSIX_FADV_DONTNEED

On Tue, Jul 02, 2013 at 08:55:14PM -0400, Mathieu Desnoyers wrote:
> * Mathieu Desnoyers (mathieu.desnoyers@...icios.com) wrote:
> > * Dave Chinner (david@...morbit.com) wrote:
> > > On Thu, Jun 20, 2013 at 08:20:16AM -0400, Mathieu Desnoyers wrote:
> > > > * Rob van der Heij (rvdheij@...il.com) wrote:
> > > > > Wouldn't you batch the calls to drop the pages from cache rather than drop
> > > > > one packet at a time?
> > > > 
> > > > By default for kernel tracing, LTTng's trace packets are 1MB, so I
> > > > consider the call to fadvise to be already batched by applying it to
> > > > 1MB packets rather than individual pages. Even there, it seems that
> > > > the extra overhead added by the lru drain on each CPU is noticeable.
> > > > 
> > > > Another reason for not batching this in larger chunks is to limit the
> > > > impact of the tracer on the kernel page cache. LTTng limits itself to
> > > > its own set of buffers, and uses the page cache for what is absolutely
> > > > needed to perform I/O, but no more.
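> > > > 
> > > > For reference, that per-packet pattern is roughly the following
> > > > sketch (fd, off and len are illustrative names for the trace file
> > > > descriptor and the packet's offset and length; this is not LTTng's
> > > > actual code):
> > > > 
> > > > #define _GNU_SOURCE
> > > > #include <fcntl.h>
> > > > 
> > > > static void flush_packet(int fd, off_t off, off_t len)
> > > > {
> > > > 	/* Start, and wait for, writeback of this packet only. */
> > > > 	sync_file_range(fd, off, len,
> > > > 			SYNC_FILE_RANGE_WAIT_BEFORE |
> > > > 			SYNC_FILE_RANGE_WRITE |
> > > > 			SYNC_FILE_RANGE_WAIT_AFTER);
> > > > 	/* Hint that the packet's pages will not be needed again. */
> > > > 	posix_fadvise(fd, off, len, POSIX_FADV_DONTNEED);
> > > > }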
> > > 
> > > I think you are doing it wrong. This is a poster child case for
> > > using Direct IO and completely avoiding the page cache altogether....
> > 
> > I just tried replacing my sync_file_range()+fadvise() calls by passing
> > the O_DIRECT flag to open() instead. Unfortunately, I must be doing
> > something very wrong, because I get only 1/3rd of the throughput, and
> > the page cache fills up. Any idea why?
> 
> Since O_DIRECT does not seem to provide acceptable throughput, it may be
> interesting to investigate other ways to lessen the latency impact of
> the fadvise DONTNEED hint.
> 

There are cases where O_DIRECT falls back to buffered IO, which is why you
might have found that the page cache was still filling up. There are a few
reasons why this can happen, but I would guess the most common cause is that
the range of pages being written was already in the page cache and could
not be invalidated for some reason. I'm guessing this is the common case
for page cache filling even with O_DIRECT, but I would not bet money on it
as it's not a problem I have investigated before.
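
For reference, a minimal O_DIRECT write path looks something like the
sketch below (names are illustrative). The buffer, the transfer size and
the file offset all have to be suitably aligned, typically to the logical
block size of the device; a misaligned request fails with EINVAL rather
than silently going through the cache:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Sketch only: write one 1MB packet at the given offset.
 * fd is assumed to have been opened with O_WRONLY | O_DIRECT. */
static int write_packet_direct(int fd, const void *packet, off_t off)
{
	void *buf;

	/* O_DIRECT requires an aligned user buffer; 4096 covers
	 * most block devices. */
	if (posix_memalign(&buf, 4096, 1 << 20))
		return -1;
	memcpy(buf, packet, 1 << 20);
	/* The length and the file offset must be aligned as well. */
	if (pwrite(fd, buf, 1 << 20, off) < 0) {
		free(buf);
		return -1;
	}
	free(buf);
	return 0;
}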

> Given it is just a hint, we should be allowed to perform page
> deactivation lazily. Is there any fundamental reason to wait for worker
> threads on each CPU to complete their lru drain before returning from
> fadvise() to user-space?
> 

Only to make sure the pages are actually dropped as requested. The reason
the wait was introduced in the first place was that the page cache was
filling up even with the fadvise calls and causing disruption. In 3.11,
disruption due to this sort of parallel IO should be reduced, but making
fadvise work properly is reasonable in itself. Was the patch I posted ever
tested, or did I manage to miss it?
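
On the tracer side, one way to take the drain off the hot path, if the
hint really can be lazy, would be to issue it from a helper thread so the
consumer never blocks in fadvise(). A rough sketch, with one thread per
call purely for illustration (a real consumer would feed a work queue):

#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>

struct dontneed_req { int fd; off_t off; off_t len; };

static void *dontneed_worker(void *arg)
{
	struct dontneed_req *r = arg;

	posix_fadvise(r->fd, r->off, r->len, POSIX_FADV_DONTNEED);
	free(r);
	return NULL;
}

/* Queue the DONTNEED hint; any lru drain then stalls the helper
 * thread instead of the thread writing trace data. */
static int dontneed_async(int fd, off_t off, off_t len)
{
	struct dontneed_req *r = malloc(sizeof(*r));
	pthread_t t;

	if (!r)
		return -1;
	r->fd = fd; r->off = off; r->len = len;
	if (pthread_create(&t, NULL, dontneed_worker, r)) {
		free(r);
		return -1;
	}
	return pthread_detach(t);
}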

-- 
Mel Gorman
SUSE Labs
