Date:	Thu, 4 Jul 2013 10:03:44 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Jeff Moyer <jmoyer@...hat.com>
Cc:	Mel Gorman <mgorman@...e.de>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Rob van der Heij <rvdheij@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Yannick Brosseau <yannick.brosseau@...il.com>,
	stable@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	"lttng-dev@...ts.lttng.org" <lttng-dev@...ts.lttng.org>
Subject: Re: [-stable 3.8.1 performance regression] madvise
 POSIX_FADV_DONTNEED

On Wed, Jul 03, 2013 at 10:53:08AM -0400, Jeff Moyer wrote:
> Mel Gorman <mgorman@...e.de> writes:
> 
> >> > I just tried replacing my sync_file_range()+fadvise() calls and instead
> >> > passing the O_DIRECT flag to open(). Unfortunately, I must be doing
> >> > something very wrong, because I get only 1/3rd of the throughput, and
> >> > the page cache fills up. Any idea why?
> >> 
> >> Since O_DIRECT does not seem to provide acceptable throughput, it may be
> >> interesting to investigate other ways to lessen the latency impact of
> >> the fadvise DONTNEED hint.
> >> 
> >
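[For reference, a minimal sketch of the kind of buffered write loop being
described above, assuming 1MB chunks and a hypothetical file name; each
chunk is written, pushed to disk with sync_file_range(), and then dropped
from the page cache with posix_fadvise(POSIX_FADV_DONTNEED). Error
handling is omitted for brevity:]

/* Sketch of the buffered write + sync_file_range() + fadvise(DONTNEED)
 * pattern discussed in this thread.  The 1MB chunk size, chunk count and
 * "trace.dat" file name are assumptions for illustration. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)

int main(void)
{
	int fd = open("trace.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	char *buf = calloc(1, CHUNK);
	off_t off = 0;

	for (int i = 0; i < 256; i++) {
		pwrite(fd, buf, CHUNK, off);
		/* Kick off writeback of this chunk and wait for it... */
		sync_file_range(fd, off, CHUNK,
				SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER);
		/* ...then ask the kernel to drop the now-clean pages. */
		posix_fadvise(fd, off, CHUNK, POSIX_FADV_DONTNEED);
		off += CHUNK;
	}
	free(buf);
	close(fd);
	return 0;
}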
> > There are cases where O_DIRECT falls back to buffered IO, which is why
> > you might have found that the page cache was still filling up. There are
> > a few reasons why this can happen, but I would guess the most common is
> > that the range of pages being written was already in the page cache and
> > could not be invalidated for some reason. I suspect this is the usual
> > cause of page cache filling even with O_DIRECT, but I would not bet
> > money on it, as it's not a problem I've investigated before.
> 
> Even when O_DIRECT falls back to buffered I/O for writes, it will
> invalidate the page cache range described by the buffered I/O once it
> completes.  For reads, the range is written out synchronously before the
> direct I/O is issued.  Either way, you shouldn't see the page cache
> filling up.
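[As a rough userspace diagnostic, not something from this thread: one way
to check whether a write left pages behind in the page cache is to mmap()
the same range afterwards and ask mincore() which pages are resident. The
file name, chunk size and 4096-byte page/alignment size are assumptions:]

/* Rough diagnostic sketch: does an O_DIRECT write leave pages resident
 * in the page cache?  "trace.dat", the 1MB chunk and the 4096-byte page
 * size are assumptions; error handling is omitted. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)

int main(void)
{
	int fd = open("trace.dat", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	void *buf;
	posix_memalign(&buf, 4096, CHUNK);	/* O_DIRECT wants aligned buffers */
	memset(buf, 0, CHUNK);
	pwrite(fd, buf, CHUNK, 0);
	close(fd);

	/* Map the same range read-only and check page cache residency. */
	fd = open("trace.dat", O_RDONLY);
	void *map = mmap(NULL, CHUNK, PROT_READ, MAP_SHARED, fd, 0);
	unsigned char vec[CHUNK / 4096];
	mincore(map, CHUNK, vec);

	size_t resident = 0;
	for (size_t i = 0; i < CHUNK / 4096; i++)
		resident += vec[i] & 1;
	printf("%zu of %zu pages resident\n", resident, (size_t)(CHUNK / 4096));

	munmap(map, CHUNK);
	close(fd);
	free(buf);
	return 0;
}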

<sigh>

I keep forgetting that filesystems other than XFS have sub-optimal
direct IO implementations. I wish that "silent fallback to buffered
IO" idea had never seen the light of day, and that filesystems
implemented direct IO properly.

> Switching to O_DIRECT often incurs a performance hit, especially if the
> application does not submit more than one I/O at a time.  Remember,
> you're not getting readahead, and you're not getting the benefit of the
> writeback code submitting batches of I/O.
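[One hedged sketch of what "more than one I/O at a time" could look like
with O_DIRECT, using libaio to keep several writes in flight rather than
waiting on each one synchronously. The queue depth, chunk size and file
name are assumptions; buffer initialization and error handling are
omitted. Link with -laio:]

/* Sketch: submit several O_DIRECT writes at once with libaio, then reap
 * the completions, instead of dispatching one synchronous write at a
 * time.  DEPTH, CHUNK and "trace.dat" are assumptions for illustration. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)
#define DEPTH 8

int main(void)
{
	int fd = open("trace.dat", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	io_context_t ctx = 0;
	io_setup(DEPTH, &ctx);

	struct iocb cbs[DEPTH], *cbp[DEPTH];
	void *bufs[DEPTH];
	for (int i = 0; i < DEPTH; i++)
		posix_memalign(&bufs[i], 4096, CHUNK);

	/* Queue DEPTH aligned 1MB writes in one go... */
	for (int i = 0; i < DEPTH; i++) {
		io_prep_pwrite(&cbs[i], fd, bufs[i], CHUNK, (long long)i * CHUNK);
		cbp[i] = &cbs[i];
	}
	io_submit(ctx, DEPTH, cbp);

	/* ...and wait for all of them to complete. */
	struct io_event events[DEPTH];
	io_getevents(ctx, DEPTH, DEPTH, events, NULL);

	io_destroy(ctx);
	close(fd);
	return 0;
}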

With the way IO is being done, there won't be any readahead (write
only workload) and they are directly controlling writeback one chunk
at a time, so there's no writeback caching to do batching, either.
There's no obvious reason that direct IO should be any slower,
assuming the application really is doing 1MB sized and aligned IOs
as was mentioned, because both methods are directly dispatching and
then waiting for IO completion.
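[For comparison with the buffered sketch earlier in the thread, a minimal
sketch of the plain O_DIRECT equivalent under the same assumptions (1MB
aligned chunks, hypothetical file name): each pwrite() dispatches the IO
and waits for it to complete, with no page cache involvement:]

/* Sketch of the equivalent O_DIRECT loop: each aligned 1MB pwrite()
 * dispatches the IO and waits for completion.  File name, chunk count,
 * chunk size and 4096-byte alignment are assumptions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)

int main(void)
{
	int fd = open("trace.dat", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
	void *buf;
	posix_memalign(&buf, 4096, CHUNK);	/* O_DIRECT wants aligned buffers */
	memset(buf, 0, CHUNK);
	off_t off = 0;

	for (int i = 0; i < 256; i++) {
		pwrite(fd, buf, CHUNK, off);	/* dispatch and wait, no page cache */
		off += CHUNK;
	}
	free(buf);
	close(fd);
	return 0;
}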

What filesystem is in use here?

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
