Date:   Sat, 20 Aug 2016 09:48:39 +1000
From:   Dave Chinner <david@...morbit.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.cz>,
        Minchan Kim <minchan@...nel.org>,
        Vladimir Davydov <vdavydov@...tuozzo.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Bob Peterson <rpeterso@...hat.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        "Huang, Ying" <ying.huang@...el.com>,
        Christoph Hellwig <hch@....de>,
        Wu Fengguang <fengguang.wu@...el.com>, LKP <lkp@...org>,
        Tejun Heo <tj@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [LKP] [lkp] [xfs] 68a9f5e700: aim7.jobs-per-min -13.6% regression

On Fri, Aug 19, 2016 at 11:49:46AM +0100, Mel Gorman wrote:
> On Thu, Aug 18, 2016 at 03:25:40PM -0700, Linus Torvalds wrote:
> > It *could* be as simple/stupid as just saying "let's allocate the page
> > cache for new pages from the current node" - and if the process that
> > dirties pages just stays around on one single node, that might already
> > be sufficient.
> > 
> > So just for testing purposes, you could try changing that
> > 
> >         return alloc_pages(gfp, 0);
> > 
> > in __page_cache_alloc() into something like
> > 
> >         return alloc_pages_node(cpu_to_node(raw_smp_processor_id()), gfp, 0);
> > 
> > or something.
> > 
> 
> The test would be interesting but I believe that keeping heavy writers
> on one node will force them to stall early on dirty balancing even if
> there is plenty of free memory on other nodes.

Well, it depends on the speed of the storage. The higher the speed
of the storage, the less we care about stalling on dirty pages
during reclaim. i.e. faster storage == shorter stalls. We really
should stop thinking we need to optimise reclaim purely for the
benefit of slow disks.  500MB/s write speeds with latencies under a
couple of milliseconds are common hardware these days. PCIe-based
storage (e.g. M.2, NVMe) is rapidly becoming commonplace and can
easily do 1-2GB/s write speeds.

The fast storage devices that are arriving need to be treated
more like fast network devices (e.g. a PCIe 4x NVMe SSD has the
throughput of two 10GbE devices). We have to consider whether
buffering streaming data in the page cache for any longer than it
takes to get that data to userspace or to disk is worth the cost
of later reclaiming it from the page cache.

Really, the question that needs to be answered is this: if we can
pull data from the storage at similar speeds and latencies as we can
from the page cache, then *why are we caching that data*?

We've already made that "don't cache for fast storage" decision in
the case of pmem - the DAX IO path is slowly moving towards making
full use of the mapping infrastructure for all its tracking
requirements. PCIe-based storage is a bit slower than pmem, but
the principle is the same - the storage is sufficiently fast that
caching only makes sense for data that is really hot...
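
Userspace can already opt into that model per file with O_DIRECT,
which is the same "don't cache it" decision pushed out to the
application - a rough sketch, error handling omitted:

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* Read bulk/cold data without touching the page cache at all.
         * O_DIRECT requires the buffer, offset and length to be
         * aligned to the device's logical block size (4k here). */
        static ssize_t read_uncached(const char *path, void **bufp, size_t len)
        {
                int fd = open(path, O_RDONLY | O_DIRECT);
                posix_memalign(bufp, 4096, len);
                ssize_t ret = read(fd, *bufp, len);
                close(fd);
                return ret;
        }

The open question above is really how much of that decision the
kernel should be making automatically once the device is fast enough.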

I think the underlying principle here is that the faster the backing
device, the less we should cache and buffer the device in the OS. I
suspect a good initial approximation of "stickiness" for the page
cache would be the speed of writeback as measured by the BDI
underlying the mapping....
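
As a strawman, that could be as simple as sizing a mapping's cache
"budget" by how long its backing device would take to write it all
back - none of the names below exist anywhere, this is just to show
the shape of the idea:

        #include <stdint.h>

        /*
         * Hypothetical: allow a mapping to pin roughly
         * target_drain_secs worth of writeback from its BDI.  A 2GB/s
         * NVMe device with a 2s target gets a ~4GB budget; a 100MB/s
         * spinning disk gets ~200MB and stays under tighter control.
         */
        static uint64_t cache_budget_bytes(uint64_t bdi_write_bw_bps,
                                           unsigned int target_drain_secs)
        {
                return bdi_write_bw_bps * target_drain_secs;
        }

The write bandwidth estimate the BDI already maintains for dirty
throttling would be an obvious input for something like this.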

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
