Message-ID: <20180827214242.GH2234@dastard>
Date: Tue, 28 Aug 2018 07:42:42 +1000
From: Dave Chinner <david@...morbit.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Waiman Long <longman@...hat.com>,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] xfs: Prevent multiple wakeups of the same log
space waiter
On Mon, Aug 27, 2018 at 12:39:06AM -0700, Christoph Hellwig wrote:
> On Mon, Aug 27, 2018 at 10:21:34AM +1000, Dave Chinner wrote:
> > tl; dr: Once you pass a certain point, ramdisks can be *much* slower
> > than SSDs on journal intensive workloads like AIM7. Hence it would be
> > useful to see if you have the same problems on, say, high
> > performance nvme SSDs.
>
> Note that all these ramdisk issues you mentioned below will also apply
> to using the pmem driver on nvdimms, which might be a more realistic
> version. Even worse at least for cases where the nvdimms aren't
> actually powerfail dram of some sort with write through caching and
> ADR the latency is going to be much higher than the ramdisk as well.
Yes, I realise that.
I am expecting that when it comes to optimising for pmem, we'll
actually rewrite the journal to map pmem and memcpy() directly,
rather than go through the buffering and IO layers we currently do,
so we can minimise write latency and control concurrency ourselves.
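
For illustration only (this is not XFS code - the file path, sizes,
page-size assumption and flush granularity below are made up), a
userspace sketch of that map-and-memcpy() approach over a DAX
mapping might look something like:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stddef.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Hypothetical journal file on a DAX-capable pmem filesystem */
	#define JOURNAL_PATH	"/mnt/pmem/journal"
	#define JOURNAL_SIZE	(16UL * 1024 * 1024)	/* assumed size */

	static int pmem_log_write(const void *rec, size_t len, off_t off)
	{
		int fd = open(JOURNAL_PATH, O_RDWR);
		if (fd < 0)
			return -1;

		/* MAP_SYNC: page tables point straight at pmem, no page cache */
		char *log = mmap(NULL, JOURNAL_SIZE, PROT_READ | PROT_WRITE,
				 MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
		close(fd);
		if (log == MAP_FAILED)
			return -1;

		/* Copy the log record directly into persistent memory... */
		memcpy(log + off, rec, len);

		/* ...then flush CPU caches so it is durable before we return.
		 * Assumes 4KiB pages; msync() wants a page-aligned address. */
		msync(log + (off & ~4095UL), (off & 4095UL) + len, MS_SYNC);

		munmap(log, JOURNAL_SIZE);
		return 0;
	}

The point being that the write path becomes a cacheline flush problem
rather than a block IO submission problem, so latency and concurrency
are under our control instead of the block layer's.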
Hence I'm not really concerned by performance issues with pmem at
this point - most of our users still have traditional storage and
will for a long time to come....
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com