Message-ID: <5253E06C.6040101@gmail.com>
Date: Tue, 08 Oct 2013 19:37:32 +0900
From: Akira Hayakawa <ruby.wktk@...il.com>
To: hch@...radead.org
CC: dm-devel@...hat.com, devel@...verdev.osuosl.org,
thornber@...hat.com, snitzer@...hat.com,
gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
mpatocka@...hat.com, dan.carpenter@...cle.com, joe@...ches.com,
akpm@...ux-foundation.org, m.chehab@...sung.com, ejt@...hat.com,
agk@...hat.com, cesarb@...arb.net, ruby.wktk@...il.com
Subject: Re: [dm-devel] Reworking dm-writeboost [was: Re: staging: Add dm-writeboost]
Christoph,
> You can detect O_DIRECT writes by second-guessing a special combination
> of REQ_ flags only used there, as the fs tries to treat it specially:
>
> #define WRITE_SYNC (WRITE | REQ_SYNC | REQ_NOIDLE)
> #define WRITE_ODIRECT (WRITE | REQ_SYNC)
>
> the lack of REQ_NOIDLE when REQ_SYNC is set gives it away. Not related
> to the FLUSH or FUA flags in any way, though.
Thanks.
Thanks. But our problem is to detect whether a bio may be deferred.
Is REQ_NOIDLE the flag to check for that?
> Akira, can you explain the workloads where your delay of FLUSH or FUA
> requests helps you in any way? I very much agree with Dave's reasoning,
> but if you found workloads where your hack helps we should make sure we
> fix them at the place where they are issued.
One example is a fileserver accessed by multiple users.
A barrier is submitted, for instance, whenever a user closes a file.
As I said in my previous post
https://lkml.org/lkml/2013/10/4/186
writeboost has a RAM buffer, and we want it to be
filled with writes and then flushed to the cache device,
which takes all the barriers away upon completion.
In that case we pay the minimum penalty for the barriers.
Interestingly, writeboost is happy with a lot of writes.
By deferring these barriers (FLUSH and FUA),
multiple barriers are likely to be merged in the RAM buffer
and then processed by being replaced with a single FLUSH.
Merging the barriers and replacing them with one FLUSH
while accepting a lot of writes
is the reason for deferring barriers in writeboost.
If you want to know more, I recommend looking at the
source code to see
how queue_barrier_io() is used and
how the barriers are kidnapped in queue_flushing().
Akira