Message-ID: <CAPDyKFpHawL2UNUNC_pWE-7Su_nhJkTNUpJf9yyNJygGhi-uPw@mail.gmail.com>
Date: Tue, 28 Nov 2017 12:20:58 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Adrian Hunter <adrian.hunter@...el.com>
Cc: linux-mmc <linux-mmc@...r.kernel.org>,
linux-block <linux-block@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Bough Chen <haibo.chen@....com>,
Alex Lemberg <alex.lemberg@...disk.com>,
Mateusz Nowak <mateusz.nowak@...el.com>,
Yuliy Izrailov <Yuliy.Izrailov@...disk.com>,
Jaehoon Chung <jh80.chung@...sung.com>,
Dong Aisheng <dongas86@...il.com>,
Das Asutosh <asutoshd@...eaurora.org>,
Zhangfei Gao <zhangfei.gao@...il.com>,
Sahitya Tummala <stummala@...eaurora.org>,
Harjani Ritesh <riteshh@...eaurora.org>,
Venu Byravarasu <vbyravarasu@...dia.com>,
Linus Walleij <linus.walleij@...aro.org>,
Shawn Lin <shawn.lin@...k-chips.com>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH V14 14/24] mmc: block: Add CQE support
On 21 November 2017 at 14:42, Adrian Hunter <adrian.hunter@...el.com> wrote:
> Add CQE support to the block driver, including:
> - optionally using DCMD for flush requests
> - "manually" issuing discard requests
> - issuing read / write requests to the CQE
> - supporting block-layer timeouts
> - handling recovery
> - supporting re-tuning
>
> CQE offers 25% - 50% better random multi-threaded I/O. There is a slight
> (e.g. 2%) drop in sequential read speed but no observable change to sequential
> write.
>
> CQE automatically sends the commands to complete requests. However, it only
> supports reads / writes and so-called "direct commands" (DCMD). Furthermore,
> DCMD is limited to one command at a time, but discards require 3 commands.
> That makes issuing discards through CQE very awkward, and some CQEs don't
> support DCMD anyway. So for discards, the existing non-CQE approach is
> taken, where the mmc core code issues the 3 commands one at a time, i.e.
> mmc_erase(). DCMD is used for issuing flushes.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@...el.com>
This looks good to me!
I only have one, very minor comment.
[...]
> @@ -370,10 +514,14 @@ static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth,
> static int mmc_mq_init(struct mmc_queue *mq, struct mmc_card *card,
> spinlock_t *lock)
> {
> + struct mmc_host *host = card->host;
> int q_depth;
> int ret;
>
> - q_depth = MMC_QUEUE_DEPTH;
> + if (mq->use_cqe)
> + q_depth = min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth);
To make it clear why this is needed, could you please add some comment
in the code?
As I was trying to point out in the other reply about queue depth, for
patch 13, this looks odd to me.
It may mean that we end up with a queue_depth less than
MMC_QUEUE_DEPTH (64) in the CQE case, even though in the CQE case the
HW actually supports a bigger queue depth than when not using CQE.
Anyway, it seems like that will have to be a separate topic to discuss
with the blkmq experts.
> + else
> + q_depth = MMC_QUEUE_DEPTH;
>
[...]
Kind regards
Uffe