Message-ID: <CA+Da2qxaCJwZhn0C7VxZzx8TB1VDR_xa2P0cDXUaNA9=YzSJYg@mail.gmail.com>
Date: Fri, 25 Aug 2023 20:23:45 +0800
From: Wenchao Chen <wenchao.chen666@...il.com>
To: Sharp.Xia@...iatek.com
Cc: shawn.lin@...k-chips.com, angelogioacchino.delregno@...labora.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mediatek@...ts.infradead.org, linux-mmc@...r.kernel.org,
matthias.bgg@...il.com, ulf.hansson@...aro.org,
wsd_upstream@...iatek.com
Subject: Re: [PATCH 1/1] mmc: Set optimal I/O size in mmc_setup_queue
On Fri, Aug 25, 2023 at 7:43 PM <Sharp.Xia@...iatek.com> wrote:
>
> On Fri, 2023-08-25 at 16:11 +0800, Shawn Lin wrote:
> >
> > Hi Sharp,
> >
> > On 2023/8/25 15:10, Sharp Xia (夏宇彬) wrote:
> > > On Thu, 2023-08-24 at 12:55 +0200, Ulf Hansson wrote:
> > >>
> > >> On Fri, 18 Aug 2023 at 04:45, <Sharp.Xia@...iatek.com> wrote:
> > >>>
> > >>> From: Sharp Xia <Sharp.Xia@...iatek.com>
> > >>>
> > >>> MMC does not set readahead and uses the default VM_READAHEAD_PAGES,
> > >>> resulting in slower reading speed. Use the max_req_size reported by
> > >>> the host driver to set the optimal I/O size to improve performance.
> > >>
> > >> This seems reasonable to me. However, it would be nice if you could
> > >> share some performance numbers too - comparing before and after
> > >> $subject patch.
> > >>
> > >> Kind regards
> > >> Uffe
> > >>
> > >>>
> > >>> Signed-off-by: Sharp Xia <Sharp.Xia@...iatek.com>
> > >>> ---
> > >>> drivers/mmc/core/queue.c | 1 +
> > >>> 1 file changed, 1 insertion(+)
> > >>>
> > >>> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> > >>> index b396e3900717..fc83c4917360 100644
> > >>> --- a/drivers/mmc/core/queue.c
> > >>> +++ b/drivers/mmc/core/queue.c
> > >>> @@ -359,6 +359,7 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
> > >>>  		blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
> > >>>  	blk_queue_max_hw_sectors(mq->queue,
> > >>>  		min(host->max_blk_count, host->max_req_size / 512));
> > >>> +	blk_queue_io_opt(mq->queue, host->max_req_size);
> > >>>  	if (host->can_dma_map_merge)
> > >>>  		WARN(!blk_queue_can_use_dma_map_merging(mq->queue,
> > >>>  							mmc_dev(host)),
> > >>> --
> > >>> 2.18.0
> > >>>
> > >
> > > I tested this patch on an internal platform (kernel-5.15).
> >
> > I applied this patch and my test shows a stable 11% performance drop.
> >
> > Before:
> > echo 3 > proc/sys/vm/drop_caches && dd if=/data/1GB.img of=/dev/null
> >
> > 2048000+0 records in
> > 2048000+0 records out
> > 1048576000 bytes (0.9 G) copied, 3.912249 s, 256 M/s
> >
> > After:
> > echo 3 > proc/sys/vm/drop_caches && dd if=/data/1GB.img of=/dev/null
> > 2048000+0 records in
> > 2048000+0 records out
> > 1048576000 bytes (0.9 G) copied, 4.436271 s, 225 M/s
> >
> > >
> > > Before:
> > > console:/ # echo 3 > /proc/sys/vm/drop_caches
> > > console:/ # dd if=/mnt/media_rw/8031-130D/super.img of=/dev/null
> > > 4485393+1 records in
> > > 4485393+1 records out
> > > 2296521564 bytes (2.1 G) copied, 37.124446 s, 59 M/s
> > > console:/ # cat /sys/block/mmcblk0/queue/read_ahead_kb
> > > 128
> > >
> > > After:
> > > console:/ # echo 3 > /proc/sys/vm/drop_caches
> > > console:/ # dd if=/mnt/media_rw/8031-130D/super.img of=/dev/null
> > > 4485393+1 records in
> > > 4485393+1 records out
> > > 2296521564 bytes (2.1 G) copied, 28.956049 s, 76 M/s
> > > console:/ # cat /sys/block/mmcblk0/queue/read_ahead_kb
> > > 1024
> > >
> Hi Shawn,
>
> What is your readahead value before and after applying this patch?
>
Hi Sharp,

How about using "echo 1024 > /sys/block/mmcblk0/queue/read_ahead_kb" instead of
"blk_queue_io_opt(mq->queue, host->max_req_size);"?
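For reference, the jump from 128 to 1024 in read_ahead_kb is consistent with the block layer deriving readahead from io_opt. A minimal sketch of that arithmetic (assuming a 4 KB page size, the default VM_READAHEAD_PAGES of 32, a 512 KB host max_req_size, and the ra_pages = max(2 * io_opt / PAGE_SIZE, VM_READAHEAD_PAGES) rule - all assumptions about the tested platform, not numbers from this thread):

```shell
# Hypothetical values, not taken from the thread:
page_size=4096                 # assumed PAGE_SIZE
vm_readahead_pages=32          # default VM_READAHEAD_PAGES -> 128 KB read_ahead_kb
io_opt=$((512 * 1024))         # assumed host max_req_size of 512 KB

# Mirror the block layer's readahead derivation from io_opt
ra_pages=$((2 * io_opt / page_size))
[ "$ra_pages" -lt "$vm_readahead_pages" ] && ra_pages=$vm_readahead_pages
read_ahead_kb=$((ra_pages * page_size / 1024))
echo "$read_ahead_kb"
```

With these assumed values the sketch yields 1024, matching the read_ahead_kb Sharp reported after the patch.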