Message-ID: <CAPDyKFqgQmvdmXe8Sxnv2E5EY9cose+E2pBK3r0P_OzqAC79dg@mail.gmail.com>
Date:   Mon, 28 Aug 2023 11:04:54 +0200
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     sharp.xia@...iatek.com, Shawn Lin <shawn.lin@...k-chips.com>
Cc:     angelogioacchino.delregno@...labora.com,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mediatek@...ts.infradead.org, linux-mmc@...r.kernel.org,
        matthias.bgg@...il.com, wsd_upstream@...iatek.com
Subject: Re: [PATCH 1/1] mmc: Set optimal I/O size when mmc_setup_queue

On Mon, 28 Aug 2023 at 04:28, Shawn Lin <shawn.lin@...k-chips.com> wrote:
>
> Hi Sharp
>
> On 2023/8/27 0:26, Sharp.Xia@...iatek.com wrote:
> > On Fri, 2023-08-25 at 17:17 +0800, Shawn Lin wrote:
> >>
> >>
>
> After more testing, most of my platforms running in HS400/HS200 mode
> show nearly no difference with the readahead ranging from 128 to 1024.
> Yet one board now shows a performance drop. I highly suspect this
> depends on the eMMC chip. I would recommend leaving it to the BSP guys
> to decide which readahead value is best for their usage.

That's a very good point. The SD/eMMC card certainly behaves
differently, depending on the request-size.
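
For BSP teams that do want to pin a board-specific readahead value from
user space rather than deciding it in the driver, a udev rule is one
common way to apply it at device discovery. A minimal sketch (the file
path, the 1024 KiB value, and the mmcblk0 name are just example values
taken from the numbers in this thread):

```
# /etc/udev/rules.d/60-mmc-readahead.rules (example path)
# Apply a per-board readahead to the eMMC block device at add/change time.
ACTION=="add|change", KERNEL=="mmcblk0", ATTR{queue/read_ahead_kb}="1024"
```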

Another thing we could consider is combining the information about the
request size from the mmc host with some relevant information from the
registers in the card (not sure exactly which, though).

>
> >
> > I tested with RK3568 and sdhci-of-dwcmshc.c driver, the performance improved by 2~3%.
> >
> > Before:
> > root@...nWrt:/mnt/mmcblk0p3# time dd if=test.img of=/dev/null
> > 2097152+0 records in
> > 2097152+0 records out
> > real    0m 6.01s
> > user    0m 0.84s
> > sys     0m 2.89s
> > root@...nWrt:/mnt/mmcblk0p3# cat /sys/block/mmcblk0/queue/read_ahead_kb
> > 128
> >
> > After:
> > root@...nWrt:/mnt/mmcblk0p3# echo 3 > /proc/sys/vm/drop_caches
> > root@...nWrt:/mnt/mmcblk0p3# time dd if=test.img of=/dev/null
> > 2097152+0 records in
> > 2097152+0 records out
> > real    0m 5.86s
> > user    0m 1.04s
> > sys     0m 3.18s
> > root@...nWrt:/mnt/mmcblk0p3# cat /sys/block/mmcblk0/queue/read_ahead_kb
> > 1024
> >
> > root@...nWrt:/sys/kernel/debug/mmc0# cat ios
> > clock:          200000000 Hz
> > actual clock:   200000000 Hz
> > vdd:            18 (3.0 ~ 3.1 V)
> > bus mode:       2 (push-pull)
> > chip select:    0 (don't care)
> > power mode:     2 (on)
> > bus width:      3 (8 bits)
> > timing spec:    9 (mmc HS200)
> > signal voltage: 1 (1.80 V)
> > driver type:    0 (driver type B)
> >

Thanks for testing and sharing the data, both of you!

Kind regards
Uffe
