Message-ID: <CAByjrT-9GZs=zdWaT+_ZhV-q05P27jB16xHJGnP3KC5tNJsY+A@mail.gmail.com>
Date: Wed, 22 Jan 2020 19:33:07 -0800
From: Muraliraja Muniraju <muraliraja.muniraju@...rik.com>
To: Ming Lei <tom.leiming@...il.com>
Cc: Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>,
Jens Axboe <axboe@...nel.dk>,
linux-block <linux-block@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Adding multiple workers to the loop device.

I used dd to test:

dd if=/tmp/mount/home/ubuntu/os_disk_partition/genesisTool/4Fz \
   of=/dev/null bs=1M count=53687091200 skip=0 \
   iflag=skip_bytes,count_bytes,direct &
dd if=/tmp/mount/home/ubuntu/os_disk_partition/genesisTool/4Fz \
   of=/dev/null bs=1M count=53687091200 skip=53687091200 \
   iflag=skip_bytes,count_bytes,direct &
dd if=/tmp/mount/home/ubuntu/os_disk_partition/genesisTool/4Fz \
   of=/dev/null bs=1M count=53687091200 skip=107374182400 \
   iflag=skip_bytes,count_bytes,direct &
dd if=/tmp/mount/home/ubuntu/os_disk_partition/genesisTool/4Fz \
   of=/dev/null bs=1M count=53687091200 skip=161061273600 \
   iflag=skip_bytes,count_bytes,direct &
Here /tmp/mount/home/ubuntu/os_disk_partition/genesisTool/4Fz is a file
on an ext4 file system that is accessed via a loop device. Because
skip_bytes and count_bytes are set, each dd reads a distinct 50 GiB
(53687091200-byte) extent, so the four jobs stream different regions of
the file in parallel.
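
For context, a minimal sketch of how such a loop-backed mount could be
set up. The backing file name (backing.img) and the loop device number
are assumptions on my part; the /tmp/mount mount point matches the paths
in the dd commands above:

losetup -f --show /path/to/backing.img   # attach the file; prints e.g. /dev/loop0
mount /dev/loop0 /tmp/mount              # the ext4 image becomes visible under /tmp/mount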
Also, in the above change the default is still a single worker; one can
raise the number of workers to match their performance needs. In our
case we saw good performance with 4 threads even for sequential IO.
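
The exact tuning knob is not shown in this thread, so the following is a
purely hypothetical illustration (the nr_workers parameter name is
invented here, not taken from the patch). Raising the worker count might
look something like:

modprobe loop nr_workers=4   # hypothetical module parameter; see the patch for the real interface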
On Wed, Jan 22, 2020 at 5:40 PM Ming Lei <tom.leiming@...il.com> wrote:
>
> On Wed, Jan 22, 2020 at 4:11 AM muraliraja.muniraju
> <muraliraja.muniraju@...rik.com> wrote:
> >
> > Below are the dd results from runs with and without the worker changes.
> > Enhanced loop has the changes and ran with 1, 2, 3, and 4 workers, with 4 dds running on the same loop device.
> > Normal loop is 1 worker (the existing code) with 4 dds running on the same loop device.
> > Enhanced loop
> > 1 - READ: io=21981MB, aggrb=187558KB/s, minb=187558KB/s, maxb=187558KB/s, mint=120008msec, maxt=120008msec
> > 2 - READ: io=41109MB, aggrb=350785KB/s, minb=350785KB/s, maxb=350785KB/s, mint=120004msec, maxt=120004msec
> > 3 - READ: io=45927MB, aggrb=391802KB/s, minb=391802KB/s, maxb=391802KB/s, mint=120033msec, maxt=120033msec
> > 4 - READ: io=45771MB, aggrb=390543KB/s, minb=390543KB/s, maxb=390543KB/s, mint=120011msec, maxt=120011msec
> > Normal loop
> > 1 - READ: io=18432MB, aggrb=157201KB/s, minb=157201KB/s, maxb=157201KB/s, mint=120065msec, maxt=120065msec
> > 2 - READ: io=18762MB, aggrb=160035KB/s, minb=160035KB/s, maxb=160035KB/s, mint=120050msec, maxt=120050msec
> > 3 - READ: io=18174MB, aggrb=155058KB/s, minb=155058KB/s, maxb=155058KB/s, mint=120020msec, maxt=120020msec
> > 4 - READ: io=20559MB, aggrb=175407KB/s, minb=175407KB/s, maxb=175407KB/s, mint=120020msec, maxt=120020msec
>
> Could you share your exact test command?
>
> Multiple jobs may hurt performance in case of sequential IOs on an HDD backend.
> Also, the 1st version of the loop dio patch used a normal wq; I remember that
> random IO performance wasn't improved much, while sequential IO perf dropped
> with the normal wq when testing an SSD backend.
>
> So I took kthread worker.
>
> Thanks,
> Ming Lei