Message-ID: <6e0a06b0-3777-e560-1943-3e0c1e022039@cmss.chinamobile.com>
Date:   Tue, 28 Feb 2017 17:13:37 +0800
From:   Xiubo Li <lixiubo@...s.chinamobile.com>
To:     Mike Christie <mchristi@...hat.com>,
        Andy Grover <agrover@...hat.com>, nab@...ux-iscsi.org,
        shli@...nel.org
Cc:     hch@....de, sheng@...ker.org, namei.unix@...il.com,
        bart.vanassche@...disk.com, linux-scsi@...r.kernel.org,
        target-devel@...r.kernel.org, linux-kernel@...r.kernel.org,
        Jianfei Hu <hujianfei@...s.chinamobile.com>,
        Venky Shankar <vshankar@...hat.com>
Subject: Re: [PATCH] target/user: Add daynmic growing data area feature support

>> On 02/17/2017 01:24 AM, lixiubo@...s.chinamobile.com wrote:
>>>> From: Xiubo Li <lixiubo@...s.chinamobile.com>
>>>>
>>>> Currently for the TCMU, the ring buffer size is fixed to 64K cmd
>>>> area + 1M data area, and this will be bottlenecks for high iops.
>> Hi Xiubo, thanks for your work.
>>
>> daynmic -> dynamic
>>
>> Have you benchmarked this patch and determined what kind of iops
>> improvement it allows? Do you see the data area reaching its
>> fully-allocated size?
>>
> I tested this patch with Venky's tcmu-runner rbd aio patches, with one
> 10 gig iscsi session, and for pretty basic fio direct IO (64-256K
> read/writes with a queue depth of 64, numjobs between 1 and 4) tests, read
> throughput goes from about 80 to 500 MB/s. Write throughput is pretty
> low at around 150 MB/s.
>
> I did not hit the fully allocated size. I did not drive a lot of IO though.

How about dealing with memory shrinking in a follow-up patch series?

For the initial patch, we could set the cmd area size to 8MB and the
data area size to 512MB. This should work fine for most cases
without using too much memory.
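
A minimal sketch of what those defaults could look like, with
illustrative macro names (the actual identifiers in
drivers/target/target_core_user.c may differ):

    /* Illustrative defaults: an 8MB command ring plus a data area
     * that is grown on demand up to a 512MB cap. */
    #define TCMU_CMDR_SIZE   (8 * 1024 * 1024)     /* fixed cmd area  */
    #define TCMU_DATA_SIZE   (512 * 1024 * 1024)   /* data area limit */
    #define TCMU_RING_SIZE   (TCMU_CMDR_SIZE + TCMU_DATA_SIZE)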

In my similar test case using VMs (a low-iops case) with fio, -bs=[64K,
128K, 512K, 1M] -size=20G -iodepth=1 -numjobs=10, read bandwidth
increases from about 5200KB/s to about 6100KB/s, and write bandwidth
increases from about 3000KB/s to about 3300KB/s.
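
For reference, one run of that test (the sequential read case) would
look roughly like the fio invocation below; the job name and the
/dev/sdX target are placeholders, and --direct=1 is my assumption
about the setup:

    fio --name=tcmu-test --filename=/dev/sdX --direct=1 --rw=read \
        --bs=64K --size=20G --iodepth=1 --numjobs=10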

When bs < 64K (from the log, the maximum data length is 64K), the
smaller bs is, the closer the two bandwidths get.

But in all my test cases, the allocated size also stays far below the
full size.

Thanks,

BRs
Xiubo



