Message-ID: <CAKywueT0Q9WkANNsg8cEDwGZSMaaE5c4LHuEeMhVDzJAzycroQ@mail.gmail.com>
Date: Fri, 10 Apr 2020 11:25:01 -0700
From: Pavel Shilovsky <piastryyy@...il.com>
To: Jones Syue <jonessyue@...p.com>
Cc: linux-cifs <linux-cifs@...r.kernel.org>,
Samba Technical <samba-technical@...ts.samba.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] cifs: improve read performance for page size 64KB &
cache=strict & vers=2.1+
Fri, 10 Apr 2020 at 09:50, Jones Syue via samba-technical
<samba-technical@...ts.samba.org>:
>
> Hello list,
>
> Please help review whether the attached patch is correct. Thank you.
>
> Found a read performance issue when the linux kernel page size is 64KB.
> If the linux kernel page size is 64KB and the mount options are
> cache=strict & vers=2.1+, cifs_readpages() is not supported. Instead,
> cifs_readpage() and cifs_read() are used with a maximum read IO size of
> 16KB, which is much slower than a read IO size of 1MB when SMB 2.1+ is
> negotiated. Since modern SMB servers support SMB 2.1+ and Max Read Size
> can reach more than 64KB (for example 1MB ~ 8MB), this patch does one
> more check on max_read to determine whether the server supports
> readpages(), improving read performance for page size 64KB &
> cache=strict & vers=2.1+.
>
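> Roughly, the intended change is for the readpages check in
> cifs_set_ops() (the current code is quoted further below) to look at
> max_read, the negotiated maximum read size, instead of maxBuf; a sketch
> of the idea, not the exact diff:
>
>     /* check if server can support readpages */
>     if (cifs_sb_master_tcon(cifs_sb)->ses->server->max_read <
>         PAGE_SIZE + MAX_CIFS_HDR_SIZE)
>             inode->i_data.a_ops = &cifs_addr_ops_smallbuf;
>     else
>             inode->i_data.a_ops = &cifs_addr_ops;
>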
> The client is a linux box with linux kernel 4.2.8,
> page size 64KB (CONFIG_ARM64_64K_PAGES=y),
> a 1.7GHz ARM cpu, using mount.cifs as the SMB client.
> The server is another linux box with linux kernel 4.2.8,
> sharing a file '10G.img' of size 10GB,
> using samba-4.7.12 as the SMB server.
>
> The client mounts a share from the server with different
> cache options, cache=strict and cache=none:
> mount -tcifs //<server_ip>/Public /cache_strict
> -overs=3.0,cache=strict,username=<xxx>,password=<yyy>
> mount -tcifs //<server_ip>/Public /cache_none
> -overs=3.0,cache=none,username=<xxx>,password=<yyy>
>
> The client downloads the 10GB file from the server across a 1GbE network:
> dd if=/cache_strict/10G.img of=/dev/null bs=1M count=10240
> dd if=/cache_none/10G.img of=/dev/null bs=1M count=10240
>
> Found that cache=strict (without the patch) has lower read throughput
> and a smaller read IO size than cache=none.
> cache=strict (without patch): read throughput 40MB/s, read IO size is 16KB
> cache=strict (with patch): read throughput 113MB/s, read IO size is 1MB
> cache=none: read throughput 109MB/s, read IO size is 1MB
>
> It looks like if the page size is 64KB,
> cifs_set_ops() would use cifs_addr_ops_smallbuf instead of cifs_addr_ops:
>
>     /* check if server can support readpages */
>     if (cifs_sb_master_tcon(cifs_sb)->ses->server->maxBuf <
>         PAGE_SIZE + MAX_CIFS_HDR_SIZE)
>             inode->i_data.a_ops = &cifs_addr_ops_smallbuf;
>     else
>             inode->i_data.a_ops = &cifs_addr_ops;
>
> maxBuf comes from two places, SMB2_negotiate() and CIFSSMBNegotiate()
> (SMB2_MAX_BUFFER_SIZE is 64KB):
>
> SMB2_negotiate():
>     /* set it to the maximum buffer size value we can send with 1 credit */
>     server->maxBuf = min_t(unsigned int, le32_to_cpu(rsp->MaxTransactSize),
>                            SMB2_MAX_BUFFER_SIZE);
>
> CIFSSMBNegotiate():
>     server->maxBuf = le32_to_cpu(pSMBr->MaxBufferSize);
>
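> Since maxBuf is capped at SMB2_MAX_BUFFER_SIZE (64KB) for SMB2+, it can
> never reach PAGE_SIZE (64KB) + MAX_CIFS_HDR_SIZE, so the small-buffer
> path above is always chosen on 64KB-page kernels. Note that
> SMB2_negotiate() also records the server's advertised read size in a
> separate field (quoted from fs/cifs/smb2pdu.c; details may vary
> slightly by kernel version):
>
>     server->max_read = le32_to_cpu(rsp->MaxReadSize);
>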
> Page size 64KB and cache=strict lead read_pages() to use cifs_readpage()
> instead of cifs_readpages(), and then cifs_read() uses a maximum read IO
> size of 16KB, which is much slower than a maximum read IO size of 1MB
> (CIFSMaxBufSize is 16KB by default):
>
>     /* FIXME: set up handlers for larger reads and/or convert to async */
>     rsize = min_t(unsigned int, cifs_sb->rsize, CIFSMaxBufSize);
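>
> So even with a negotiated rsize of 1MB, each read request here is
> clamped to min(1MB, 16KB) = 16KB, matching the 16KB IO size measured
> above.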
>
Hi Jones,
Thanks for the patch!
It will work, although it is probably a little bit cleaner to
initialize server->max_read to server->maxBuf for SMB1 and use
server->max_read in the readpages condition check instead.
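
Something along these lines in CIFSSMBNegotiate() (an untested sketch,
reusing the existing fields):

	server->maxBuf = le32_to_cpu(pSMBr->MaxBufferSize);
	/* SMB1 has no separate max read size; mirror maxBuf (sketch) */
	server->max_read = server->maxBuf;

so that the max_read check in cifs_set_ops() keeps working for vers=1.0
mounts as well.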
@Others, thoughts?
--
Best regards,
Pavel Shilovsky