Message-ID: <MWHPR2101MB0729D163B903A0198A85AB4BCE900@MWHPR2101MB0729.namprd21.prod.outlook.com>
Date: Fri, 18 May 2018 20:58:08 +0000
From: Long Li <longli@...rosoft.com>
To: Steve French <smfrench@...il.com>
CC: Christoph Hellwig <hch@...radead.org>,
Steve French <sfrench@...ba.org>,
"linux-cifs@...r.kernel.org" <linux-cifs@...r.kernel.org>,
"samba-technical@...ts.samba.org" <samba-technical@...ts.samba.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: RE: [RFC PATCH 09/09] Introduce cache=rdma mounting option
> Subject: Re: [RFC PATCH 09/09] Introduce cache=rdma mounting option
>
> On Fri, May 18, 2018 at 12:00 PM, Long Li via samba-technical
> <samba-technical@...ts.samba.org> wrote:
> >> Subject: Re: [RFC PATCH 09/09] Introduce cache=rdma mounting option
> >>
> >> On Thu, May 17, 2018 at 05:22:14PM -0700, Long Li wrote:
> >> > From: Long Li <longli@...rosoft.com>
> >> >
> >> > When cache=rdma is enabled in the mount options, CIFS does not
> >> > allocate internal data buffer pages for I/O; data is read/written
> >> > directly to/from user memory via RDMA.
> >>
> >> I don't think this should be an option. For direct I/O without
> >> signing or encryption, CIFS should always use get_user_pages, with
> >> or without RDMA.
> >
> > Yes, this should be done for all transports. If there are no
> > objections, I'll send patches to change this.
>
> Would this help/change performance much?
On RDMA, it helps with I/O latency and reduces CPU usage for certain I/O
patterns. I haven't tested on TCP; it may help a little there as well.
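
To illustrate the idea (a rough sketch only, with a hypothetical helper
name and simplified error handling -- not the actual patch), the buffer
supplied by the application would be pinned with get_user_pages_fast()
and the pinned pages handed to the transport, instead of copying through
internally allocated pages:

	static int cifs_pin_user_buffer(unsigned long uaddr, size_t len,
					bool reading, struct page ***pagesp,
					unsigned int *npagesp)
	{
		unsigned long first = uaddr >> PAGE_SHIFT;
		unsigned long last = (uaddr + len - 1) >> PAGE_SHIFT;
		unsigned int npages = last - first + 1;
		struct page **pages;
		int rc;

		pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return -ENOMEM;

		/*
		 * "write" is 1 when the pages will be written to, i.e.
		 * when reading from the server into user memory.
		 */
		rc = get_user_pages_fast(uaddr, npages, reading ? 1 : 0,
					 pages);
		if (rc < 0) {
			kfree(pages);
			return rc;
		}
		if (rc < npages) {
			/* Partial pin: release what we got and fail. */
			while (rc--)
				put_page(pages[rc]);
			kfree(pages);
			return -EFAULT;
		}

		*pagesp = pages;
		*npagesp = npages;
		return 0;
	}

The pages would be released with put_page() on I/O completion; the
signing and encryption paths would still need the copy path, as noted
above.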
>
> --
> Thanks,
>
> Steve