Message-ID: <4b02d1b2-be0e-0d1d-7ac3-38d32e44e77e@talpey.com>
Date: Fri, 9 Apr 2021 11:32:41 -0400
From: Tom Talpey <tom@...pey.com>
To: Chuck Lever III <chuck.lever@...cle.com>
Cc: Jason Gunthorpe <jgg@...dia.com>, Christoph Hellwig <hch@....de>,
Leon Romanovsky <leon@...nel.org>,
Doug Ledford <dledford@...hat.com>,
Leon Romanovsky <leonro@...dia.com>,
Adit Ranadive <aditr@...are.com>,
Anna Schumaker <anna.schumaker@...app.com>,
Ariel Elior <aelior@...vell.com>,
Avihai Horon <avihaih@...dia.com>,
Bart Van Assche <bvanassche@....org>,
Bernard Metzler <bmt@...ich.ibm.com>,
"David S. Miller" <davem@...emloft.net>,
Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>,
Devesh Sharma <devesh.sharma@...adcom.com>,
Faisal Latif <faisal.latif@...el.com>,
Jack Wang <jinpu.wang@...os.com>,
Jakub Kicinski <kuba@...nel.org>,
Bruce Fields <bfields@...ldses.org>, Jens Axboe <axboe@...com>,
Karsten Graul <kgraul@...ux.ibm.com>,
Keith Busch <kbusch@...nel.org>, Lijun Ou <oulijun@...wei.com>,
CIFS <linux-cifs@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
linux-rdma <linux-rdma@...r.kernel.org>,
"linux-s390@...r.kernel.org" <linux-s390@...r.kernel.org>,
Max Gurtovoy <maxg@...lanox.com>,
Max Gurtovoy <mgurtovoy@...dia.com>,
"Md. Haris Iqbal" <haris.iqbal@...os.com>,
Michael Guralnik <michaelgur@...dia.com>,
Michal Kalderon <mkalderon@...vell.com>,
Mike Marciniszyn <mike.marciniszyn@...nelisnetworks.com>,
Naresh Kumar PBS <nareshkumar.pbs@...adcom.com>,
Linux-Net <netdev@...r.kernel.org>,
Potnuri Bharat Teja <bharat@...lsio.com>,
"rds-devel@....oracle.com" <rds-devel@....oracle.com>,
Sagi Grimberg <sagi@...mberg.me>,
"samba-technical@...ts.samba.org" <samba-technical@...ts.samba.org>,
Santosh Shilimkar <santosh.shilimkar@...cle.com>,
Selvin Xavier <selvin.xavier@...adcom.com>,
Shiraz Saleem <shiraz.saleem@...el.com>,
Somnath Kotur <somnath.kotur@...adcom.com>,
Sriharsha Basavapatna <sriharsha.basavapatna@...adcom.com>,
Steve French <sfrench@...ba.org>,
Trond Myklebust <trond.myklebust@...merspace.com>,
VMware PV-Drivers <pv-drivers@...are.com>,
Weihang Li <liweihang@...wei.com>,
Yishai Hadas <yishaih@...dia.com>,
Zhu Yanjun <zyjzyj2000@...il.com>
Subject: Re: [PATCH rdma-next 00/10] Enable relaxed ordering for ULPs
On 4/9/2021 10:45 AM, Chuck Lever III wrote:
>
>
>> On Apr 9, 2021, at 10:26 AM, Tom Talpey <tom@...pey.com> wrote:
>>
>> On 4/6/2021 7:49 AM, Jason Gunthorpe wrote:
>>> On Mon, Apr 05, 2021 at 11:42:31PM +0000, Chuck Lever III wrote:
>>>
>>>> We need to get a better idea what correctness testing has been done,
>>>> and whether positive correctness testing results can be replicated
>>>> on a variety of platforms.
>>> RO has been rolling out slowly on mlx5 over a few years and storage
>>> ULPs are the last to change. eg the mlx5 ethernet driver has had RO
>>> turned on for a long time, userspace HPC applications have been using
>>> it for a while now too.
>>
>> I'd love to see RO be used more, it was always something the RDMA
>> specs supported and carefully architected for. My only concern is
>> that it's difficult to get right, especially when the platforms
>> have been running strictly-ordered for so long. The ULPs need
>> testing, and a lot of it.
>>
>>> We know there are platforms with broken RO implementations (like
>>> Haswell) but the kernel is supposed to globally turn off RO on all
>>> those cases. I'd be a bit surprised if we discover any more from this
>>> series.
>>> On the other hand there are platforms that get huge speed ups from
>>> turning this on, AMD is one example, there are a bunch in the ARM
>>> world too.
>>
>> My belief is that the biggest risk is from situations where completions
>> are batched, and therefore polling is used to detect them without
>> interrupts (which explicitly flush the RO pipeline). The RO pipeline
>> will completely reorder DMA writes, and consumers which infer ordering
>> from memory contents may break. This can even apply within the provider
>> code, which may attempt to poll WR and CQ structures, and be tripped up.
>
> You are referring specifically to RPC/RDMA depending on Receive
> completions to guarantee that previous RDMA Writes have been
> retired? Or is there a particular implementation practice in
> the Linux RPC/RDMA code that worries you?
Nothing in the RPC/RDMA code, which is IMO correct. The worry, which
is hopefully unfounded, is that the RO pipeline might not have been
flushed when a completion arrives *after* an interrupt has already been
raised, and is therefore reaped by polling alone.
Something like this...
RDMA Write arrives
    PCIe RO Write for data
    PCIe RO Write for data
    ...
RDMA Write arrives
    PCIe RO Write for data
    ...
RDMA Send arrives
    PCIe RO Write for receive data
    PCIe RO Write for receive descriptor
    PCIe interrupt (flushes RO pipeline for all three ops above)

RPC/RDMA polls CQ
    Reaps receive completion

RDMA Send arrives
    PCIe RO Write for receive data
    PCIe RO Write for receive descriptor
    Does *not* interrupt, since CQ not armed

RPC/RDMA continues to poll CQ
    Reaps receive completion
    PCIe RO writes not yet flushed
    Processes incomplete in-memory data
    Bzzzt
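
To make the consumer side of that concrete, here is a minimal sketch of
the polling pattern whose safety is in question. It uses the userspace
libibverbs API purely for illustration (the kernel ULPs use the in-kernel
verbs, but the ordering question is the same), and the rpc_reply layout
and reap_one() helper are hypothetical, not code from any existing ULP:

#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

struct rpc_reply {                /* hypothetical receive buffer layout */
        uint32_t len;
        uint8_t  payload[4096];
};

static void reap_one(struct ibv_cq *cq, struct rpc_reply *reply)
{
        struct ibv_wc wc;

        /* Busy-poll; the CQ is deliberately not re-armed, so no
         * interrupt will be generated for this completion. */
        while (ibv_poll_cq(cq, 1, &wc) == 0)
                ;

        if (wc.status != IBV_WC_SUCCESS)
                return;

        /* Everything below assumes that once the CQE is visible, every
         * RO DMA write for the matching payload is visible too.  If the
         * RO pipeline has not been flushed, reply->len and
         * reply->payload may still be stale at this point. */
        printf("reply length %u\n", reply->len);
}

Whether that visibility assumption holds when no interrupt intervenes is
exactly what needs testing.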
Hopefully, the adapter performs a PCIe flushing read, or something
to avoid this when an interrupt is not generated. Alternatively, I'm
overly paranoid.
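
On the point above that the kernel is supposed to globally turn off RO
on broken platforms: there is a platform-level check a driver can make
before enabling RO on its DMA writes. A minimal sketch follows; the
wrapper name is mine, only pcie_relaxed_ordering_enabled() is the real
kernel interface:

#include <linux/pci.h>

/*
 * Hypothetical helper: decide whether a device may use Relaxed
 * Ordering for its DMA writes.  pcie_relaxed_ordering_enabled()
 * returns false when the upstream path (e.g. a quirked Haswell
 * root complex) forbids RO, in which case the device must fall
 * back to strictly ordered writes.
 */
static bool want_relaxed_ordering(struct pci_dev *pdev)
{
        return pcie_relaxed_ordering_enabled(pdev);
}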
Tom.
>> The Mellanox adapter, itself, historically has strict in-order DMA
>> semantics, and while it's great to relax that, changing it by default
>> for all consumers is something to consider very cautiously.
>>
>>> Still, obviously people should test on the platforms they have.
>>
>> Yes, and "test" be taken seriously with focus on ULP data integrity.
>> Speedups will mean nothing if the data is ever damaged.
>
> I agree that data integrity comes first.
>
> Since I currently don't have facilities to test RO in my lab, the
> community will have to agree on a set of tests and expected results
> that specifically exercise the corner cases you are concerned about.
>
>
> --
> Chuck Lever
>
>
>
>