Date:   Fri, 12 Jul 2019 15:24:09 +0000
From:   "Bernard Metzler" <BMT@...ich.ibm.com>
To:     "Jason Gunthorpe" <jgg@...pe.ca>
Cc:     "Arnd Bergmann" <arnd@...db.de>,
        "Doug Ledford" <dledford@...hat.com>,
        "Peter Zijlstra" <peterz@...radead.org>,
        linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re:  Re: Re: Re: [PATCH] rdma/siw: avoid smp_store_mb() on a u64

-----"Jason Gunthorpe" <jgg@...pe.ca> wrote: -----

>To: "Bernard Metzler" <BMT@...ich.ibm.com>
>From: "Jason Gunthorpe" <jgg@...pe.ca>
>Date: 07/12/2019 04:43PM
>Cc: "Arnd Bergmann" <arnd@...db.de>, "Doug Ledford"
><dledford@...hat.com>, "Peter Zijlstra" <peterz@...radead.org>,
>linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org
>Subject: [EXTERNAL] Re: Re: Re: [PATCH] rdma/siw: avoid
>smp_store_mb() on a u64
>
>On Fri, Jul 12, 2019 at 02:35:50PM +0000, Bernard Metzler wrote:
>
>> >This looks wrong to me.. a userspace notification re-arm cannot be
>> >lost, so a split READ/TEST/WRITE sequence can't possibly work?
>> >
>> >I'd expect an atomic test and clear here?
>> 
>> We cannot avoid the case that the application re-arms the
>> CQ only after a CQE got placed. That is why folks are polling the
>> CQ once after re-arming it - to make sure they do not miss the
>> very last and single CQE which would have produced a CQ event.
>
>That is different, that is a re-arm happening after a CQE placement,
>and this can't be fixed.
>
>What I said is that a re-arm from userspace cannot be lost. So you
>can't blindly clear the arm flag with the WRITE_ONCE. It might be OK
>because of the if, but...
>
>It is just goofy to write it without a 'test and clear' atomic. If the
>writer side consumes the notify it should always be done atomically.
>
Hmmm, I don't yet get why we should test and clear atomically, if we
clear anyway - is it because we want to avoid clearing a re-arm which
happens just after testing and before clearing?
(1) If the test was positive, we will call the CQ event handler,
and per RDMA verbs spec, the application MUST re-arm the CQ after it
got a CQ event, to get another one. So clearing it at some point
before calling the handler is right.
(2) If the test was negative, a test and reset would not change
anything.
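
For illustration, a minimal sketch of the window in question, assuming
a single notify bit; the helper names here are made up, not driver code:

#include <linux/bitops.h>
#include <linux/compiler.h>

#define NOTIFY_SOLICITED_BIT    0       /* illustrative bit number */

/*
 * Split read/test/write: a re-arm landing between the test and the
 * store can be wiped out, since the WRITE_ONCE() clears the whole word.
 */
static bool consume_notify_racy(unsigned long *flags)
{
        if (READ_ONCE(*flags) & BIT(NOTIFY_SOLICITED_BIT)) {
                /* <-- a concurrent re-arm may set a bit right here */
                WRITE_ONCE(*flags, 0);
                return true;
        }
        return false;
}

/*
 * Single atomic RMW: test and clear cannot be separated, and only the
 * one bit is cleared, so a concurrent re-arm is never overwritten.
 */
static bool consume_notify_atomic(unsigned long *flags)
{
        return test_and_clear_bit(NOTIFY_SOLICITED_BIT, flags);
}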

Another complication -- test_and_set_bit() operates on a single
bit, but we have to test two bits and reset both if either one is
set. Can we do that atomically when the reset is conditional on the
test? I didn't find anything appropriate.
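
One option (just a sketch, untested; the flag values below are
illustrative and the type of the flags word is assumed to be u32)
would be a cmpxchg() loop over the whole word, which tests both bits
and clears both in one atomic step:

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/types.h>

/* Illustrative values; the real ones come from the siw ABI header. */
#define SIW_NOTIFY_SOLICITED            (1 << 0)
#define SIW_NOTIFY_NEXT_COMPLETION      (1 << 1)
#define SIW_NOTIFY_ARMED_MASK           (SIW_NOTIFY_SOLICITED | \
                                         SIW_NOTIFY_NEXT_COMPLETION)

/* Clears both notify bits and returns true iff at least one was set. */
static bool siw_cq_consume_notify(u32 *flags)
{
        u32 old = READ_ONCE(*flags);

        while (old & SIW_NOTIFY_ARMED_MASK) {
                u32 prev = cmpxchg(flags, old,
                                   old & ~SIW_NOTIFY_ARMED_MASK);

                if (prev == old)
                        return true;    /* we consumed the re-arm */
                old = prev;             /* raced; retest the new value */
        }
        return false;                   /* CQ was not armed */
}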

>And then I think all the weird barriers go away
>
>> >> @@ -1141,11 +1145,17 @@ int siw_req_notify_cq(struct ib_cq
>> >*base_cq, enum ib_cq_notify_flags flags)
>> >>  	siw_dbg_cq(cq, "flags: 0x%02x\n", flags);
>> >>  
>> >>  	if ((flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED)
>> >> -		/* CQ event for next solicited completion */
>> >> -		smp_store_mb(*cq->notify, SIW_NOTIFY_SOLICITED);
>> >> +		/*
>> >> +		 * Enable CQ event for next solicited completion.
>> >> +		 * and make it visible to all associated producers.
>> >> +		 */
>> >> +		smp_store_mb(cq->notify->flags, SIW_NOTIFY_SOLICITED);
>> >
>> >But what is the 2nd piece of data to motivate the smp_store_mb?
>> 
>> Another core (such as a concurrent RX operation) shall see this
>> CQ being re-armed asap.
>
>'ASAP' is not a '2nd piece of data'. 
>
>AFAICT this requirement is just a normal atomic set_bit which does
>also expedite making the change visible?

Absolutely!!
Good point... this is just a single flag we are operating on.
And the weird barrier goes away ;)
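
The re-arm path could then become something like this (sketch only,
with illustrative bit numbers, operating on an unsigned long rather
than the shared u64):

#include <linux/bitops.h>

#define SIW_NOTIFY_SOLICITED_BIT        0       /* illustrative */
#define SIW_NOTIFY_NEXT_COMPLETION_BIT  1       /* illustrative */

static void siw_cq_rearm(unsigned long *notify, bool solicited_only)
{
        /*
         * set_bit() is an atomic read-modify-write on a single bit; no
         * extra barrier is needed just to make the re-arm visible to
         * the producers that poll this word.
         */
        if (solicited_only)
                set_bit(SIW_NOTIFY_SOLICITED_BIT, notify);
        else
                set_bit(SIW_NOTIFY_NEXT_COMPLETION_BIT, notify);
}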

Many thanks!
Bernard.
