lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <87sghv3u4a.fsf@vitty.brq.redhat.com>
Date:   Thu, 26 Mar 2020 18:26:29 +0100
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Andrea Parri <parri.andrea@...il.com>
Cc:     Dexuan Cui <decui@...rosoft.com>,
        "K . Y . Srinivasan" <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        Wei Liu <wei.liu@...nel.org>, linux-hyperv@...r.kernel.org,
        Michael Kelley <mikelley@...rosoft.com>,
        Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 02/11] Drivers: hv: vmbus: Don't bind the offer&rescind works to a specific CPU

Andrea Parri <parri.andrea@...il.com> writes:

> On Thu, Mar 26, 2020 at 03:16:21PM +0100, Vitaly Kuznetsov wrote:
>> "Andrea Parri (Microsoft)" <parri.andrea@...il.com> writes:
>> 
>> > The offer and rescind works are currently scheduled on the so called
>> > "connect CPU".  However, this is not really needed: we can synchronize
>> > the works by relying on the usage of the offer_in_progress counter and
>> > of the channel_mutex mutex.  This synchronization is already in place.
>> > So, remove this unnecessary "bind to the connect CPU" constraint and
>> > update the inline comments accordingly.
>> >
>> > Suggested-by: Dexuan Cui <decui@...rosoft.com>
>> > Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@...il.com>
>> > ---
>> >  drivers/hv/channel_mgmt.c | 21 ++++++++++++++++-----
>> >  drivers/hv/vmbus_drv.c    | 39 ++++++++++++++++++++++++++++-----------
>> >  2 files changed, 44 insertions(+), 16 deletions(-)
>> >
>> > diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
>> > index 0370364169c4e..1191f3d76d111 100644
>> > --- a/drivers/hv/channel_mgmt.c
>> > +++ b/drivers/hv/channel_mgmt.c
>> > @@ -1025,11 +1025,22 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
>> >  	 * offer comes in first and then the rescind.
>> >  	 * Since we process these events in work elements,
>> >  	 * and with preemption, we may end up processing
>> > -	 * the events out of order. Given that we handle these
>> > -	 * work elements on the same CPU, this is possible only
>> > -	 * in the case of preemption. In any case wait here
>> > -	 * until the offer processing has moved beyond the
>> > -	 * point where the channel is discoverable.
>> > +	 * the events out of order.  We rely on the synchronization
>> > +	 * provided by offer_in_progress and by channel_mutex for
>> > +	 * ordering these events:
>> > +	 *
>> > +	 * { Initially: offer_in_progress = 1 }
>> > +	 *
>> > +	 * CPU1				CPU2
>> > +	 *
>> > +	 * [vmbus_process_offer()]	[vmbus_onoffer_rescind()]
>> > +	 *
>> > +	 * LOCK channel_mutex		WAIT_ON offer_in_progress == 0
>> > +	 * DECREMENT offer_in_progress	LOCK channel_mutex
>> > +	 * INSERT chn_list		SEARCH chn_list
>> > +	 * UNLOCK channel_mutex		UNLOCK channel_mutex
>> > +	 *
>> > +	 * Forbids: CPU2's SEARCH from *not* seeing CPU1's INSERT
>> 
>> WAIT_ON offer_in_progress == 0
>> LOCK channel_mutex
>> 
>> seems to be racy: what happens if offer_in_progress increments after we
>> read it but before we managed to acquire channel_mutex?
>
> Remark that the RESCIND work must see the increment which is performed
> "before" queueing the work in question (and the associated OFFER work),
> cf. the comment in vmbus_on_msg_dpc() below and
>
>   dbb92f88648d6 ("workqueue: Document (some) memory-ordering properties of {queue,schedule}_work()")
>
> AFAICT, this suffices to meet the intended behavior as sketched above.
> I might be missing something of course, can you elaborate on the issue
> here?
>

In case we believe that the OFFER -> RESCIND sequence is always ordered
by the host AND we don't care about other offers in the queue, the
suggested locking is OK: we're guaranteed to process RESCIND after we
have finished processing OFFER for the same channel. However, waiting for
'offer_in_progress == 0' looks fishy, so I'd suggest we at least add a
comment explaining that the wait is only needed to serialize us with a
possible OFFER for the same channel - and nothing else. I'd personally
still slightly prefer the algorithm I suggested, as it guarantees we take
channel_mutex with offer_in_progress == 0 -- even if there are no issues
we can think of today (not strongly though).

-- 
Vitaly
