lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <DM5PR2101MB104720061795F26D892D5373D7CB0@DM5PR2101MB1047.namprd21.prod.outlook.com>
Date:   Mon, 30 Mar 2020 19:49:00 +0000
From:   Michael Kelley <mikelley@...rosoft.com>
To:     Andrea Parri <parri.andrea@...il.com>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        KY Srinivasan <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        Stephen Hemminger <sthemmin@...rosoft.com>,
        Wei Liu <wei.liu@...nel.org>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        Dexuan Cui <decui@...rosoft.com>,
        Boqun Feng <boqun.feng@...il.com>,
        vkuznets <vkuznets@...hat.com>,
        "James E.J. Bottomley" <jejb@...ux.ibm.com>,
        "Martin K. Petersen" <martin.petersen@...cle.com>,
        "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>
Subject: RE: [RFC PATCH 11/11] scsi: storvsc: Re-init stor_chns when a channel
 interrupt is re-assigned

From: Andrea Parri <parri.andrea@...il.com> Sent: Monday, March 30, 2020 11:55 AM
> 
> > > @@ -1721,6 +1721,10 @@ static ssize_t target_cpu_store(struct vmbus_channel
> *channel,
> > >  	 * in on a CPU that is different from the channel target_cpu value.
> > >  	 */
> > >
> > > +	if (channel->change_target_cpu_callback)
> > > +		(*channel->change_target_cpu_callback)(channel,
> > > +				channel->target_cpu, target_cpu);
> > > +
> > >  	channel->target_cpu = target_cpu;
> > >  	channel->target_vp = hv_cpu_number_to_vp_number(target_cpu);
> > >  	channel->numa_node = cpu_to_node(target_cpu);
> >
> > I think there's an ordering problem here.  The change_target_cpu_callback
> > will allow storvsc to flush the cache that it is keeping, but there's a window
> > after the storvsc callback releases the spin lock and before this function
> > changes channel->target_cpu to the new value.  In that window, the cache
> > could get refilled based on the old value of channel->target_cpu, which is
> > exactly what we don't want.  Generally with caches, you have to set the new
> > value first, then flush the cache, and I think that works in this case.  The
> > callback function doesn't depend on the value of channel->target_cpu,
> > and any cache filling that might happen after channel->target_cpu is set
> > to the new value but before the callback function runs is OK.   But please
> > double-check my thinking. :-)
> 
> Sorry, I don't see the problem.  AFAICT, the "cache" gets refilled based
> on the values of alloced_cpus and on the current state of the cache but
> not based on the value of channel->target_cpu.  The callback invocation
> uses the value of the "old" target_cpu; I think I ended up placing the
> callback call where it is for not having to introduce a local variable
> "old_cpu".  ;-)
>

You are right.   My comment is bogus.

> 
> > > @@ -621,6 +621,63 @@ static inline struct storvsc_device *get_in_stor_device(
> > >
> > >  }
> > >
> > > +void storvsc_change_target_cpu(struct vmbus_channel *channel, u32 old, u32 new)
> > > +{
> > > +	struct storvsc_device *stor_device;
> > > +	struct vmbus_channel *cur_chn;
> > > +	bool old_is_alloced = false;
> > > +	struct hv_device *device;
> > > +	unsigned long flags;
> > > +	int cpu;
> > > +
> > > +	device = channel->primary_channel ?
> > > +			channel->primary_channel->device_obj
> > > +				: channel->device_obj;
> > > +	stor_device = get_out_stor_device(device);
> > > +	if (!stor_device)
> > > +		return;
> > > +
> > > +	/* See storvsc_do_io() -> get_og_chn(). */
> > > +	spin_lock_irqsave(&device->channel->lock, flags);
> > > +
> > > +	/*
> > > +	 * Determines if the storvsc device has other channels assigned to
> > > +	 * the "old" CPU to update the alloced_cpus mask and the stor_chns
> > > +	 * array.
> > > +	 */
> > > +	if (device->channel != channel && device->channel->target_cpu == old) {
> > > +		cur_chn = device->channel;
> > > +		old_is_alloced = true;
> > > +		goto old_is_alloced;
> > > +	}
> > > +	list_for_each_entry(cur_chn, &device->channel->sc_list, sc_list) {
> > > +		if (cur_chn == channel)
> > > +			continue;
> > > +		if (cur_chn->target_cpu == old) {
> > > +			old_is_alloced = true;
> > > +			goto old_is_alloced;
> > > +		}
> > > +	}
> > > +
> > > +old_is_alloced:
> > > +	if (old_is_alloced)
> > > +		WRITE_ONCE(stor_device->stor_chns[old], cur_chn);
> > > +	else
> > > +		cpumask_clear_cpu(old, &stor_device->alloced_cpus);
> >
> > I think target_cpu_store() can get called in parallel on multiple CPUs for different
> > channels on the same storvsc device, but multiple changes to a single channel are
> > serialized by higher levels of sysfs.  So this function could run after multiple
> > channels have been changed, in which case there's not just a single "old" value,
> > and the above algorithm might not work, especially if channel->target_cpu is
> > updated before calling this function per my earlier comment.   I can see a
> > couple of possible ways to deal with this.  One is to put the update of
> > channel->target_cpu in this function, within the spin lock boundaries so
> > that the cache flush and target_cpu update are atomic.  Another idea is to
> > process multiple changes in this function, by building a temp copy of
> > alloced_cpus by walking the channel list, use XOR to create a cpumask
> > with changes, and then process all the changes in a loop instead of
> > just handling a single change as with the current code at the old_is_alloced
> > label.  But I haven't completely thought through this idea.
> 
> Same here: the invocations of target_cpu_store() are serialized on the
> per-connection channel_mutex...

Agreed.  My comment is not valid.

> 
> 
> > > @@ -1268,8 +1330,10 @@ static struct vmbus_channel *get_og_chn(struct
> storvsc_device
> > > *stor_device,
> > >  		if (cpumask_test_cpu(tgt_cpu, node_mask))
> > >  			num_channels++;
> > >  	}
> > > -	if (num_channels == 0)
> > > +	if (num_channels == 0) {
> > > +		stor_device->stor_chns[q_num] = stor_device->device->channel;
> >
> > Is the above added line just fixing a bug in the existing code?  I'm not seeing how
> > it would derive from the other changes in this patch.
> 
> It was rather intended as an optimization:  Each time I/O for a device
> is initiated on a CPU that have "num_channels == 0" channel, the current
> code ends up calling get_og_chn() (in the attempt to fill the cache) and
> returns the device's primary channel.  In the current code, the cost of
> this operations is basically the cost of parsing alloced_cpus, but with
> the changes introduced here this also involves acquiring (and releasing)
> the primary channel's lock.  I should probably put my hands forward and
> say that I haven't observed any measurable effects due this addition in
> my experiments; OTOH, caching the returned/"found" value made sense...

OK.  That's what I thought.  The existing code does not produce an incorrect
result, but the cache isn't working as intended.  This fixes it.

> 
> 
> > > @@ -1324,7 +1390,10 @@ static int storvsc_do_io(struct hv_device *device,
> > >  					continue;
> > >  				if (tgt_cpu == q_num)
> > >  					continue;
> > > -				channel = stor_device->stor_chns[tgt_cpu];
> > > +				channel = READ_ONCE(
> > > +					stor_device->stor_chns[tgt_cpu]);
> > > +				if (channel == NULL)
> > > +					continue;
> >
> > The channel == NULL case is new because a cache flush could be happening
> > in parallel on another CPU.  I'm wondering about the tradeoffs of
> > continuing in the loop (as you have coded in this patch) vs. a "goto" back to
> > the top level "if" statement.   With the "continue" you might finish the
> > loop without finding any matches, and fall through to the next approach.
> > But it's only a single I/O operation, and if it comes up with a less than
> > optimal channel choice, it's no big deal.  So I guess it's really a wash.
> 
> Yes, I considered both approaches; they both "worked" here.  I was a
> bit concerned about the number of "possible" gotos (again, mainly a
> theoretical issue, since I can imagine that the cash flushes will be
> relatively "rare" events in most cases and, in any case, they happen
> to be serialized); the "continue" looked like a suitable and simpler
> approach/compromise, at least for the time being.

Yes, I'm OK with your patch "as is".  I was just thinking about the
alternative, and evidently you did too.

> 
> 
> >
> > >  				if (hv_get_avail_to_write_percent(
> > >  							&channel->outbound)
> > >  						> ring_avail_percent_lowater) {
> > > @@ -1350,7 +1419,10 @@ static int storvsc_do_io(struct hv_device *device,
> > >  			for_each_cpu(tgt_cpu, &stor_device->alloced_cpus) {
> > >  				if (cpumask_test_cpu(tgt_cpu, node_mask))
> > >  					continue;
> > > -				channel = stor_device->stor_chns[tgt_cpu];
> > > +				channel = READ_ONCE(
> > > +					stor_device->stor_chns[tgt_cpu]);
> > > +				if (channel == NULL)
> > > +					continue;
> >
> > Same comment here.
> 
> Similarly here.

Agreed.

> 
> Thoughts?
> 
> Thanks,
>   Andrea

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ