Message-ID: <20220803043709.GA26795@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
Date:   Tue, 2 Aug 2022 21:37:09 -0700
From:   Saurabh Singh Sengar <ssengar@...ux.microsoft.com>
To:     Praveen Kumar <kumarpraveen@...ux.microsoft.com>
Cc:     kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
        wei.liu@...nel.org, decui@...rosoft.com, jejb@...ux.ibm.com,
        martin.petersen@...cle.com, linux-hyperv@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org
Subject: Re: [PATCH] Drivers: hv: vmbus: Optimize vmbus_on_event

Thanks for your review; please find my comments inline.

On Tue, Aug 02, 2022 at 01:44:23PM +0530, Praveen Kumar wrote:
> On 25-07-2022 15:07, Saurabh Sengar wrote:
> > In the vmbus_on_event loop, the 2-jiffies time limit does not serve its
> > purpose if callback_fn takes longer than that. To keep the check effective,
> > move it inside the callback functions where it is needed. Of all the VMbus
> > drivers using vmbus_on_event, only storvsc has a high packet volume, so add
> > this limit only to the storvsc callback for now.
> > The loop itself has no apparent benefit, because the tasklet will be
> > scheduled again anyway if packets are left in the ring buffer. This patch
> > removes the now-unnecessary loop as well.
> > 
> 
> In my understanding, the loop was there to optimize host-to-guest signaling
> for batched channels, and it ensures that we process all the messages posted
> by the host before returning from the respective callback.
> 
> Am I missing something here?

Out of all the drivers using vmbus_on_event, only storvsc has a high packet
volume. Its callback, storvsc_on_channel_callback, already contains a loop
that checks whether any completion packets are left. After this change, with
the timeout moved inside the storvsc callback, the callback may return with
packets still in the ring buffer; in that case the tasklet is simply
rescheduled. The function handles a single ring buffer per call; there is no
batching.
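
For illustration only, here is a minimal self-contained user-space sketch of
the same time-budget pattern (the names work_queue, worker and
reschedule_worker are made up for this sketch and are not kernel APIs): drain
pending items within a fixed budget and, if the budget expires with items
still pending, hand the remainder to a later run, analogous to
tasklet_schedule() in the patch quoted below.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define CALLBACK_TIMEOUT_MS	2	/* mirrors CALLBACK_TIMEOUT in the patch */

struct work_queue {
	int pending;		/* items still waiting, like packets in a ring buffer */
};

static long elapsed_ms(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000 +
	       (now.tv_nsec - start->tv_nsec) / 1000000;
}

/* stands in for tasklet_schedule(): the worker will run again later */
static void reschedule_worker(const struct work_queue *q)
{
	printf("budget expired, rescheduled with %d item(s) left\n", q->pending);
}

/* stands in for the channel callback: drain items within the time budget */
static void worker(struct work_queue *q)
{
	struct timespec start;

	clock_gettime(CLOCK_MONOTONIC, &start);

	while (q->pending > 0) {
		if (elapsed_ms(&start) > CALLBACK_TIMEOUT_MS) {
			reschedule_worker(q);
			return;
		}
		/* pretend each completion takes ~100us of work */
		nanosleep(&(struct timespec){ .tv_nsec = 100 * 1000 }, NULL);
		q->pending--;
	}
}

int main(void)
{
	struct work_queue q = { .pending = 100 };

	/* in the kernel the tasklet core re-runs the worker; here we just loop */
	while (q.pending > 0)
		worker(&q);

	printf("all items processed\n");
	return 0;
}

Built with plain cc, this should print a handful of "rescheduled" lines before
"all items processed", which is the behavior the patch relies on: leftover
work is not lost, it is just picked up on the next scheduled run.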

- Saurabh
> 
> > Signed-off-by: Saurabh Sengar <ssengar@...ux.microsoft.com>
> > ---
> >  drivers/hv/connection.c    | 33 ++++++++++++++-------------------
> >  drivers/scsi/storvsc_drv.c |  9 +++++++++
> >  2 files changed, 23 insertions(+), 19 deletions(-)
> > 
> > diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
> > index eca7afd..9dc27e5 100644
> > --- a/drivers/hv/connection.c
> > +++ b/drivers/hv/connection.c
> > @@ -431,34 +431,29 @@ struct vmbus_channel *relid2channel(u32 relid)
> >  void vmbus_on_event(unsigned long data)
> >  {
> >  	struct vmbus_channel *channel = (void *) data;
> > -	unsigned long time_limit = jiffies + 2;
> > +	void (*callback_fn)(void *context);
> >  
> >  	trace_vmbus_on_event(channel);
> >  
> >  	hv_debug_delay_test(channel, INTERRUPT_DELAY);
> > -	do {
> > -		void (*callback_fn)(void *);
> >  
> > -		/* A channel once created is persistent even when
> > -		 * there is no driver handling the device. An
> > -		 * unloading driver sets the onchannel_callback to NULL.
> > -		 */
> > -		callback_fn = READ_ONCE(channel->onchannel_callback);
> > -		if (unlikely(callback_fn == NULL))
> > -			return;
> > -
> > -		(*callback_fn)(channel->channel_callback_context);
> > +	/* A channel once created is persistent even when
> > +	 * there is no driver handling the device. An
> > +	 * unloading driver sets the onchannel_callback to NULL.
> > +	 */
> > +	callback_fn = READ_ONCE(channel->onchannel_callback);
> > +	if (unlikely(!callback_fn))
> > +		return;
> >  
> > -		if (channel->callback_mode != HV_CALL_BATCHED)
> > -			return;
> > +	(*callback_fn)(channel->channel_callback_context);
> >  
> > -		if (likely(hv_end_read(&channel->inbound) == 0))
> > -			return;
> > +	if (channel->callback_mode != HV_CALL_BATCHED)
> > +		return;
> >  
> > -		hv_begin_read(&channel->inbound);
> > -	} while (likely(time_before(jiffies, time_limit)));
> > +	if (likely(hv_end_read(&channel->inbound) == 0))
> > +		return;
> >  
> > -	/* The time limit (2 jiffies) has been reached */
> > +	hv_begin_read(&channel->inbound);
> >  	tasklet_schedule(&channel->callback_event);
> >  }
> >  
> > diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> > index fe000da..c457e6b 100644
> > --- a/drivers/scsi/storvsc_drv.c
> > +++ b/drivers/scsi/storvsc_drv.c
> > @@ -60,6 +60,9 @@
> >  #define VMSTOR_PROTO_VERSION_WIN8_1	VMSTOR_PROTO_VERSION(6, 0)
> >  #define VMSTOR_PROTO_VERSION_WIN10	VMSTOR_PROTO_VERSION(6, 2)
> >  
> > +/* channel callback timeout in ms */
> > +#define CALLBACK_TIMEOUT               2
> > +
> >  /*  Packet structure describing virtual storage requests. */
> >  enum vstor_packet_operation {
> >  	VSTOR_OPERATION_COMPLETE_IO		= 1,
> > @@ -1204,6 +1207,7 @@ static void storvsc_on_channel_callback(void *context)
> >  	struct hv_device *device;
> >  	struct storvsc_device *stor_device;
> >  	struct Scsi_Host *shost;
> > +	unsigned long time_limit = jiffies + msecs_to_jiffies(CALLBACK_TIMEOUT);
> >  
> >  	if (channel->primary_channel != NULL)
> >  		device = channel->primary_channel->device_obj;
> > @@ -1224,6 +1228,11 @@ static void storvsc_on_channel_callback(void *context)
> >  		u32 minlen = rqst_id ? sizeof(struct vstor_packet) :
> >  			sizeof(enum vstor_packet_operation);
> >  
> > +		if (unlikely(time_after(jiffies, time_limit))) {
> > +			hv_pkt_iter_close(channel);
> > +			return;
> > +		}
> > +
> >  		if (pktlen < minlen) {
> >  			dev_err(&device->device,
> >  				"Invalid pkt: id=%llu, len=%u, minlen=%u\n",
> 
> Regards,
> 
> ~Praveen.
