Date:   Thu, 17 Jan 2019 09:11:03 -0800
From:   Stephen Hemminger <stephen@...workplumber.org>
To:     Dexuan Cui <decui@...rosoft.com>
Cc:     kimbrownkd <kimbrownkd@...il.com>,
        Michael Kelley <mikelley@...rosoft.com>,
        Long Li <longli@...rosoft.com>,
        Sasha Levin <Alexander.Levin@...rosoft.com>,
        "devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] Drivers: hv: vmbus: Expose counters for interrupts
 and full conditions



> +static ssize_t channel_intr_in_full_show(const struct vmbus_channel *channel,
> +					 char *buf)
> +{
> +	return sprintf(buf, "%llu\n", channel->intr_in_full);
> +}


intr_in_full is u64, which is not necessarily the same type as
unsigned long long. To be correct with the "%llu" format you need a
cast here.

> > diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
> > index dcb6977afce9..7e5239123276 100644
> > --- a/include/linux/hyperv.h
> > +++ b/include/linux/hyperv.h
> > @@ -751,6 +751,27 @@ struct vmbus_channel {
> >  	u64	interrupts;	/* Host to Guest interrupts */
> >  	u64	sig_events;	/* Guest to Host events */
> > 
> > +	/* Interrupt counts for 2 types of Guest to Host interrupts */
> > +	u64 intr_in_full;	/* in ring buffer, full to not full */
> > +	u64 intr_out_empty;	/* out ring buffer, empty to not empty */
> > +
> > +	/*
> > +	 * The total number of write operations that encountered a full
> > +	 * outbound ring buffer.
> > +	 */
> > +	u64 out_full_total;
> > +	/*
> > +	 * The number of write operations that were the first to encounter a
> > +	 * full outbound ring buffer.
> > +	 */
> > +	u64 out_full_first;

Adding more fields changes the cache layout, which can cause
additional cache misses in the hot path.

> > +	/*
> > +	 * Indicates that a full outbound ring buffer was encountered. The flag
> > +	 * is set to true when a full outbound ring buffer is encountered and
> > +	 * set to false when a write to the outbound ring buffer is completed.
> > +	 */
> > +	bool out_full_flag;

There has been discussion on the kernel mailing list recommending
against putting bool in structures, since each bool member takes at
least a full byte (plus padding) where a single bit would do. Could
this be part of a bitfield?

> >  	/* Channel callback's invoked in softirq context */
> >  	struct tasklet_struct callback_event;
> >  	void (*onchannel_callback)(void *context);
> > @@ -936,6 +957,23 @@ static inline void *get_per_channel_state(struct vmbus_channel *c)
> >  static inline void set_channel_pending_send_size(struct vmbus_channel *c,
> >  						 u32 size)
> >  {
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&c->outbound.ring_lock, flags);
> > +
> > +	if (size) {
> > +		++c->out_full_total;
> > +
> > +		if (!c->out_full_flag) {
> > +			++c->out_full_first;
> > +			c->out_full_flag = true;
> > +		}
> > +	} else {
> > +		c->out_full_flag = false;
> > +	}
> > +
> > +	spin_unlock_irqrestore(&c->outbound.ring_lock, flags);

If this is called often, the additional locking will impact performance.

> >  	c->outbound.ring_buffer->pending_send_sz = size;
> >  }
> > 

Could I propose an alternative?

It might be more useful to count the guest-to-host interaction events
rather than the ring buffer state transitions.

For example, the number of calls to:
	vmbus_set_event, which means a host exit call
	the vmbus_setevent fastpath using sync_set_bit
	ringbuffer_write calls that returned -EAGAIN

These would require less locking, reuse existing code paths,
and not require additional state.
