Message-ID: <091d0fd6321f4dd490e61a574d5b5b50@SIXPR30MB031.064d.mgd.msft.net>
Date:	Fri, 24 Apr 2015 08:40:40 +0000
From:	Dexuan Cui <decui@...rosoft.com>
To:	Vitaly Kuznetsov <vkuznets@...hat.com>,
	KY Srinivasan <kys@...rosoft.com>
CC:	Haiyang Zhang <haiyangz@...rosoft.com>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 5/6] Drivers: hv: vmbus: distribute subchannels among all vcpus

> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets@...hat.com]
> Sent: Tuesday, April 21, 2015 22:28
> To: KY Srinivasan
> Cc: Haiyang Zhang; devel@...uxdriverproject.org; linux-
> kernel@...r.kernel.org; Dexuan Cui
> Subject: [PATCH 5/6] Drivers: hv: vmbus: distribute subchannels among all
> vcpus
>
> Primary channels are distributed evenly across all vcpus we have. When the
> host asks us to create subchannels it usually makes us num_cpus-1 offers

Hi Vitaly,
AFAIK, in the VSP of storvsc, the number of subchannels is
(the_number_of_vcpus - 1) / 4.

This means for an 8-vCPU guest, there is only 1 subchannel.
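
Just to illustrate the arithmetic, a quick user-space sketch (sc_count()
is an illustrative name, not a real storvsc function):

#include <stdio.h>

/* Hypothetical helper: storvsc-style subchannel count, (vcpus - 1) / 4. */
static unsigned int sc_count(unsigned int vcpus)
{
        return (vcpus - 1) / 4;
}

int main(void)
{
        for (unsigned int v = 1; v <= 16; v++)
                printf("%2u vCPUs -> %u subchannel(s)\n", v, sc_count(v));
        return 0;
}

For v = 8 this prints 1 subchannel, as above.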

Your new algorithm tends to make the lower-numbered vCPUs busier:
e.g., in the 8-vCPU case, assuming we have 4 SCSI controllers:
vCPU0: scsi0's PrimaryChannel (P)
vCPU1: scsi0's SubChannel (S) + scsi1's P
vCPU2: scsi1's S + scsi2's P
vCPU3: scsi2's S + scsi3's P
vCPU4: scsi3's S
vCPU5, 6 and 7 are idle.

In this special case, the existing algorithm is better. :-)
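
To make the imbalance concrete, here is a small user-space simulation of
the placement your patch proposes, for the 8-vCPU / 4-controller case
above (primaries handed out round-robin starting at vCPU0 as in the table,
each subchannel on primary + 1; this is only a sketch of the policy, not
the kernel code itself):

#include <stdio.h>

#define MAX_CPUS 8
#define NUM_DEVS 4      /* four SCSI controllers, one subchannel each */

int main(void)
{
        unsigned int load[MAX_CPUS] = { 0 };
        unsigned int primary = 0;

        for (unsigned int d = 0; d < NUM_DEVS; d++) {
                load[primary]++;                        /* scsiN's P */
                load[(primary + 1) % MAX_CPUS]++;       /* scsiN's S */
                primary = (primary + 1) % MAX_CPUS;     /* next primary */
        }
        for (unsigned int c = 0; c < MAX_CPUS; c++)
                printf("vCPU%u: %u channel(s)\n", c, load[c]);
        return 0;
}

This prints 1/2/2/2/1 channels on vCPU0..4 and 0 on vCPU5..7, i.e. exactly
the layout above.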

However, I do like the idea in your patch of making sure a device's
primary/sub channels are assigned to different vCPUs.

I'm just wondering if we should use an even better (and more complex) algorithm :-)
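
For instance (purely hypothetical, not what your patch does), we could keep
a per-CPU channel count and always bind a new channel to the least-loaded
CPU that doesn't already host one of the device's channels. A user-space
sketch of that selection:

#include <stdio.h>

#define MAX_CPUS 8

static unsigned int load[MAX_CPUS];     /* channels already bound per CPU */

/* Least-loaded CPU, skipping CPUs this device already uses. */
static unsigned int pick_cpu(const int mine[MAX_CPUS])
{
        unsigned int best = 0;
        int found = 0;

        for (unsigned int c = 0; c < MAX_CPUS; c++) {
                if (mine[c])
                        continue;
                if (!found || load[c] < load[best]) {
                        best = c;
                        found = 1;
                }
        }
        return best;
}

int main(void)
{
        for (int d = 0; d < 4; d++) {            /* 4 devices ... */
                int mine[MAX_CPUS] = { 0 };

                for (int ch = 0; ch < 2; ch++) { /* ... with P + S each */
                        unsigned int cpu = pick_cpu(mine);

                        mine[cpu] = 1;
                        load[cpu]++;
                        printf("scsi%d %s -> vCPU%u\n",
                               d, ch ? "S" : "P", cpu);
                }
        }
        return 0;
}

With the same 8-vCPU / 4-controller input this spreads the 8 channels over
all 8 vCPUs, one each.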

PS: yeah, for netvsc (HV_NIC_GUID), the number of subchannels is indeed
the_number_of_vcpus - 1. I'm not sure about the upcoming HV_ND_GUID --
maybe it's the same as HV_NIC_GUID.

Thanks,
-- Dexuan

> and we are supposed to distribute the work evenly among the channel
> itself and all its subchannels. Make sure they are all assigned to
> different vcpus.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  drivers/hv/channel_mgmt.c | 29 ++++++++++++++++++++++++++++-
>  1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index 8f2761f..daa6417 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -270,6 +270,8 @@ static void init_vp_index(struct vmbus_channel *channel,
>       int i;
>       bool perf_chn = false;
>       u32 max_cpus = num_online_cpus();
> +     struct vmbus_channel *primary = channel->primary_channel, *prev;
> +     unsigned long flags;
>
>       for (i = IDE; i < MAX_PERF_CHN; i++) {
>               if (!memcmp(type_guid->b, hp_devs[i].guid,
> @@ -290,7 +292,32 @@ static void init_vp_index(struct vmbus_channel *channel,
>               channel->target_vp = 0;
>               return;
>       }
> -     cur_cpu = (++next_vp % max_cpus);
> +
> +     /*
> +      * Primary channels are distributed evenly across all vcpus we have.
> +      * When the host asks us to create subchannels it usually makes us
> +      * num_cpus-1 offers and we are supposed to distribute the work evenly
> +      * among the channel itself and all its subchannels. Make sure they are
> +      * all assigned to different vcpus.
> +      */
> +     if (!primary)
> +             cur_cpu = (++next_vp % max_cpus);
> +     else {
> +             /*
> +              * Let's assign the first subchannel of a channel to the
> +              * primary->target_cpu+1 and all the subsequent channels to
> +              * the prev->target_cpu+1.
> +              */
> +             spin_lock_irqsave(&primary->lock, flags);
> +             if (primary->num_sc == 1)
> +                     cur_cpu = (primary->target_cpu + 1) % max_cpus;
> +             else {
> +                     prev = list_prev_entry(channel, sc_list);
> +                     cur_cpu = (prev->target_cpu + 1) % max_cpus;
> +             }
> +             spin_unlock_irqrestore(&primary->lock, flags);
> +     }
> +
>       channel->target_cpu = cur_cpu;
>       channel->target_vp = hv_context.vp_index[cur_cpu];
>  }
> --
> 1.9.3
