Message-ID: <SN2PR03MB214246D9A9DAFA36E392AE78A08C0@SN2PR03MB2142.namprd03.prod.outlook.com>
Date:	Fri, 18 Mar 2016 18:02:53 +0000
From:	KY Srinivasan <kys@...rosoft.com>
To:	Vitaly Kuznetsov <vkuznets@...hat.com>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Haiyang Zhang" <haiyangz@...rosoft.com>,
	"Alex Ng (LIS)" <alexng@...rosoft.com>,
	"Radim Krcmar" <rkrcmar@...hat.com>,
	Cathy Avery <cavery@...hat.com>
Subject: RE: [PATCH] Drivers: hv: vmbus: handle various crash scenarios



> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets@...hat.com]
> Sent: Friday, March 18, 2016 5:33 AM
> To: devel@...uxdriverproject.org
> Cc: linux-kernel@...r.kernel.org; KY Srinivasan <kys@...rosoft.com>;
> Haiyang Zhang <haiyangz@...rosoft.com>; Alex Ng (LIS)
> <alexng@...rosoft.com>; Radim Krcmar <rkrcmar@...hat.com>; Cathy
> Avery <cavery@...hat.com>
> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> 
> Kdump keeps biting. It turns out CHANNELMSG_UNLOAD_RESPONSE is always
> delivered to CPU0, regardless of which CPU we send CHANNELMSG_UNLOAD
> from. vmbus_wait_for_unload() doesn't account for this: when we're
> crashing on some other CPU while CPU0 is still alive and operational,
> CHANNELMSG_UNLOAD_RESPONSE is delivered to CPU0 and completes
> vmbus_connection.unload_event, so our wait on the current CPU never
> ends.

What was the host you were testing on?

K. Y
> 
> Do the following:
> 1) Check for completion_done() in the loop. If the interrupt handler on
>    CPU0 is still alive, we'll get the confirmation we need.
> 
> 2) Always read CPU0's message page, as CHANNELMSG_UNLOAD_RESPONSE will
>    be delivered there. We can race with a still-alive interrupt handler
>    doing the same, but we don't care as we're checking completion_done()
>    now.
> 
> 3) Clean up message pages on all CPUs. This is required (at least for
>    the current CPU, as we're clearing CPU0's messages now, but we may
>    want to bring up additional CPUs on crash) as new messages won't be
>    delivered until we consume what's pending. On boot we'll place the
>    message pages somewhere else, so we won't be able to read the stale
>    messages.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
>  1 file changed, 25 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index b10e8f74..5f37057 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -512,14 +512,26 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
> 
>  static void vmbus_wait_for_unload(void)
>  {
> -	int cpu = smp_processor_id();
> -	void *page_addr = hv_context.synic_message_page[cpu];
> +	int cpu;
> +	void *page_addr = hv_context.synic_message_page[0];
>  	struct hv_message *msg = (struct hv_message *)page_addr +
>  				  VMBUS_MESSAGE_SINT;
>  	struct vmbus_channel_message_header *hdr;
>  	bool unloaded = false;
> 
> -	while (1) {
> +	/*
> +	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to CPU0. When we're
> +	 * crashing on a different CPU let's hope that IRQ handler on CPU0 is
> +	 * still functional and vmbus_unload_response() will complete
> +	 * vmbus_connection.unload_event. If not, the last thing we can do is
> +	 * read message page for CPU0 regardless of what CPU we're on.
> +	 */
> +	while (!unloaded) {
> +		if (completion_done(&vmbus_connection.unload_event)) {
> +			unloaded = true;
> +			break;
> +		}
> +
>  		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
>  			mdelay(10);
>  			continue;
> @@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)
>  			unloaded = true;
> 
>  		vmbus_signal_eom(msg);
> +	}
> 
> -		if (unloaded)
> -			break;
> +	/*
> +	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
> +	 * maybe-pending messages on all CPUs to be able to receive new
> +	 * messages after we reconnect.
> +	 */
> +	for_each_online_cpu(cpu) {
> +		page_addr = hv_context.synic_message_page[cpu];
> +		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
> +		msg->header.message_type = HVMSG_NONE;
>  	}
>  }
> 
> --
> 2.5.0
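
For illustration, here is a minimal userspace sketch of the polling pattern
the patch introduces: poll the completion first, fall back to reading CPU0's
message page, then clear the message pages on all CPUs. The kernel
primitives (completion_done(), mdelay(), vmbus_signal_eom(), the per-CPU
synic message pages) are replaced by hypothetical stand-ins so the control
flow can be compiled and traced on its own; this is a sketch of the logic,
not the driver code.

#include <stdbool.h>
#include <stdio.h>

#define HVMSG_NONE                 0
#define CHANNELMSG_UNLOAD_RESPONSE 16
#define NR_CPUS                    4

static int unload_event_done;      /* stand-in for completion_done(&unload_event) */
static int message_type[NR_CPUS];  /* stand-in for the per-CPU synic message pages */

static void wait_for_unload(void)
{
	bool unloaded = false;
	int cpu;

	/*
	 * The response is always delivered to CPU0: poll the completion
	 * first (CPU0's IRQ handler may have consumed the message), then
	 * fall back to reading CPU0's message page directly.
	 */
	while (!unloaded) {
		if (unload_event_done) {
			unloaded = true;
			break;
		}
		if (message_type[0] == HVMSG_NONE)
			continue;	/* the kernel code mdelay(10)s here */
		if (message_type[0] == CHANNELMSG_UNLOAD_RESPONSE)
			unloaded = true;
		message_type[0] = HVMSG_NONE;	/* stand-in for vmbus_signal_eom() */
	}

	/*
	 * Clear maybe-pending messages on all CPUs so that new messages
	 * can be delivered after reconnecting.
	 */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		message_type[cpu] = HVMSG_NONE;
}

int main(void)
{
	message_type[0] = CHANNELMSG_UNLOAD_RESPONSE;	/* simulate the host's response */
	wait_for_unload();
	printf("unload handshake complete\n");
	return 0;
}

Compiled with any C compiler, running this prints "unload handshake
complete" once the simulated response has been consumed and the pages
cleared.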
