Message-ID: <553E47C6.8040200@oracle.com>
Date: Mon, 27 Apr 2015 10:29:26 -0400
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: David Vrabel <david.vrabel@...rix.com>, konrad.wilk@...cle.com
CC: xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen: Suspend ticks on all CPUs during suspend
On 04/27/2015 06:33 AM, David Vrabel wrote:
> On 08/04/15 19:53, Boris Ostrovsky wrote:
>> Commit 77e32c89a711 ("clockevents: Manage device's state separately for
>> the core") decouples clockevent device's modes from states. With this
>> change, when a Xen guest tries to resume it won't call its
>> set_mode op, which needs to be invoked on each VCPU in order to make the
>> hypervisor aware that we are in oneshot mode.
>>
>> This happens because clockevents_tick_resume() (which is an intermediate
>> step of resuming ticks on a processor) no longer calls clockevents_set_state()
>> and because during suspend clockevent devices on all VCPUs (except for the
>> one doing the suspend) are left in ONESHOT state. As a result, during resume
>> the clockevents state machine will assume that the device is already where it
>> should be and does not need to be updated.
>>
>> To avoid this problem we should suspend ticks on all VCPUs during
>> suspend.
> Sorry for the delay in reviewing this.
>
>> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
>> index bf19407..2fd9fe8 100644
>> --- a/drivers/xen/manage.c
>> +++ b/drivers/xen/manage.c
>> @@ -131,6 +131,8 @@ static void do_suspend(void)
>> goto out_resume;
>> }
>>
>> + xen_arch_suspend();
>> +
>> si.cancelled = 1;
> xen_arch_resume() is only called when !si.cancelled, but you call
> xen_arch_suspend() unconditionally.
Good point. Let me see if I can move this to xen_arch_post_suspend, when
we know whether the suspend has been canceled.
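[Editor's note: one possible shape of that rearrangement, sketched here purely for illustration. It assumes xen_arch_post_suspend() receives the cancellation status, as it does on x86; the helper name xen_resume_ticks_on_all_cpus() is invented and is not an actual kernel symbol.]

```c
/* Illustrative sketch, not the actual patch: if the suspend was
 * cancelled, xen_arch_resume() never runs, so ticks suspended
 * before the suspend attempt must be brought back here instead. */
void xen_arch_post_suspend(int cancelled)
{
	/* ... existing post-suspend work ... */
	if (cancelled)
		xen_resume_ticks_on_all_cpus();	/* hypothetical helper */
}
```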
-boris