Message-ID: <87v8gfo9rg.wl-maz@kernel.org>
Date: Fri, 26 May 2023 08:02:43 +0100
From: Marc Zyngier <maz@...nel.org>
To: wangwudi <wangwudi@...ilicon.com>
Cc: <linux-kernel@...r.kernel.org>
Subject: Re: [Question about gic vmovp cmd] Consider adding VINVALL after VMOVP
On Fri, 26 May 2023 07:04:34 +0100,
wangwudi <wangwudi@...ilicon.com> wrote:
>
> Hi Marc,
>
> During vpe migration, VMOVP needs to be executed.
> If the VPE is migrated for the first time, especially before it is
> scheduled for the first time, something unusual may happen across
> kexec.
What may happen?
> We might consider adding a VINVALL cmd after VMOVP to
> increase robustness.
What are you trying to guarantee by adding this? From a performance
perspective, this is terrible as you're forcing the ITS to drop its
caches and reload everything, making the interrupt latency far worse
than what it should be on each and every vcpu migration.
We already issue a VINVALL when a VPE is mapped. Why would you need
anything else?
>
> @@ -1327,6 +1327,7 @@ static void its_send_vmovp(struct its_vpe *vpe)
>
> desc.its_vmovp_cmd.col = &its->collections[col_id];
> its_send_single_vcommand(its, its_build_vmovp_cmd, &desc);
> + its_send_vinvall(its, vpe);
> }
>
> Do you think it's all right?
I think this is pretty bad. If your HW requires this, then we can add
it as a workaround for your particular platform, but in general, this
is not needed.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.