Message-ID: <d40ab37e-ae46-dce8-21f9-b62c062cab84@amd.com>
Date: Tue, 22 Jan 2019 21:43:39 +0000
From: "Lendacky, Thomas" <Thomas.Lendacky@....com>
To: Thiago Jung Bauermann <bauerman@...ux.ibm.com>,
"x86@...nel.org" <x86@...nel.org>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, Ram Pai <linuxram@...ibm.com>
Subject: Re: [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes number of
pages
On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@...ibm.com>
>
> set_memory_decrypted() expects the number of PAGE_SIZE pages to decrypt.
> kvmclock_init_mem() instead passes number of bytes. This decrypts a huge
> number of pages resulting in data corruption.
Same comment as patch 1/2 in this series. This is not correct. See
comments below.
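(For reference, the interface being discussed is, roughly, the one declared in
arch/x86/include/asm/set_memory.h:

	int set_memory_decrypted(unsigned long addr, int numpages);

i.e. the second argument is a page count, which is exactly what the existing
1UL << order already supplies.)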
>
> Fixed it.
>
> [ bauermann: Slightly reworded commit message and added Fixes: tag. ]
> Fixes: 6a1cac56f41f ("x86/kvm: Use __bss_decrypted attribute in shared variables")
> Signed-off-by: Ram Pai <linuxram@...ibm.com>
> Signed-off-by: Thiago Jung Bauermann <bauerman@...ux.ibm.com>
> ---
> arch/x86/kernel/kvmclock.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> Note: Found by code inspection. I don't have a way to test.
>
> diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> index e811d4d1c824..b5c867dd2c8d 100644
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -251,8 +251,7 @@ static void __init kvmclock_init_mem(void)
> * be mapped decrypted.
> */
> if (sev_active()) {
> - r = set_memory_decrypted((unsigned long) hvclock_mem,
> - 1UL << order);
> + r = set_memory_decrypted((unsigned long) hvclock_mem, 1);
Again, not correct. The allocation was for 2^order pages, based on the order,
and that page count is exactly what the 1UL << order shift in the call
supplies. Hardcoding this to 1 is wrong.
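To spell it out, the allocation path looks roughly like this (a sketch based
on the surrounding code in kvmclock_init_mem(), not an exact copy):

	/* get_order() turns the byte size into an allocation order */
	order = get_order(ncpus * sizeof(*hvclock_mem));

	/* alloc_pages(..., order) returns 1UL << order contiguous pages */
	p = alloc_pages(GFP_KERNEL, order);
	hvclock_mem = page_address(p);

	/*
	 * set_memory_decrypted() takes a page count, so it must be told
	 * how many pages were allocated: 1UL << order, not 1.
	 */
	r = set_memory_decrypted((unsigned long)hvclock_mem, 1UL << order);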
Thanks,
Tom
> if (r) {
> __free_pages(p, order);
> hvclock_mem = NULL;
>