Message-ID: <1d9d3372-825a-417a-8811-ffa501c83936@linux.microsoft.com>
Date: Thu, 1 Feb 2024 13:02:38 +0100
From: Jeremi Piotrowski <jpiotrowski@...ux.microsoft.com>
To: Vishal Annapurve <vannapurve@...gle.com>,
Dave Hansen <dave.hansen@...el.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, pbonzini@...hat.com,
rientjes@...gle.com, seanjc@...gle.com, erdemaktas@...gle.com,
ackerleytng@...gle.com, jxgao@...gle.com, sagis@...gle.com,
oupton@...gle.com, peterx@...hat.com, vkuznets@...hat.com,
dmatlack@...gle.com, pgonda@...gle.com, michael.roth@....com,
kirill@...temov.name, thomas.lendacky@....com, dave.hansen@...ux.intel.com,
linux-coco@...ts.linux.dev, chao.p.peng@...ux.intel.com,
isaku.yamahata@...il.com, andrew.jones@...ux.dev, corbet@....net,
hch@....de, m.szyprowski@...sung.com, rostedt@...dmis.org,
iommu@...ts.linux.dev
Subject: Re: [RFC V1 5/5] x86: CVMs: Ensure that memory conversions happen at
2M alignment

On 01/02/2024 04:46, Vishal Annapurve wrote:
> On Wed, Jan 31, 2024 at 10:03 PM Dave Hansen <dave.hansen@...el.com> wrote:
>>
>> On 1/11/24 21:52, Vishal Annapurve wrote:
>>> @@ -2133,8 +2133,10 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
>>> int ret;
>>>
>>> /* Should not be working on unaligned addresses */
>>> - if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
>>> - addr &= PAGE_MASK;
>>> + if (WARN_ONCE(addr & ~HPAGE_MASK, "misaligned address: %#lx\n", addr)
>>> + || WARN_ONCE((numpages << PAGE_SHIFT) & ~HPAGE_MASK,
>>> + "misaligned numpages: %#lx\n", numpages))
>>> + return -EINVAL;
>>
>> This series is talking about swiotlb and DMA, then this applies a
>> restriction to what I *thought* was a much more generic function:
>> __set_memory_enc_pgtable(). What prevents this function from getting
>> used on 4k mappings?
>>
>>
>
> The end goal here is to limit the conversion granularity to hugepage
> sizes. SWIOTLB allocations are the major source of unaligned
> allocations (and so the conversions) that need to be fixed before
> achieving this goal.
>
> This change will ensure that conversion fails for unaligned ranges, as
> I don't foresee the need for 4K aligned conversions apart from DMA
> allocations.

Hi Vishal,

This assumption is wrong. set_memory_decrypted() is called from various
parts of the kernel: kexec, the sev-guest driver, kvmclock, and Hyper-V
code. These conversions are for non-DMA allocations that need to be done
at 4KB granularity because the data structures in question are page
sized.
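
For illustration, a minimal sketch of the pattern those callers follow
(the struct and function names here are hypothetical, not taken from any
specific driver): a single page-sized object is allocated and then shared
with the host at 4KB granularity.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/set_memory.h>
#include <linux/types.h>

/* Hypothetical page-sized structure that the guest shares with the host. */
struct my_shared_info {
        u64 data[512];          /* 512 * 8 = 4096 bytes, exactly one page */
};

static int my_setup_shared_page(struct my_shared_info **out)
{
        struct my_shared_info *info;

        info = (void *)get_zeroed_page(GFP_KERNEL);
        if (!info)
                return -ENOMEM;

        /* Convert a single 4KB page: neither 2M-aligned nor 2M-sized. */
        if (set_memory_decrypted((unsigned long)info, 1)) {
                free_page((unsigned long)info);
                return -EIO;
        }

        *out = info;
        return 0;
}

With the check proposed in this patch, the set_memory_decrypted() call
above would warn and return -EINVAL, even though the caller has no
2M-sized buffer to offer.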
Thanks,
Jeremi