Message-ID: <20231031114316.0bfa8d91@meshulam.tesarici.cz>
Date: Tue, 31 Oct 2023 11:43:16 +0100
From: Petr Tesařík <petr@...arici.cz>
To: Rick Edgecombe <rick.p.edgecombe@...el.com>
Cc: x86@...nel.org, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, luto@...nel.org,
peterz@...radead.org, kirill.shutemov@...ux.intel.com,
elena.reshetova@...el.com, isaku.yamahata@...el.com,
seanjc@...gle.com, Michael Kelley <mikelley@...rosoft.com>,
thomas.lendacky@....com, decui@...rosoft.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>, iommu@...ts.linux.dev
Subject: Re: [PATCH 04/10] swiotlb: Use free_decrypted_pages()
On Tue, 17 Oct 2023 13:24:59 -0700
Rick Edgecombe <rick.p.edgecombe@...el.com> wrote:
> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to take
> care to handle these errors to avoid returning decrypted (shared) memory to
> the page allocator, which could lead to functional or security issues.
>
> Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
>
> In swiotlb_exit(), check for set_memory_encrypted() errors manually,
> because the pages are not necessarily going to the page allocator.
>
> Cc: Christoph Hellwig <hch@....de>
> Cc: Marek Szyprowski <m.szyprowski@...sung.com>
> Cc: Robin Murphy <robin.murphy@....com>
> Cc: iommu@...ts.linux.dev
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
> ---
> kernel/dma/swiotlb.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 394494a6b1f3..ad06786c4f98 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
> unsigned long tbl_vaddr;
> size_t tbl_size, slots_size;
> unsigned int area_order;
> + int ret;
>
> if (swiotlb_force_bounce)
> return;
> @@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
> tbl_size = PAGE_ALIGN(mem->end - mem->start);
> slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
>
> - set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
> + ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
> if (mem->late_alloc) {
> area_order = get_order(array_size(sizeof(*mem->areas),
> mem->nareas));
> free_pages((unsigned long)mem->areas, area_order);
> - free_pages(tbl_vaddr, get_order(tbl_size));
> + if (!ret)
> + free_pages(tbl_vaddr, get_order(tbl_size));
> free_pages((unsigned long)mem->slots, get_order(slots_size));
> } else {
> memblock_free_late(__pa(mem->areas),
> array_size(sizeof(*mem->areas), mem->nareas));
> - memblock_free_late(mem->start, tbl_size);
> + if (!ret)
> + memblock_free_late(mem->start, tbl_size);
> memblock_free_late(__pa(mem->slots), slots_size);
> }
>
> @@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
> return page;
>
> error:
> - __free_pages(page, order);
> + free_decrypted_pages((unsigned long)vaddr, order);
> return NULL;
> }
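
For context, the last hunk is in alloc_dma_pages(); if I read current
mainline correctly, the surrounding function is roughly this
(simplified from memory, so details may differ):

static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
{
	unsigned int order = get_order(bytes);
	struct page *page;
	void *vaddr;

	page = alloc_pages(gfp, order);
	if (!page)
		return NULL;

	vaddr = phys_to_virt(page_to_phys(page));
	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
		goto error;
	return page;

error:
	/* this is the __free_pages() that the hunk above replaces */
	__free_pages(page, order);
	return NULL;
}
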
I admit I'm not familiar with the encryption/decryption API, but if a
plain __free_pages() is not sufficient here, then it is quite confusing.
The error label is reached only if set_memory_decrypted() returns
non-zero. My naive expectation is that the memory is *not* decrypted in
that case and does not require special treatment. Is this assumption
wrong?
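
If I understand the helper added earlier in this series correctly (I
may well be wrong), it does something along these lines, i.e. it tries
to re-encrypt the memory and leaks it when that fails:

/* my rough reading of the new helper; the real one may differ */
void free_decrypted_pages(unsigned long addr, int order)
{
	if (set_memory_encrypted(addr, 1 << order)) {
		/* cannot give shared pages back to the allocator; leak them */
		pr_warn("Failed to re-encrypt memory, leaking pages\n");
		return;
	}
	free_pages(addr, order);
}

If that reading is right, then on this error path it would attempt to
re-encrypt pages that, to my understanding, were never successfully
decrypted in the first place, hence the question above.
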
OTOH I believe there is a bug in the logic: the subsequent
__free_pages() in swiotlb_alloc_tlb() would also have to be changed to
free_decrypted_pages() (see the sketch below). However, I'm proposing
a different approach to address the latter issue here:
https://lore.kernel.org/linux-iommu/20231026095123.222-1-petrtesarik@huaweicloud.com/T/
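
For reference, the call I mean is in the retry loop of
swiotlb_alloc_tlb(), which from memory looks roughly like this
(trimmed); by the time the loop body runs, alloc_dma_pages() has
already decrypted the pages, so freeing them with a plain
__free_pages() has the same problem this patch is trying to fix:

	while ((page = alloc_dma_pages(gfp, bytes)) &&
	       page_to_phys(page) + bytes - 1 > phys_limit) {
		/* allocated, but too high; pages are still decrypted here */
		__free_pages(page, get_order(bytes));

		/* ... retry with a more constrained GFP zone, or give up ... */
	}
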
Petr T