Message-ID: <aHUD1cklhydR-gE5@pc636>
Date: Mon, 14 Jul 2025 15:19:17 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: David Laight <david.laight.linux@...il.com>
Cc: Dave Hansen <dave.hansen@...el.com>, jacob.pan@...ux.microsoft.com,
Jason Gunthorpe <jgg@...dia.com>,
Lu Baolu <baolu.lu@...ux.intel.com>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
Kevin Tian <kevin.tian@...el.com>, Jann Horn <jannh@...gle.com>,
Vasant Hegde <vasant.hegde@....com>,
Alistair Popple <apopple@...dia.com>,
Peter Zijlstra <peterz@...radead.org>,
Uladzislau Rezki <urezki@...il.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Andy Lutomirski <luto@...nel.org>, iommu@...ts.linux.dev,
security@...nel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH 1/1] iommu/sva: Invalidate KVA range on kernel TLB flush
On Mon, Jul 14, 2025 at 01:39:20PM +0100, David Laight wrote:
> On Wed, 9 Jul 2025 11:22:34 -0700
> Dave Hansen <dave.hansen@...el.com> wrote:
>
> > On 7/9/25 11:15, Jacob Pan wrote:
> > >>> Is there a use case where a SVA user can access kernel memory in the
> > >>> first place?
> > >> No. It should be fully blocked.
> > >>
> > > Then I don't understand what is the "vulnerability condition" being
> > > addressed here. We are talking about KVA range here.
> >
> > SVA users can't access kernel memory, but they can compel walks of
> > kernel page tables, which the IOMMU caches. The trouble starts if the
> > kernel happens to free that page table page and the IOMMU is using the
> > cache after the page is freed.
> >
> > That was covered in the changelog, but I guess it could be made a bit
> > more succinct.
> >
>
> Is it worth just never freeing the page tables used for vmalloc() memory?
> After all they are likely to be reallocated again.
>
>
Do we free them? Maybe on some architectures. According to tests I did
once upon a time on AMD x86-64, the PTE entries were not freed after
vfree(). Doing so could also be expensive, due to the global
"page_table_lock" lock.

I do see one place where a page-table page is freed, though, in
vmap_try_huge_pud():

	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
		return 0;

It is when a PUD entry is replaced by a huge-page mapping.
--
Uladzislau Rezki