Message-ID: <bb0077bc-9dd2-46a6-a130-a2cea5e5628e@intel.com>
Date: Fri, 27 Jun 2025 07:03:12 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: pbonzini@...hat.com, seanjc@...gle.com, dave.hansen@...ux.intel.com,
 rick.p.edgecombe@...el.com, isaku.yamahata@...el.com, kai.huang@...el.com,
 yan.y.zhao@...el.com, chao.gao@...el.com, tglx@...utronix.de,
 mingo@...hat.com, bp@...en8.de, kvm@...r.kernel.org, x86@...nel.org,
 linux-coco@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv2 03/12] x86/virt/tdx: Allocate reference counters for
 PAMT memory

On 6/27/25 04:27, Kirill A. Shutemov wrote:
> On Wed, Jun 25, 2025 at 12:26:09PM -0700, Dave Hansen wrote:
>>> +static atomic_t *tdx_get_pamt_refcount(unsigned long hpa)
>>> +{
>>> +	return &pamt_refcounts[hpa / PMD_SIZE];
>>> +}
>>
>> "get refcount" usually means "get a reference". This is looking up the
>> location of the refcount.
>>
>> I think this needs a better name.
> 
> tdx_get_pamt_ref_ptr()?

How about:

	tdx_find_pamt_refcount()
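
For illustration, the helper from the quoted hunk with that name would read as follows (a sketch only, reusing the pamt_refcounts array from the patch):

	static atomic_t *tdx_find_pamt_refcount(unsigned long hpa)
	{
		/* One refcount per PMD_SIZE (2M) chunk of host physical memory. */
		return &pamt_refcounts[hpa / PMD_SIZE];
	}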

>>> +	unsigned long vaddr;
>>> +	pte_t entry;
>>> +
>>> +	if (!pte_none(ptep_get(pte)))
>>> +		return 0;
>>
>> This ^ is an optimization, right? Could it be commented appropriately, please?
> 
> Not an optimization.
> 
> Calls to apply_to_page_range() can overlap by one page due to
> round_up()/round_down() in alloc_pamt_refcount(). We don't need to
> populate these pages again if they are already populated.
> 
> Will add a comment.

But don't you check it again under the lock?
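
For reference, a rough sketch of the overlap being described -- this is not the actual alloc_pamt_refcount() body (it is not quoted here), only an illustration of how two rounded ranges can share a page:

	/*
	 * Illustration only: two PAMT ranges whose refcounts land in the
	 * same page produce overlapping [start, end) spans after rounding,
	 * so pamt_refcount_populate() can see an already-populated PTE.
	 * 'base' and 'size' are hypothetical range parameters.
	 */
	unsigned long start = (unsigned long)tdx_find_pamt_refcount(base);
	unsigned long end = (unsigned long)tdx_find_pamt_refcount(base + size);

	start = round_down(start, PAGE_SIZE);
	end = round_up(end, PAGE_SIZE);

	apply_to_page_range(&init_mm, start, end - start,
			    pamt_refcount_populate, NULL);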

>>> +	vaddr = __get_free_page(GFP_KERNEL | __GFP_ZERO);
>>> +	if (!vaddr)
>>> +		return -ENOMEM;
>>> +
>>> +	entry = pfn_pte(PFN_DOWN(__pa(vaddr)), PAGE_KERNEL);
>>> +
>>> +	spin_lock(&init_mm.page_table_lock);
>>> +	if (pte_none(ptep_get(pte)))

Right there ^

>>> +		set_pte_at(&init_mm, addr, pte, entry);
>>> +	else
>>> +		free_page(vaddr);
>>> +	spin_unlock(&init_mm.page_table_lock);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int pamt_refcount_depopulate(pte_t *pte, unsigned long addr,
>>> +				    void *data)
>>> +{
>>> +	unsigned long vaddr;
>>> +
>>> +	vaddr = (unsigned long)__va(PFN_PHYS(pte_pfn(ptep_get(pte))));
>>
>> Gah, we really need a kpte_to_vaddr() helper here. This is really ugly.
>> How many of these are in the tree?
> 
> I only found such a chain in the KASAN code.
> 
> What about this?
> 
>       pte_t entry = ptep_get(pte);
>       struct page *page = pte_page(entry);
> 
> and use __free_page(page) instead of free_page(vaddr)?
> 
> A similar thing can be done on the allocation side.

That does look better.
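
For clarity, the depopulate side with that change would look roughly like this (a sketch only; the rest of the function body is not quoted above):

	static int pamt_refcount_depopulate(pte_t *pte, unsigned long addr,
					    void *data)
	{
		pte_t entry = ptep_get(pte);
		struct page *page = pte_page(entry);

		/* ... clear the PTE as in the original hunk ... */

		__free_page(page);
		return 0;
	}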

