Message-ID: <ce7320b6-68f3-43b1-8812-3a5bbd75c9c6@suse.com>
Date: Wed, 12 Feb 2025 12:11:22 +0100
From: Jürgen Groß <jgross@...e.com>
To: Jan Beulich <jbeulich@...e.com>
Cc: Stefano Stabellini <sstabellini@...nel.org>,
 Boris Ostrovsky <boris.ostrovsky@...cle.com>,
 Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 "H. Peter Anvin" <hpa@...or.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
 xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
 x86@...nel.org, iommu@...ts.linux.dev
Subject: Re: [PATCH 2/2] xen/swiotlb: don't destroy contiguous region in all
 cases

On 12.02.25 08:38, Jan Beulich wrote:
> On 11.02.2025 13:04, Juergen Gross wrote:
>> In case xen_swiotlb_alloc_coherent() needed to create a contiguous
>> region for a reason other than the memory not being compliant with
>> the device's DMA mask, there is no reason why this contiguous region
>> should be destroyed by xen_swiotlb_free_coherent() later. Destroying
>> this region should only be done if the memory of the region was
>> allocated with more stringent placement requirements than the memory
>> it replaced.
> 
> I'm not convinced of this: Even the mere property of being contiguous
> may already be enough to warrant freeing when possible. The hypervisor
> may not have that many contiguous areas available. The bigger the
> chunk, the more important to give it back once no longer needed in
> this shape.

Really? When creating a domain Xen tries to use GB pages and 2MB pages as
much as possible. Why would this special case here have more restrictions?

> Plus also take into account how Xen behaves here: It specifically tries
> to hold back, during boot, lower addressed memory to later satisfy such
> requests. Hence even if you don't ask for address restricted memory,
 you may get back such. You'd need to compare the input and output addresses,
 not the input addresses against the requested restriction, to account for this.

Fair enough.
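
So the decision will have to be based on what the exchange actually returned,
not on what was asked for. Roughly like this (just a sketch, the variable
names are invented):

	/*
	 * Sketch only, names invented: bits_in is what xen_zap_pfn_range()
	 * reported for the original frames, bits_out is the width of the
	 * replacement region the exchange handed back via dma_handle.
	 */
	unsigned int bits_in  = *address_bits_in;
	unsigned int bits_out = fls64(*dma_handle + (PAGE_SIZE << order) - 1);

	/*
	 * Only destroy the contiguous region on free if the exchange really
	 * returned lower (more restricted) memory than the frames it replaced.
	 */
	bool destroy_on_free = bits_out < bits_in;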

> 
>> --- a/arch/x86/xen/mmu_pv.c
>> +++ b/arch/x86/xen/mmu_pv.c
>> @@ -2208,19 +2208,22 @@ void __init xen_init_mmu_ops(void)
>>   static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];
>>   
>>   #define VOID_PTE (mfn_pte(0, __pgprot(0)))
>> -static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
>> -				unsigned long *in_frames,
>> -				unsigned long *out_frames)
>> +static int xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
>> +			     unsigned long *in_frames,
>> +			     unsigned long *out_frames)
>>   {
>>   	int i;
>> +	u64 address_bits = 0;
> 
> First I was inclined to suggest to use paddr_t here, but ...
> 
>>   	struct multicall_space mcs;
>>   
>>   	xen_mc_batch();
>>   	for (i = 0; i < (1UL<<order); i++, vaddr += PAGE_SIZE) {
>>   		mcs = __xen_mc_entry(0);
>>   
>> -		if (in_frames)
>> +		if (in_frames) {
>>   			in_frames[i] = virt_to_mfn((void *)vaddr);
>> +			address_bits |= in_frames[i] << PAGE_SHIFT;
> 
> ... why do a shift on every loop iteration when you can ...
> 
>> +		}
>>   
>>   		MULTI_update_va_mapping(mcs.mc, vaddr, VOID_PTE, 0);
>>   		__set_phys_to_machine(virt_to_pfn((void *)vaddr), INVALID_P2M_ENTRY);
>> @@ -2229,6 +2232,8 @@ static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
>>   			out_frames[i] = virt_to_pfn((void *)vaddr);
>>   	}
>>   	xen_mc_issue(0);
>> +
>> +	return fls64(address_bits);
> 
> ... simply add in PAGE_SHIFT here, once?

True.
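
The function would then end up looking something like this (sketch only,
not tested, based on the hunk above with the shift moved out of the loop):

static int xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
			     unsigned long *in_frames,
			     unsigned long *out_frames)
{
	int i;
	u64 address_bits = 0;		/* OR of the raw MFNs, unshifted */
	struct multicall_space mcs;

	xen_mc_batch();
	for (i = 0; i < (1UL << order); i++, vaddr += PAGE_SIZE) {
		mcs = __xen_mc_entry(0);

		if (in_frames) {
			in_frames[i] = virt_to_mfn((void *)vaddr);
			address_bits |= in_frames[i];	/* no per-iteration shift */
		}

		MULTI_update_va_mapping(mcs.mc, vaddr, VOID_PTE, 0);
		__set_phys_to_machine(virt_to_pfn((void *)vaddr),
				      INVALID_P2M_ENTRY);

		if (out_frames)
			out_frames[i] = virt_to_pfn((void *)vaddr);
	}
	xen_mc_issue(0);

	/* PAGE_SHIFT added once; callers not passing in_frames ignore this. */
	return fls64(address_bits) + PAGE_SHIFT;
}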

> 
>> @@ -2321,7 +2326,8 @@ static int xen_exchange_memory(unsigned long extents_in, unsigned int order_in,
>>   
>>   int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
>>   				 unsigned int address_bits,
>> -				 dma_addr_t *dma_handle)
>> +				 dma_addr_t *dma_handle,
>> +				 unsigned int *address_bits_in)
>>   {
>>   	unsigned long *in_frames = discontig_frames, out_frame;
>>   	unsigned long  flags;
>> @@ -2336,7 +2342,7 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
>>   	spin_lock_irqsave(&xen_reservation_lock, flags);
>>   
>>   	/* 1. Zap current PTEs, remembering MFNs. */
>> -	xen_zap_pfn_range(vstart, order, in_frames, NULL);
>> +	*address_bits_in = xen_zap_pfn_range(vstart, order, in_frames, NULL);
> 
> Nit: this converts plain int to unsigned int, when there's no real reason
> for any conversion. Since xen_zap_pfn_range() can't return a negative
> value for any caller that cares about the return value (even more obviously
> so with the adjustment suggested above, and then true for both callers),
> the function could easily return unsigned int.

Will change that.
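
I.e. the prototype becomes (again only a sketch):

static unsigned int xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
				      unsigned long *in_frames,
				      unsigned long *out_frames);

and the assignment in xen_create_contiguous_region() then stores unsigned int
into unsigned int without any conversion:

	/* 1. Zap current PTEs, remembering MFNs and how wide they are. */
	*address_bits_in = xen_zap_pfn_range(vstart, order, in_frames, NULL);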


Juergen
