Message-ID: <f3befca2-fc65-81b8-c9ed-311de7967958@oracle.com>
Date:   Wed, 5 Sep 2018 07:14:26 +0800
From:   Dongli Zhang <dongli.zhang@...cle.com>
To:     "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>
Cc:     Joe Jin <joe.jin@...cle.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        John Sobecki <john.sobecki@...cle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        stable@...r.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-swiotlb: use actually allocated size on
 check physical contiguous

The module below helps reproduce the issue and understand the symptom:

https://github.com/finallyjustice/patchset/blob/master/xen-swiotlb-panic.c

In addition, on the xen hypervisor side, memory_exchange() does not check
whether the pfns of the input mfns belonging to the same extent are contiguous
in the guest domain. As a result, the wrong page is stolen from the guest domain.

Can we assume it is fine for the xen hypervisor not to check whether the pfns
of the mfns are contiguous?

I think it is OK, as the guest domain is responsible for maintaining its own
memory pages. xen is happy to serve any operation from a guest domain, unless
that operation is harmful to the xen hypervisor or to other domains. In the
worst case, the dom0 (or a domU with passthrough) would only panic itself.

 496 static long memory_exchange(...
... ...
 609         /* Steal a chunk's worth of input pages from the domain. */
 610         for ( j = 0; j < (1UL << in_chunk_order); j++ )
 611         {
 612             if ( unlikely(__copy_from_guest_offset(
 613                 &gmfn, exch.in.extent_start, (i<<in_chunk_order)+j, 1)) )
 614             {
 615                 rc = -EFAULT;
 616                 goto fail;
 617             }
 618

 -------> the 'for' loop below does not check that the gfns (gmfn + k) of each
extent are contiguous in the guest domain.

 619             for ( k = 0; k < (1UL << exch.in.extent_order); k++ )
 620             {
 621 #ifdef CONFIG_X86
 622                 p2m_type_t p2mt;
 623
 624                 /* Shared pages cannot be exchanged */
 625                 mfn = get_gfn_unshare(d, gmfn + k, &p2mt);
 626                 if ( p2m_is_shared(p2mt) )
 627                 {
 628                     put_gfn(d, gmfn + k);
 629                     rc = -ENOMEM;
 630                     goto fail;
 631                 }
 632 #else /* !CONFIG_X86 */
 633                 mfn = gfn_to_mfn(d, _gfn(gmfn + k));
 634 #endif
 635                 if ( unlikely(!mfn_valid(mfn)) )
 636                 {
 637                     put_gfn(d, gmfn + k);
 638                     rc = -EINVAL;
 639                     goto fail;
 640                 }
 641
 642                 page = mfn_to_page(mfn);
 643

-----> As a result, the wrong page is stolen.

 644                 rc = steal_page(d, page, MEMF_no_refcount);
 645                 if ( unlikely(rc) )
 646                 {
 647                     put_gfn(d, gmfn + k);
 648                     goto fail;
 649                 }
 650
 651                 page_list_add(page, &in_chunk_list);
 652                 put_gfn(d, gmfn + k);
 653             }
 654         }


Dongli Zhang



On 09/05/2018 02:16 AM, Joe Jin wrote:
> xen_swiotlb_{alloc,free}_coherent() actually allocate/free size by order,
> but used the caller-requested size to check whether the address range is
> physically contiguous. If the first pages are physically contiguous and
> pass the range_straddles_page_boundary() check while the remaining pages
> are not, the kernel will panic.
> 
> Signed-off-by: Joe Jin <joe.jin@...cle.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
> ---
>  drivers/xen/swiotlb-xen.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index a6f9ba85dc4b..aa081f806728 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>  	*/
>  	flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
>  
> +	/* Convert the size to actually allocated. */
> +	size = 1UL << (order + XEN_PAGE_SHIFT);
> +
>  	/* On ARM this function returns an ioremap'ped virtual address for
>  	 * which virt_to_phys doesn't return the corresponding physical
>  	 * address. In fact on ARM virt_to_phys only works for kernel direct
> @@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>  	 * physical address */
>  	phys = xen_bus_to_phys(dev_addr);
>  
> +	/* Convert the size to actually allocated. */
> +	size = 1UL << (order + XEN_PAGE_SHIFT);
> +
>  	if (((dev_addr + size - 1 <= dma_mask)) ||
>  	    range_straddles_page_boundary(phys, size))
>  		xen_destroy_contiguous_region(phys, order);
> 
