Message-ID: <a7c9265b-1218-264b-b67d-5d80e44fb7d4@oracle.com>
Date: Tue, 30 Oct 2018 07:48:07 -0700
From: Joe Jin <joe.jin@...cle.com>
To: Paul Durrant <Paul.Durrant@...rix.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc: John Sobecki <john.sobecki@...cle.com>,
"DONGLI.ZHANG" <dongli.zhang@...cle.com>,
"linux-kernel@...r.kernel.org\"" <linux-kernel@...r.kernel.org>,
"konrad@...nel.org" <konrad@...nel.org>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
Christoph Hellwig <hch@....de>
Subject: Re: [Xen-devel] [PATCH] xen-swiotlb: exchange memory with Xen only
when pages are contiguous
On 10/30/18 7:21 AM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@...ts.xenproject.org] On Behalf
>> Of Joe Jin
>> Sent: 30 October 2018 14:13
>> To: Paul Durrant <Paul.Durrant@...rix.com>; Boris Ostrovsky
>> <boris.ostrovsky@...cle.com>; Konrad Rzeszutek Wilk
>> <konrad.wilk@...cle.com>
>> Cc: John Sobecki <john.sobecki@...cle.com>; DONGLI.ZHANG
>> <dongli.zhang@...cle.com>; "linux-kernel@...r.kernel.org" <linux-
>> kernel@...r.kernel.org>; konrad@...nel.org; xen-
>> devel@...ts.xenproject.org; Christoph Hellwig <hch@....de>
>> Subject: Re: [Xen-devel] [PATCH] xen-swiotlb: exchange memory with Xen
>> only when pages are contiguous
>>
>> On 10/30/18 1:59 AM, Paul Durrant wrote:
>>>> On 10/25/18 11:56 AM, Joe Jin wrote:
>>>>> I just discussed this patch with Boris in private; his opinions (Boris,
>>>>> please correct me if I misunderstood anything) are:
>>>>>
>>>>> 1. With or without the check, both are incorrect; he thinks we need to
>>>>> prevent an unallocated free here.
>>>>> 2. On free, if the upper layer has already checked that the memory was
>>>>> DMA-able, the check here makes no sense and we can remove all checks.
>>>>> 3. xen_create_contiguous_region() and xen_destroy_contiguous_region()
>>>>> should come in pairs.
>>>> I tried adding a radix_tree to track allocations and frees, and I
>>>> found some memory that was allocated but never freed. I guess this is
>>>> caused by drivers using dma_pool, which means that with lots of such
>>>> requests the list will consume a lot of memory. I will continue to
>>>> work on it; if anyone has a good idea, please let me know and I'd
>>>> like to try it.
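A minimal sketch of that kind of tracking, keyed on the buffer's
physical frame; the tracker and the helper names are hypothetical:

#include <linux/radix-tree.h>
#include <linux/slab.h>

struct dma_track {
	size_t size;
};

static RADIX_TREE(dma_tracker, GFP_ATOMIC);

/* Remember that this buffer was exchanged with Xen on allocation. */
static int track_alloc(phys_addr_t phys, size_t size)
{
	struct dma_track *t = kmalloc(sizeof(*t), GFP_ATOMIC);

	if (!t)
		return -ENOMEM;
	t->size = size;
	return radix_tree_insert(&dma_tracker, phys >> PAGE_SHIFT, t);
}

/* Returns true only if the buffer was really tracked, so an
 * unbalanced free can be detected on the free path. */
static bool track_free(phys_addr_t phys)
{
	struct dma_track *t = radix_tree_delete(&dma_tracker,
						phys >> PAGE_SHIFT);

	kfree(t);
	return t != NULL;
}
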
>>>>
>>> FWIW, in my Xen PV-IOMMU test patches, I have also tried keeping a list
>>> of ranges mapped for DMA and have discovered apparent issues with some
>>> drivers, particularly tg3, that seem to free mappings that have not been
>>> allocated (or possibly double-free). I've never fully tracked down the
>>> issue.
>>
>> Call trace of the first call to xen_swiotlb_alloc_coherent() (these
>> pages were never handed back to Xen):
>>
>> [ 23.436333] [<ffffffff814040c9>] xen_swiotlb_alloc_coherent+0x169/0x510
>> [ 23.436623] [<ffffffff811eb38d>] ? kmem_cache_alloc_trace+0x1ed/0x280
>> [ 23.436900] [<ffffffff811d72af>] dma_pool_alloc+0x11f/0x260
>> [ 23.437190] [<ffffffff81537442>] ehci_qh_alloc+0x52/0x120
>> [ 23.437481] [<ffffffff8153b80f>] ehci_setup+0x2bf/0x8e0
>> [ 23.437760] [<ffffffff81476d06>] ? __dev_printk+0x46/0xa0
>> [ 23.438042] [<ffffffff814770b3>] ? _dev_info+0x53/0x60
>> [ 23.438327] [<ffffffff8153f620>] ehci_pci_setup+0xc0/0x5f0
>> [ 23.438615] [<ffffffff81519fcd>] usb_add_hcd+0x25d/0xaf0
>> [ 23.438901] [<ffffffff8152c9a6>] usb_hcd_pci_probe+0x406/0x520
>> [ 23.439177] [<ffffffff8153f486>] ehci_pci_probe+0x36/0x40
>> [ 23.439469] [<ffffffff8136e99a>] local_pci_probe+0x4a/0xb0
>> [ 23.439752] [<ffffffff8136fba5>] ? pci_match_device+0xe5/0x110
>> [ 23.440027] [<ffffffff8136fce1>] pci_device_probe+0xd1/0x120
>> [ 23.440320] [<ffffffff8147b13c>] driver_probe_device+0x20c/0x4d0
>> [ 23.440599] [<ffffffff8147b4eb>] __driver_attach+0x9b/0xa0
>> [ 23.440879] [<ffffffff8147b450>] ? __device_attach+0x50/0x50
>>
>> The above shows EHCI using a DMA pool to allocate DMA memory.
>>
>> During my testing, ~1000 entries were never freed; if more PCI devices
>> use a DMA pool, the tree/list will hold even more entries, so it does
>> not look like a good idea to use a list to track them.
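For reference, this is the dma_pool lifecycle that produces the pattern
above (the device pointer and the sizes are illustrative): the pool only
hands its backing page to dma_free_coherent() when the pool itself is
destroyed, so tracker entries for pooled pages look leaked for the
pool's whole lifetime:

#include <linux/dmapool.h>

/* Minimal dma_pool round trip, sketched after the EHCI QH usage. */
static int pool_demo(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t handle;
	void *qh;

	pool = dma_pool_create("qh", dev, 96, 32, 0);
	if (!pool)
		return -ENOMEM;

	qh = dma_pool_alloc(pool, GFP_KERNEL, &handle);	/* may grab a
							 * fresh coherent
							 * page */
	if (qh)
		dma_pool_free(pool, qh, handle);	/* page stays cached
							 * in the pool */
	dma_pool_destroy(pool);				/* only now is the
							 * page freed back */
	return 0;
}
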
>>
>
> Yes, it seems pools can hang onto a serious number of allocations, so a list is probably not wise.
I agree with you.
> What I was pointing out, though, is that it appears you can't even track mappings (as opposed to allocations) with a list.
Right.
> either, because drivers apparently try to unmap things they have not mapped.
If this happens, it should be fixed in the driver :)
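That said, in the spirit of the kernel's dma-debug checks, the
hypothetical tracker sketched above could flag such a driver on its
free path:

	/* Warn when a driver frees something we never saw allocated. */
	if (!track_free(phys))
		WARN(1, "untracked DMA free at %pa\n", &phys);
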
Thanks,
Joe
>
> Paul
>