Message-ID: <eccb5b60-b831-c4bd-6c61-4867296e1232@arm.com>
Date:   Tue, 4 Dec 2018 17:19:37 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     John Garry <john.garry@...wei.com>, hch@....de
Cc:     m.szyprowski@...sung.com, iommu@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, cai@....us, salil.mehta@...wei.com
Subject: Re: [PATCH 3/4] dma-debug: Dynamically expand the dma_debug_entry
 pool

On 04/12/2018 16:30, John Garry wrote:
> On 04/12/2018 13:11, Robin Murphy wrote:
>> Hi John,
>>
>> On 03/12/2018 18:23, John Garry wrote:
>>> On 03/12/2018 17:28, Robin Murphy wrote:
>>>> Certain drivers such as large multi-queue network adapters can use
>>>> pools of mapped DMA buffers larger than the default dma_debug_entry
>>>> pool of 65536 entries, with the result that merely probing such a
>>>> device can cause DMA debug to disable itself during boot unless
>>>> explicitly given an appropriate "dma_debug_entries=..." option.
>>>>
>>>> Developers trying to debug some other driver on such a system may not
>>>> be immediately aware of this, and at worst it can hide bugs if they
>>>> fail to realise that dma-debug has already disabled itself
>>>> unexpectedly by the time the code of interest gets to run. Even once
>>>> they do realise, it can be a bit of a pain to empirically determine a
>>>> suitable number of preallocated entries to configure without
>>>> massively over-allocating.
>>>>
>>>> There's really no need for such a static limit, though, since we can
>>>> quite easily expand the pool at runtime in those rare cases that the
>>>> preallocated entries are insufficient, which is arguably the least
>>>> surprising and most useful behaviour.
>>>
>>> Hi Robin,
>>>
>>> Do you have an idea on shrinking the pool again when the culprit
>>> driver is removed, i.e. we have so many unused debug entries now
>>> available?
>>
>> I honestly don't believe it's worth the complication. This is a
>> development feature with significant overheads already, so there's not
>> an awful lot to gain by trying to optimise memory usage. If a system can
>> ever load a driver that makes hundreds of thousands of simultaneous
>> mappings, it can almost certainly spare 20-odd megabytes of RAM for the
>> corresponding debug entries in perpetuity. Sure, it does mean you'd need
>> to reboot to recover memory from a major leak, but that's mostly true of
>> the current behaviour too, and rebooting during driver development is
>> hardly an unacceptable inconvenience.
>>
> 
> ok, I just thought that it would not be too difficult to implement this 
> on the dma entry free path.

True, in the current code it wouldn't be all that hard, but it feels 
more worthwhile to optimise for allocation rather than freeing, and as 
soon as we start allocating memory for multiple entries at once, trying 
to free anything becomes extremely challenging.
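
To illustrate, a rough and untested sketch of the kind of on-demand,
page-at-a-time expansion I have in mind (function name made up here):

static int dma_debug_add_entries(gfp_t gfp)
{
	struct dma_debug_entry *entry;
	int i, nr = PAGE_SIZE / sizeof(*entry);

	/* One page of entries instead of many little slab allocations */
	entry = (void *)get_zeroed_page(gfp);
	if (!entry)
		return -ENOMEM;

	/* Caller holds free_entries_lock; chain the whole batch on */
	for (i = 0; i < nr; i++)
		list_add_tail(&entry[i].list, &free_entries);

	num_free_entries += nr;
	nr_total_entries += nr;

	return 0;
}

Once entries from one page end up scattered across free_entries and the
active hash buckets, there's no cheap way to tell when that page has
become entirely free again, hence my reluctance to promise giving
memory back.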

>> In fact, having got this far in, what I'd quite like to do is to get rid
>> of dma_debug_resize_entries() such that we never need to free things at
>> all, since then we could allocate whole pages as blocks of entries to
>> save on masses of individual slab allocations.
>>
> 
> On a related topic, is it possible for the user to learn the total 
> entries created at a given point in time? If not, could we add a file in 
> the debugfs folder for this?

I did get as far as pondering that you effectively lose track of 
utilisation once the low-water mark of min_free_entries hits 0 and stays 
there. AFAICS it should be sufficient to just expose nr_total_entries 
as-is, since users can then calculate current and maximum occupancy 
from the *_free_entries values. Does that sound reasonable to you?
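
(Concretely: current occupancy is nr_total_entries - num_free_entries, 
and peak occupancy is nr_total_entries - min_free_entries.) Exposing it 
should then just be one more line alongside the existing debugfs u32 
files, something like this untested sketch:

	debugfs_create_u32("nr_total_entries", 0444, dma_debug_dent,
			   &nr_total_entries);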

That also indirectly reminds me that this lot is documented in 
DMA-API.txt, so I should be good and update that too...

Cheers,
Robin.
