Message-ID: <6a472a24-a40f-1160-70dd-5cb9e9ae85f1@amd.com>
Date:   Thu, 1 Jul 2021 08:52:28 +0200
From:   Christian König <christian.koenig@....com>
To:     John Stultz <john.stultz@...aro.org>
Cc:     lkml <linux-kernel@...r.kernel.org>,
        Daniel Vetter <daniel@...ll.ch>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Liam Mark <lmark@...eaurora.org>,
        Chris Goldsworthy <cgoldswo@...eaurora.org>,
        Laura Abbott <labbott@...nel.org>,
        Brian Starkey <Brian.Starkey@....com>,
        Hridya Valsaraju <hridya@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Sandeep Patil <sspatil@...gle.com>,
        Daniel Mentz <danielmentz@...gle.com>,
        Ørjan Eide <orjan.eide@....com>,
        Robin Murphy <robin.murphy@....com>,
        Ezequiel Garcia <ezequiel@...labora.com>,
        Simon Ser <contact@...rsion.fr>,
        James Jones <jajones@...dia.com>,
        linux-media <linux-media@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation

On 01.07.21 at 00:24, John Stultz wrote:
> On Wed, Jun 30, 2021 at 2:10 AM Christian König
> <christian.koenig@....com> wrote:
>> On 30.06.21 at 03:34, John Stultz wrote:
>>> +static unsigned long page_pool_size; /* max size of the pool */
>>> +
>>> +MODULE_PARM_DESC(page_pool_size, "Number of pages in the drm page pool");
>>> +module_param(page_pool_size, ulong, 0644);
>>> +
>>> +static atomic_long_t nr_managed_pages;
>>> +
>>> +static struct mutex shrinker_lock;
>>> +static struct list_head shrinker_list;
>>> +static struct shrinker mm_shrinker;
>>> +
>>> +/**
>>> + * drm_page_pool_set_max - Sets maximum size of all pools
>>> + *
>>> + * Sets the maximum number of pages allowed in all pools.
>>> + * This can only be set once, and the first caller wins.
>>> + */
>>> +void drm_page_pool_set_max(unsigned long max)
>>> +{
>>> +     if (!page_pool_size)
>>> +             page_pool_size = max;
>>> +}
>>> +
>>> +/**
>>> + * drm_page_pool_get_max - Maximum size of all pools
>>> + *
>>> + * Return the maximum number of pages allowed in all pools.
>>> + */
>>> +unsigned long drm_page_pool_get_max(void)
>>> +{
>>> +     return page_pool_size;
>>> +}
>> Well, in general I don't think it is a good idea to have getters/setters
>> for one-line functionality; the same applies to locking/unlocking the
>> mutex below.
>>
>> Then in this specific case what those functions do is to aid
>> initializing the general pool manager and that in turn should absolutely
>> not be exposed.
>>
>> The TTM pool manager exposes this as a function because initializing the
>> pool manager is done in one part of the module and calculating the
>> default value for the pages in another one. But that is not something I
>> would like to see here.
> So, I guess I'm not quite clear on what you'd like to see...
>
> Part of what I'm balancing here is the TTM subsystem normally sets a
> global max size, whereas the old ION pool didn't have caps (instead
> just relying on the shrinker when needed).
> So I'm trying to come up with a solution that can serve both uses. So
> I've got this drm_page_pool_set_max() function to optionally set the
> maximum value, which is called in the TTM initialization path or set
> via the boot argument. But for systems that use the dmabuf system heap
> but don't use TTM, no global limit is enforced.

Yeah, exactly that's what I'm trying to prevent.

See, if we have the same functionality serving different use cases, we
should not have different behavior depending on which drivers are loaded.

Is it a problem if we restrict the ION pool to 50% of system memory as
well? If yes, then I would rather drop the limit from TTM and only rely
on the shrinker there as well.
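
Roughly what I have in mind is computing one uniform default when the
pool manager itself is brought up, so the cap does not depend on which
driver loads first. Just a sketch against the symbols from this patch;
the drm_page_pool_shrinker_init() name and the 50% figure are only
illustrative, not what the patch currently does:

#include <linux/mm.h>
#include <linux/shrinker.h>

static int drm_page_pool_shrinker_init(void)
{
	struct sysinfo si;

	si_meminfo(&si);

	/* Only a default; an explicit page_pool_size= module/boot
	 * parameter set earlier still wins. */
	if (!page_pool_size)
		page_pool_size = si.totalram / 2;

	mutex_init(&shrinker_lock);
	INIT_LIST_HEAD(&shrinker_list);

	/* count_objects/scan_objects of mm_shrinker are assumed to be
	 * filled in elsewhere in the patch. */
	return register_shrinker(&mm_shrinker);
}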

> Your earlier suggestion to have it as an argument to the
> drm_page_pool_shrinker_init() didn't make much sense to me, as then we
> may have multiple subsystems trying to initialize the pool shrinker,
> which doesn't seem ideal. So I'm not sure what would be preferred.
>
> I guess we could have subsystems allocate their own pool managers with
> their own shrinkers and per-manager size limits? But that also feels like
> a fair increase in complexity for what I'm not sure is much benefit.
>
>>> +void drm_page_pool_add(struct drm_page_pool *pool, struct page *p)
>>> +{
>>> +     unsigned int i, num_pages = 1 << pool->order;
>>> +
>>> +     /* Make sure we won't grow larger than the max pool size */
>>> +     if (page_pool_size &&
>>> +            ((drm_page_pool_get_total()) + num_pages > page_pool_size)) {
>>> +             pool->free(pool, p);
>>> +             return;
>>> +     }
>> That is not a good idea. See how ttm_pool_free() does this.
>>
>> First the page is given back to the pool, then all pools are shrunk if
>> they are above the global limit.
> Ok, initially it seemed like wasteful overhead to add the page to the
> pool and then shrink the pool, so just freeing the given page directly
> if the pool was full seemed more straightforward.
> But on reflection I do realize having most-recently-used hot pages in
> the pool would be good for performance, so I'll switch that back.
> Thanks for pointing this out!

An even bigger problem is that you then always drop pages from the
active pools.

E.g. a pool which just allocated and then freed 2MiB during driver load 
for some firmware upload will never see pressure if you do it this way.

So those 2MiB would never be recycled, even though they could be put to
good use in one of the active pools.
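
For reference, the free path I mean looks roughly like this, expressed
against the helpers from this patch. This is only a sketch:
drm_page_pool_do_add() stands in for whatever internal enqueue helper
the patch uses, and drm_page_pool_shrink() for a helper that frees one
page from the least recently used pool on shrinker_list:

/* Placeholders assumed to exist elsewhere in the patch: */
static void drm_page_pool_do_add(struct drm_page_pool *pool, struct page *p);
static void drm_page_pool_shrink(void);

void drm_page_pool_add(struct drm_page_pool *pool, struct page *p)
{
	/* Always keep the just-freed (hot) page in its own pool first. */
	drm_page_pool_do_add(pool, p);

	/* Then enforce the global cap, letting the pressure fall on the
	 * coldest pool instead, the same way ttm_pool_free() does it. */
	while (page_pool_size &&
	       drm_page_pool_get_total() > page_pool_size)
		drm_page_pool_shrink();
}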

Regards,
Christian.

>
> Thanks again so much for the review and feedback!
> -john
