Message-ID: <CAJuCfpEkKfci_spdEAQKk1o_qoS53y-=i_zjqrDKeyW44AG+BQ@mail.gmail.com>
Date:   Fri, 5 Feb 2021 14:38:53 -0800
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     John Stultz <john.stultz@...aro.org>
Cc:     Christian König <christian.koenig@....com>,
        lkml <linux-kernel@...r.kernel.org>,
        Daniel Vetter <daniel@...ll.ch>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Liam Mark <lmark@...eaurora.org>,
        Chris Goldsworthy <cgoldswo@...eaurora.org>,
        Laura Abbott <labbott@...nel.org>,
        Brian Starkey <Brian.Starkey@....com>,
        Hridya Valsaraju <hridya@...gle.com>,
        Sandeep Patil <sspatil@...gle.com>,
        Daniel Mentz <danielmentz@...gle.com>,
        Ørjan Eide <orjan.eide@....com>,
        Robin Murphy <robin.murphy@....com>,
        Ezequiel Garcia <ezequiel@...labora.com>,
        Simon Ser <contact@...rsion.fr>,
        James Jones <jajones@...dia.com>,
        linux-media <linux-media@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation

On Fri, Feb 5, 2021 at 12:47 PM John Stultz <john.stultz@...aro.org> wrote:
>
> On Fri, Feb 5, 2021 at 12:47 AM Christian König
> <christian.koenig@....com> wrote:
> > On 05.02.21 at 09:06, John Stultz wrote:
> > > diff --git a/drivers/gpu/drm/page_pool.c b/drivers/gpu/drm/page_pool.c
> > > new file mode 100644
> > > index 000000000000..2139f86e6ca7
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/page_pool.c
> > > @@ -0,0 +1,220 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> >
> > Please use a BSD/MIT compatible license if you want to copy this from
> > the TTM code.
>
> Hrm. This may be problematic, as it's not just TTM code, but some of
> the TTM logic integrated into a page-pool implementation I wrote based
> on logic from the ION code (which was GPL-2.0 before it was dropped).
> So I don't think I can just relicense it as MIT. Any extra context on
> the need for MIT, or suggestions on how best to resolve this?
>
> > > +int drm_page_pool_get_size(struct drm_page_pool *pool)
> > > +{
> > > +     int ret;
> > > +
> > > +     spin_lock(&pool->lock);
> > > +     ret = pool->count;
> > > +     spin_unlock(&pool->lock);
> >
> > Maybe use an atomic for the count instead?
> >
>
> I can do that, but I'm curious what the benefit would be. We mostly use
> count where we already have to take the pool->lock anyway, and
> drm_page_pool_get_size() is only used for debugfs output so far, so I
> don't expect it to be a hot path.
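>
> Just for concreteness, the atomic variant would be something like this
> (untested sketch, assuming count becomes an atomic_t in struct
> drm_page_pool):
>
>     int drm_page_pool_get_size(struct drm_page_pool *pool)
>     {
>             return atomic_read(&pool->count);
>     }
>
> with drm_page_pool_add()/drm_page_pool_remove() doing
> atomic_inc()/atomic_dec() on count instead of bumping it under
> pool->lock.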
>
>
> > > +void drm_page_pool_add(struct drm_page_pool *pool, struct page *page)
> > > +{
> > > +     spin_lock(&pool->lock);
> > > +     list_add_tail(&page->lru, &pool->items);
> > > +     pool->count++;
> > > +     atomic_long_add(1 << pool->order, &total_pages);
> > > +     spin_unlock(&pool->lock);
> > > +
> > > +     mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
> > > +                         1 << pool->order);
> >
> > Huh, what? What is that supposed to be good for?
>
> This is a carryover from the ION page pool implementation:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/android/ion/ion_page_pool.c?h=v5.10#n28
>
> My sense is that it helps with the vmstat/meminfo accounting, so folks
> can see that the cached pages are shrinkable/freeable. This may fall
> under the other dmabuf accounting/stats discussions, so I'm happy to
> remove it for now, or to let the drivers using the shared page-pool
> logic handle the accounting themselves?

Yep, ION pools were accounted for as reclaimable kernel memory because
they could be dropped when the system is under memory pressure.
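
For context, the pairing looked roughly like this in the ION pool (a
paraphrased sketch, not the exact code):

    /* page enters the pool: account it as reclaimable */
    mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
                        1 << pool->order);

    /* page leaves the pool (reused or shrunk): undo the accounting */
    mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
                        -(1 << pool->order));

That way vmstat/meminfo reflect how much pooled memory could be given
back under memory pressure.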

>
>
> > > +static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
> > > +{
> > > +     struct page *page;
> > > +
> > > +     if (!pool->count)
> > > +             return NULL;
> >
> > Better use list_first_entry_or_null instead of checking the count.
> >
> > This way you can also pull the lock into the function.
>
> Yea, that cleans a number of things up nicely. Thank you!
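>
> Something along these lines, I think (untested sketch, with the callers
> then dropping their own lock/unlock around it):
>
>     static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
>     {
>             struct page *page;
>
>             spin_lock(&pool->lock);
>             page = list_first_entry_or_null(&pool->items, struct page, lru);
>             if (page) {
>                     list_del(&page->lru);
>                     pool->count--;
>                     atomic_long_sub(1 << pool->order, &total_pages);
>             }
>             spin_unlock(&pool->lock);
>
>             return page;
>     }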
>
>
> > > +struct drm_page_pool *drm_page_pool_create(unsigned int order,
> > > +                                        int (*free_page)(struct page *p, unsigned int order))
> > > +{
> > > +     struct drm_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
> >
> > Why not making this an embedded object? We should not see much dynamic
> > pool creation.
>
> Yea, it felt cleaner this way at the time, but I think I will need to
> switch to an embedded object in order to resolve the memory usage
> issue you pointed out with growing the ttm_pool_dma, so thank you for
> the suggestion!
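>
> For the embedded version I'm picturing something like the following
> (sketch only; drm_page_pool_init() is hypothetical and the field names
> are guesses):
>
>     void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
>                             int (*free_page)(struct page *p, unsigned int order))
>     {
>             pool->count = 0;
>             INIT_LIST_HEAD(&pool->items);
>             pool->order = order;
>             pool->free = free_page;
>             spin_lock_init(&pool->lock);
>
>             mutex_lock(&pool_list_lock);
>             list_add(&pool->list, &pool_list);
>             mutex_unlock(&pool_list_lock);
>     }
>
> so drivers embed a struct drm_page_pool and initialize it in place,
> rather than getting a kmalloc'ed pointer back from
> drm_page_pool_create().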
>
>
> > > +void drm_page_pool_destroy(struct drm_page_pool *pool)
> > > +{
> > > +     struct page *page;
> > > +
> > > +     /* Remove us from the pool list */
> > > +     mutex_lock(&pool_list_lock);
> > > +     list_del(&pool->list);
> > > +     mutex_unlock(&pool_list_lock);
> > > +
> > > +     /* Free any remaining pages in the pool */
> > > +     spin_lock(&pool->lock);
> >
> > Locking should be unnecessary when the pool is destroyed anyway.
>
> I guess if we've already pruned ourselves from the pool list, then
> you're right: we can't race with the shrinker, and the locking may not
> be necessary.
> But it also seems easier to consistently follow the locking rules in a
> very unlikely path rather than leaning on subtlety.  Either way, I
> think this becomes moot if I make the improvements you suggest to
> drm_page_pool_remove().
>
> > > +static int drm_page_pool_shrink_one(void)
> > > +{
> > > +     struct drm_page_pool *pool;
> > > +     struct page *page;
> > > +     int nr_freed = 0;
> > > +
> > > +     mutex_lock(&pool_list_lock);
> > > +     pool = list_first_entry(&pool_list, typeof(*pool), list);
> > > +
> > > +     spin_lock(&pool->lock);
> > > +     page = drm_page_pool_remove(pool);
> > > +     spin_unlock(&pool->lock);
> > > +
> > > +     if (page)
> > > +             nr_freed = drm_page_pool_free_pages(pool, page);
> > > +
> > > +     list_move_tail(&pool->list, &pool_list);
> >
> > Better to move this up, directly after the list_first_entry().
>
> Sounds good!
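>
> So the body of drm_page_pool_shrink_one() would become something like
> (untested sketch; the spin_lock pair goes away if drm_page_pool_remove()
> takes pool->lock itself, per your earlier suggestion):
>
>     mutex_lock(&pool_list_lock);
>     pool = list_first_entry(&pool_list, typeof(*pool), list);
>     /* rotate this pool to the tail right after picking it */
>     list_move_tail(&pool->list, &pool_list);
>
>     page = drm_page_pool_remove(pool);
>     if (page)
>             nr_freed = drm_page_pool_free_pages(pool, page);
>     mutex_unlock(&pool_list_lock);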
>
> Thanks so much for your review and feedback! I'll try to get some of
> the easy suggestions integrated, and will have to figure out what to
> do about the re-licensing request.
>
> thanks
> -john
