Message-ID: <23336c399cd31738750fa060854f5a076b722300.camel@bootlin.com>
Date:   Thu, 04 Apr 2019 14:38:38 +0200
From:   Paul Kocialkowski <paul.kocialkowski@...tlin.com>
To:     Eric Anholt <eric@...olt.net>, dri-devel@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org
Cc:     David Airlie <airlied@...ux.ie>, Daniel Vetter <daniel@...ll.ch>,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        Maxime Ripard <maxime.ripard@...tlin.com>,
        Eben Upton <eben@...pberrypi.org>,
        Daniel Stone <daniel@...ishbar.org>
Subject: Re: [PATCH v4 4/4] drm/vc4: Allocate binner bo when starting to use
 the V3D

Hi,

On Wednesday, April 3, 2019 at 11:53 -0700, Eric Anholt wrote:
> Paul Kocialkowski <paul.kocialkowski@...tlin.com> writes:
> 
> > The binner bo is not required until the V3D is in use, so avoid
> > allocating it at probe and do it on the first non-dumb BO allocation.
> > Keep track of which clients are using the V3D and free the buffer
> > when none are left.
> > 
> > We also want to keep it alive across runtime suspend/resume to avoid
> > a re-allocation failing at resume, which happens when the CMA pool is
> > full at that point and results in a hard crash.
> > 
> > Signed-off-by: Paul Kocialkowski <paul.kocialkowski@...tlin.com>
> > ---
> >  drivers/gpu/drm/vc4/vc4_bo.c  | 32 ++++++++++++++++++++++++++++++++
> >  drivers/gpu/drm/vc4/vc4_drv.c |  9 +++++++++
> >  drivers/gpu/drm/vc4/vc4_drv.h |  4 ++++
> >  drivers/gpu/drm/vc4/vc4_v3d.c | 13 -------------
> >  4 files changed, 45 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> > index 88ebd681d7eb..b941f09b9378 100644
> > --- a/drivers/gpu/drm/vc4/vc4_bo.c
> > +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> > @@ -799,6 +799,30 @@ vc4_prime_import_sg_table(struct drm_device *dev,
> >  	return obj;
> >  }
> >  
> > +static int vc4_prepare_bin_bo(struct drm_device *dev,
> > +			      struct drm_file *file_priv)
> > +{
> > +	struct vc4_file *vc4file = file_priv->driver_priv;
> > +	struct vc4_dev *vc4 = to_vc4_dev(dev);
> > +	int ret;
> > +
> > +	if (!vc4->v3d)
> > +		return -ENODEV;
> > +
> > +	if (!vc4file->needs_bin_bo) {
> > +		atomic_inc(&vc4->bin_bo_usecnt);
> > +		vc4file->needs_bin_bo = true;
> > +	}
> > +
> > +	if (!vc4->bin_bo) {
> > +		ret = vc4_v3d_allocate_bin_bo(vc4);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> 
> This atomic usage looks really racy.  For example, multiple clients
> could call allocate at the same time and leak one.  Or this timeline:
> 
> us           them
>              dec count to 0
> inc count
> check bin_bo
>              free bin_bo

Oh, you're definitely right. Sorry I missed that.

> vc4_v3d_allocate_bin_bo() should probably become a vc4_v3d_bin_bo_get()
> returning a kref on the BO, called under a lock that protects against
> both one file_priv being dereferenced by multiple threads in the kernel
> at the same time (so a file_priv doesn't try to double-get its ref) and
> multiple file_privs trying to get the bin_bo at once.

Sounds good, I'll look into it and spin up a new revision soon.
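
For reference, here is a rough sketch of the get/put pair I have in
mind. The bin_bo_lock mutex, the bin_bo_kref field and the helper names
are only assumptions for illustration, not the final patch:

/* Sketch only: assumes a struct mutex bin_bo_lock and a struct kref
 * bin_bo_kref added to struct vc4_dev next to vc4->bin_bo, and reuses
 * the existing vc4_v3d_allocate_bin_bo() helper. */

static void bin_bo_release(struct kref *ref)
{
	struct vc4_dev *vc4 = container_of(ref, struct vc4_dev, bin_bo_kref);

	/* Last user is gone: drop the GEM reference and clear the pointer.
	 * Called with vc4->bin_bo_lock held by the final put. */
	drm_gem_object_put_unlocked(&vc4->bin_bo->base.base);
	vc4->bin_bo = NULL;
}

int vc4_v3d_bin_bo_get(struct vc4_dev *vc4)
{
	int ret = 0;

	mutex_lock(&vc4->bin_bo_lock);

	if (vc4->bin_bo) {
		/* Already allocated: just take another reference. */
		kref_get(&vc4->bin_bo_kref);
		goto unlock;
	}

	ret = vc4_v3d_allocate_bin_bo(vc4);
	if (!ret)
		kref_init(&vc4->bin_bo_kref);

unlock:
	mutex_unlock(&vc4->bin_bo_lock);
	return ret;
}

void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
{
	mutex_lock(&vc4->bin_bo_lock);
	kref_put(&vc4->bin_bo_kref, bin_bo_release);
	mutex_unlock(&vc4->bin_bo_lock);
}

vc4_prepare_bin_bo() would then call vc4_v3d_bin_bo_get(), guarded by a
per-file flag taken under a lock so a single file_priv only ever holds
one reference, and the matching vc4_v3d_bin_bo_put() would go in the
file close path.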

Cheers,

Paul

-- 
Paul Kocialkowski, Bootlin
Embedded Linux and kernel engineering
https://bootlin.com
