Message-ID: <20160316090221.02d0a699@recife.lan>
Date: Wed, 16 Mar 2016 09:02:21 -0300
From: Mauro Carvalho Chehab <mchehab@....samsung.com>
To: Sakari Ailus <sakari.ailus@....fi>
Cc: Shuah Khan <shuahkh@....samsung.com>, kyungmin.park@...sung.com,
a.hajda@...sung.com, s.nawrocki@...sung.com, kgene@...nel.org,
k.kozlowski@...sung.com, laurent.pinchart@...asonboard.com,
hyun.kwon@...inx.com, soren.brinkmann@...inx.com,
gregkh@...uxfoundation.org, perex@...ex.cz, tiwai@...e.com,
hans.verkuil@...co.com, lixiubo@...s.chinamobile.com,
javier@....samsung.com, g.liakhovetski@....de,
chehabrafael@...il.com, crope@....fi, tommi.franttila@...el.com,
dan.carpenter@...cle.com, prabhakar.csengg@...il.com,
hamohammed.sa@...il.com, der.herr@...r.at, navyasri.tech@...il.com,
Julia.Lawall@...6.fr, amitoj1606@...il.com,
linux-media@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-samsung-soc@...r.kernel.org,
linux-renesas-soc@...r.kernel.org, devel@...verdev.osuosl.org,
alsa-devel@...a-project.org
Subject: Re: [PATCH] media: add GFP flag to media_*() that could get called
in atomic context
On Wed, 16 Mar 2016 10:28:35 +0200
Sakari Ailus <sakari.ailus@....fi> wrote:
> Hi Mauro,
>
> On Tue, Mar 15, 2016 at 12:55:35PM -0300, Mauro Carvalho Chehab wrote:
> > On Mon, 14 Mar 2016 14:09:09 +0200
> > Sakari Ailus <sakari.ailus@....fi> wrote:
> >
> > > Hi Mauro,
> > >
...
> > > Notify callbacks, perhaps not, but the list is still protected by the
> > > spinlock. It perhaps is not likely that another process would change it but
> > > I don't think we can rely on that.
> >
> > I can see only 2 risks that the lock protects against:
> >
> > 1) mdev gets freed while an entity is being created. This is a problem
> > with the current memory protection scheme we're using. I guess the
> > only way to fix it is to use kref for mdev/entities/interfaces/links/pads.
> > This change doesn't make it better or worse.
> > Also, I don't think we have such a risk with the current devices.
> >
> > 2) a notifier may be inserted or removed by another driver while the
> > loop is running.
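Just to spell out the kref idea from (1) - a rough sketch only, not a real
patch; where exactly the get/put calls would go and the release function
are illustrative assumptions:

	/* Sketch: refcount mdev so it cannot be freed while entities,
	 * interfaces, links or pads created from it are still alive. */
	struct media_device {
		struct kref kref;
		/* ... existing fields ... */
	};

	static void media_device_release(struct kref *kref)
	{
		struct media_device *mdev =
			container_of(kref, struct media_device, kref);

		kfree(mdev);
	}

	kref_init(&mdev->kref);		/* at media_device_register() time */

	/* every entity/interface/link/pad creation takes a reference... */
	kref_get(&mdev->kref);

	/* ...and drops it when the object is destroyed */
	kref_put(&mdev->kref, media_device_release);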
> >
> > To avoid (2), I see 4 alternatives:
> >
> > a) keep the loop as proposed in this patch. As the list is navigated using
> > list_for_each_entry_safe(), I guess[1] it should be safe for some other
> > process to remove/add notify callbacks while the loop is running.
>
> list_for_each_entry_safe() does not protect against concurrent access, only
> against adding and removing list entries by the same user. List access
> serialisation is still needed, whether you use _safe() functions or not.
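To make the rule concrete - a minimal illustration only, not the actual
media-core code: the traversal and any concurrent add/remove have to take
the same lock; _safe() only lets the loop body itself delete the entry it
is currently visiting.

	/* walker */
	spin_lock(&mdev->lock);
	list_for_each_entry_safe(notify, next, &mdev->entity_notify, list) {
		/* removing the current entry here would be fine, since
		 * "next" was saved beforehand and the lock is held */
		notify->notify(entity, notify->notify_data);
	}
	spin_unlock(&mdev->lock);

	/* writer, e.g. another driver registering a notifier (new_notify
	 * is just an illustrative name) */
	spin_lock(&mdev->lock);
	list_add_tail(&new_notify->list, &mdev->entity_notify);
	spin_unlock(&mdev->lock);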
>
> >
> > [1] It *is* safe if the change is done inside the loop - but I'm not
> > 100% sure that it is safe if some other CPU touches the notify list.
>
> Indeed.
>
> >
> > b) Unlock/relock the spinlock every time:
> >
> > 	/* previous code that locks mdev->lock spinlock */
> >
> > 	/* invoke entity_notify callbacks */
> > 	list_for_each_entry_safe(notify, next, &mdev->entity_notify, list) {
> > 		spin_unlock(&mdev->lock);
> > 		notify->notify(entity, notify->notify_data);
> > 		spin_lock(&mdev->lock);
> > 	}
> >
> > 	spin_unlock(&mdev->lock);
> >
> > c) use a separate lock for the notify list - this seems to be overkill.
> >
> > d) Protect it with the graph traversal mutex. That sounds like the worst idea,
> > IMHO, as we'd be abusing the lock.
>
> I'd simply replace the spinlock with a mutex here. As we want to get rid of
> the graph mutex anyway in the long run, let's not mix the two as they're
> well separated now. As long as the mutex users (i.e. the notify() callback)
> do not sleep, the mutex is about as fast to use as the spinlock.
It could work. I added such a patch to an experimental branch where
I'm addressing a few issues with the au0828 unbind logic:
https://git.linuxtv.org/mchehab/experimental.git/log/?h=au0828-unbind-fixes
The patch itself is at:
https://git.linuxtv.org/mchehab/experimental.git/commit/?h=au0828-unbind-fixes&id=dba4d41bdfa6bb8dc51cb0f692102919b2b7c8b4
At least for au0828, it seems to work fine.
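For reference, the conversion boils down to something like this (a
simplified sketch, not the literal diff from that branch; it assumes the
existing mdev->lock spinlock is simply turned into a struct mutex):

	struct media_device {
		/* ... */
		struct mutex lock;	/* was: spinlock_t lock */
		struct list_head entity_notify;
		/* ... */
	};

	/* invoke the entity_notify callbacks with the mutex held; unlike
	 * under a spinlock, the callbacks are now allowed to sleep */
	mutex_lock(&mdev->lock);
	list_for_each_entry_safe(notify, next, &mdev->entity_notify, list)
		notify->notify(entity, notify->notify_data);
	mutex_unlock(&mdev->lock);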
Regards,
Mauro