Message-ID: <20160802165732.GA3310@localhost>
Date: Tue, 2 Aug 2016 09:57:33 -0700
From: Brian Norris <briannorris@...omium.org>
To: Lars-Peter Clausen <lars@...afoo.de>
Cc: Jonathan Cameron <jic23@...nel.org>,
Hartmut Knaack <knaack.h@....de>,
Peter Meerwald-Stadler <pmeerw@...erw.net>,
linux-iio@...r.kernel.org, linux-kernel@...r.kernel.org,
Guenter Roeck <linux@...ck-us.net>,
Brian Norris <computersforpeace@...il.com>
Subject: Re: iio: WARNING at kernel/sched/core.c:7630: do not call blocking
ops when !TASK_RUNNING

Hi Lars,

On Tue, Aug 02, 2016 at 03:06:39PM +0200, Lars-Peter Clausen wrote:
> On 08/02/2016 03:12 AM, Brian Norris wrote:
> > I'm seeing the following warnings when I read from an IIO char device,
> > with CONFIG_DEBUG_ATOMIC_SLEEP=y. I'm testing a v4.4 kernel, but AFAICT,
> > nothing too relevant has changed between that and v4.7:
> [...]
> > Have any of you seen this kind of issue before (perhaps most IIO users
> > are not using CONFIG_DEBUG_ATOMIC_SLEEP)? If the WARNING is really
> > correct, then this problem has really been around a while. It looks like
> > we have a wait_event_interruptible() called, with this call chain in the
> > 'condition' path:
> >
> > iio_buffer_ready()
> > -> iio_buffer_data_available() (i.e., iio_kfifo_buf_data_available())
> > -> mutex_lock()
> >
> > Calling mutex_lock() means we clobber the TASK_INTERRUPTIBLE state with
> > TASK_RUNNING -- hence, the WARNING. Should this be using a spinlock
> > instead? Or is there some way to refactor this to avoid calling these
> > sleeping functions in the wait_event*() condition?
>
> Hi,
>
> Yes, this is an issue, thanks for pointing this out. It has been there for a
> while, my fault, sorry for that. We need a solution like the one pointed
> out in this article (https://lwn.net/Articles/628628/).

Ah, thanks for the pointer. I thought this problem seemed familiar, but
I couldn't find a canonical solution. The wait_woken() solution looks
like a good starting point, although it's definitely got more
boilerplate... It also requires a 'timeout'; I guess we'd want
MAX_SCHEDULE_TIMEOUT for this case?
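
Something along these lines, perhaps (completely untested sketch -- the
function and field names below just follow my reading of
drivers/iio/industrialio-buffer.c, so treat them as assumptions rather
than a real proposal):

	/*
	 * Sketch: replace wait_event_interruptible() with the
	 * wait_woken() pattern from the LWN article, so the (sleeping)
	 * iio_buffer_ready() check runs while we're in TASK_RUNNING.
	 */
	static int iio_buffer_wait_for_data(struct iio_dev *indio_dev,
					    struct iio_buffer *rb,
					    size_t to_wait, size_t to_read)
	{
		DEFINE_WAIT_FUNC(wait, woken_wake_function);
		int ret = 0;

		add_wait_queue(&rb->pollq, &wait);
		do {
			if (!indio_dev->info) {
				ret = -ENODEV;
				break;
			}

			/* May take the kfifo mutex; safe in TASK_RUNNING */
			if (iio_buffer_ready(indio_dev, rb, to_wait, to_read))
				break;

			if (signal_pending(current)) {
				ret = -ERESTARTSYS;
				break;
			}

			wait_woken(&wait, TASK_INTERRUPTIBLE,
				   MAX_SCHEDULE_TIMEOUT);
		} while (1);
		remove_wait_queue(&rb->pollq, &wait);

		return ret;
	}

The read path would then call a helper like this before copying data
out, instead of open-coding wait_event_interruptible() with a sleeping
condition.
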
Do you want to cook a patch, or should I?

Brian