Message-ID: <20120409122950.GA21833@lizard>
Date: Mon, 9 Apr 2012 16:29:50 +0400
From: Anton Vorontsov <anton.vorontsov@...aro.org>
To: Pekka Enberg <penberg@...nel.org>
Cc: Leonid Moiseichuk <leonid.moiseichuk@...ia.com>,
John Stultz <john.stultz@...aro.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linaro-kernel@...ts.linaro.org
Subject: Re: [PATCH 1/3] vmevent: Should not grab mutex in the atomic context
On Mon, Apr 09, 2012 at 11:40:31AM +0300, Pekka Enberg wrote:
> On Mon, 2012-04-09 at 03:38 +0400, Anton Vorontsov wrote:
> > vmevent grabs a mutex in the atomic context, and so this pops up:
> >
> > BUG: sleeping function called from invalid context at kernel/mutex.c:271
> > in_atomic(): 1, irqs_disabled(): 0, pid: 0, name: swapper/0
[...]
> > This patch fixes the issue by removing the mutex and making the logic
> > lock-free.
> >
> > Signed-off-by: Anton Vorontsov <anton.vorontsov@...aro.org>
>
> What guarantees that there's only one thread writing to struct
> vmevent_attr::value in vmevent_sample() now that the mutex is gone?
Well, it is called from the timer function, which has the same guarantees
as an interrupt handler: only one thread of execution can run it at a
time (unlike a bare softirq handler), so we don't need to worry about
racing with ourselves.
If you're concerned about several timer instances accessing the same
vmevent_watch, I don't see how that is possible: we allocate the
vmevent_watch together with its timer instance in vmevent_fd(), so there
is always exactly one timer per vmevent_watch.
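To illustrate the single-writer argument, here is a minimal userspace
sketch of the pattern being described. The struct layout and names
(vmevent_attr, vmevent_watch, vmevent_sample, vmevent_fd_alloc) are
simplified stand-ins, not the actual vmevent code: the point is only
that the timer callback is the sole writer of attr->value, and that
watch and timer share one allocation, so no mutex is needed for the
stores.

```c
/* Hypothetical sketch of the vmevent single-writer pattern.
 * Names and fields are illustrative, not the real driver's layout. */
#include <stdint.h>
#include <stdlib.h>

#define VMEVENT_MAX_ATTRS 4

struct vmevent_attr {
	uint64_t value;		/* last sampled value */
};

struct vmevent_watch {
	/* In the real driver a deferrable timer is embedded here too,
	 * so the watch and its timer share a single lifetime. */
	struct vmevent_attr attrs[VMEVENT_MAX_ATTRS];
};

/* Stand-in for the timer callback: the only writer of attr->value.
 * A timer callback never runs concurrently with itself, so these
 * plain stores need no lock. */
static void vmevent_sample(struct vmevent_watch *watch, uint64_t sampled)
{
	int i;

	for (i = 0; i < VMEVENT_MAX_ATTRS; i++)
		watch->attrs[i].value = sampled + i;
}

/* Stand-in for vmevent_fd(): watch and timer are allocated together,
 * so there is exactly one timer -- one writer -- per watch. */
static struct vmevent_watch *vmevent_fd_alloc(void)
{
	return calloc(1, sizeof(struct vmevent_watch));
}
```

A reader on another CPU may still observe a torn or stale mix of attr
values without a lock; the claim above is only that concurrent *writers*
cannot exist, because each watch owns exactly one timer.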
Thanks,
--
Anton Vorontsov
Email: cbouatmailru@...il.com