Message-ID: <5581385D.9060608@bmw-carit.de>
Date: Wed, 17 Jun 2015 11:05:33 +0200
From: Daniel Wagner <daniel.wagner@...-carit.de>
To: Alexei Starovoitov <ast@...mgrid.com>, <paulmck@...ux.vnet.ibm.com>
CC: LKML <linux-kernel@...r.kernel.org>, <rostedt@...dmis.org>
Subject: Re: call_rcu from trace_preempt
On 06/17/2015 10:11 AM, Daniel Wagner wrote:
> On 06/16/2015 07:20 PM, Alexei Starovoitov wrote:
>> On 6/16/15 5:38 AM, Daniel Wagner wrote:
>>> static int free_thread(void *arg)
>>> +{
>>> +	unsigned long flags;
>>> +	struct htab_elem *l;
>>> +
>>> +	while (!kthread_should_stop()) {
>>> +		spin_lock_irqsave(&elem_freelist_lock, flags);
>>> +		while (!list_empty(&elem_freelist)) {
>>> +			l = list_entry(elem_freelist.next,
>>> +				       struct htab_elem, list);
>>> +			list_del(&l->list);
>>> +			kfree(l);
>>
> Anyway, I changed the above kfree() to a kfree_rcu() and it explodes
> again, with the same stack trace we have seen.
Correction: I did this without the rcu_is_watching() change. With that
patch applied it works fine again.
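For reference, the kfree() -> kfree_rcu() change described above amounts to something like the sketch below. It assumes struct htab_elem embeds a struct rcu_head; the field name "rcu" is an assumption here, not taken from the actual patch:

```c
/* Sketch only -- assumes struct htab_elem carries a struct rcu_head;
 * the member name "rcu" is illustrative. */
struct htab_elem {
	struct list_head list;
	struct rcu_head rcu;	/* used by kfree_rcu() below */
	/* ... key/value payload ... */
};

		/* in free_thread(), instead of kfree(l): */
		kfree_rcu(l, rcu);	/* free after a grace period */
```

kfree_rcu() queues the object for freeing after an RCU grace period instead of freeing it immediately, which is why it still trips over the same recursion unless RCU is actually watching at that point.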
> Steven's suggestion of deferring the work via irq_work results in the
> same stack trace. (Now I get cold feet, without the nice heat from the
> CPU busy looping...)
That one is still not working, and it also makes the system really,
really slow. I guess I am still doing something completely wrong.
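For what it's worth, the irq_work variant I tried looks roughly like the sketch below (not the actual patch; the lock, list, and struct names follow the earlier snippet, and the work item/function names are made up):

```c
/* Sketch only: drain the freelist from an irq_work callback instead
 * of a kthread.  Names other than the irq_work API are illustrative. */
#include <linux/irq_work.h>

static struct irq_work elem_free_work;

static void elem_free_work_fn(struct irq_work *work)
{
	unsigned long flags;
	struct htab_elem *l;

	spin_lock_irqsave(&elem_freelist_lock, flags);
	while (!list_empty(&elem_freelist)) {
		l = list_entry(elem_freelist.next, struct htab_elem, list);
		list_del(&l->list);
		kfree(l);
	}
	spin_unlock_irqrestore(&elem_freelist_lock, flags);
}

/* once, at setup time: */
init_irq_work(&elem_free_work, elem_free_work_fn);

/* from the trace_preempt path, after putting the element on the
 * freelist: */
irq_work_queue(&elem_free_work);
```

irq_work_queue() raises a self-IPI so the callback runs in hard-irq context shortly afterwards, which is what defers the kfree() out of the tracing path.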