Message-ID: <CABE8wws27XooqGaZCbiqhykP1jP99eUE0=DKV6Lc2sha0sWK0A@mail.gmail.com>
Date: Thu, 12 Jul 2012 09:41:02 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Li Zhong <zhong@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, arjan@...ux.intel.com,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Christian Kujau <lists@...dbynature.de>,
Cong Wang <xiyou.wangcong@...il.com>, JBottomley@...allels.com
Subject: Re: [PATCH RESEND] Fix a dead loop in async_synchronize_full()
[ adding James ]
On Thu, Jul 12, 2012 at 2:56 AM, Li Zhong <zhong@...ux.vnet.ibm.com> wrote:
> I have tested your pending patches, they fix the problem here.
Thanks!
James, if you get the chance please add:
Tested-by: Li Zhong <zhong@...ux.vnet.ibm.com>
...to the pending set, or I can just resend. Let me know.
> But with ASYNC_DOMAIN_EXCLUSIVE added for the domains defined on the
> stack, I think we lack a function that could wait for all the work in
> all domains (though perhaps we don't actually need such an interface).
>
> Also, I think it's not good to exclude them from
> async_synchronize_full() just because they are defined on the stack.
ASYNC_DOMAIN can be used to allow on-stack domains to be flushed via
async_synchronize_full(). However, if you know ahead of time that
your work items do not need to be anonymously flushed, and you know
the lifetime of your domain, then ASYNC_DOMAIN_EXCLUSIVE +
async_synchronize_full_domain() are there to prevent unnecessary
entanglements.
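
For illustration, a minimal (untested) sketch of that usage; the
my_probe_one()/my_probe_all() names and struct my_host are made up:

#include <linux/async.h>

static void my_probe_one(void *data, async_cookie_t cookie)
{
	/* per-device probe work, run from the async thread pool */
}

static void my_probe_all(struct my_host *host)
{
	/* on-stack and unregistered: invisible to async_synchronize_full() */
	ASYNC_DOMAIN_EXCLUSIVE(probe_domain);
	int i;

	for (i = 0; i < host->nr_devices; i++)
		async_schedule_domain(my_probe_one, &host->devs[i],
				      &probe_domain);

	/* flush only this domain's work before it goes out of scope */
	async_synchronize_full_domain(&probe_domain);
}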
If for some reason you want temporary on-stack domains to be globally
visible, I included an async_unregister_domain() routine to make the
API complete. It carries a comment noting that ASYNC_DOMAIN_EXCLUSIVE
is preferred for such domains.
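
Roughly, again untested and with made-up names (my_scan_one() would
have the same signature as my_probe_one() above):

static void my_scan(struct my_host *host)
{
	/*
	 * ASYNC_DOMAIN() marks the domain registered, so
	 * async_synchronize_full() will also flush it
	 */
	ASYNC_DOMAIN(scan_domain);
	int i;

	for (i = 0; i < host->nr_devices; i++)
		async_schedule_domain(my_scan_one, &host->devs[i],
				      &scan_domain);

	async_synchronize_full_domain(&scan_domain);
	/* about to leave scope: drop the domain from the global list */
	async_unregister_domain(&scan_domain);
}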
--
Dan