Date:	Tue, 16 Jun 2009 10:17:40 -0400
From:	Gregory Haskins <ghaskins@...ell.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	kvm@...r.kernel.org, linux-kernel@...r.kernel.org, avi@...hat.com,
	davidel@...ilserver.org, paulmck@...ux.vnet.ibm.com
Subject: Re: [KVM-RFC PATCH 1/2] eventfd: add an explicit srcu based notifier interface

Michael S. Tsirkin wrote:
> On Mon, Jun 15, 2009 at 10:29:56PM -0400, Gregory Haskins wrote:
>   
>> irqfd and its underlying implementation, eventfd, currently utilize
>> the embedded wait-queue in eventfd for signal notification.  The nice thing
>> about this design decision is that it re-uses the existing
>> eventfd/wait-queue code and it generally works well....with several
>> limitations.
>>
>> One of the limitations is that notification callbacks are always called
>> inside a spin_lock_irqsave critical section.  Another limitation is
>> that it is very difficult to build a system that can receive release
>> notification without being racy.
>>
>> Therefore, we introduce a new registration interface that is SRCU based
>> instead of wait-queue based, and implement the internal wait-queue
>> infrastructure in terms of this new interface.  We then convert irqfd
>> to use this new interface instead of the existing wait-queue code.
>>
>> The end result is that we now have the opportunity to run the interrupt
>> injection code serially to the callback (when the signal is raised from
>> process-context, at least) instead of always deferring the injection to a
>> work-queue.
>>
>> Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
>> CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>> CC: Davide Libenzi <davidel@...ilserver.org>
>> ---
>>
>>  fs/eventfd.c            |  115 +++++++++++++++++++++++++++++++++++++++++++----
>>  include/linux/eventfd.h |   30 ++++++++++++
>>  virt/kvm/eventfd.c      |  114 +++++++++++++++++++++--------------------------
>>  3 files changed, 188 insertions(+), 71 deletions(-)
>>
>> diff --git a/fs/eventfd.c b/fs/eventfd.c
>> index 72f5f8d..505d5de 100644
>> --- a/fs/eventfd.c
>> +++ b/fs/eventfd.c
>> @@ -30,8 +30,47 @@ struct eventfd_ctx {
>>  	 */
>>  	__u64 count;
>>  	unsigned int flags;
>> +	struct srcu_struct srcu;
>> +	struct list_head nh;
>> +	struct eventfd_notifier notifier;
>>  };
>>  
>> +static void _eventfd_wqh_notify(struct eventfd_notifier *en)
>> +{
>> +	struct eventfd_ctx *ctx = container_of(en,
>> +					       struct eventfd_ctx,
>> +					       notifier);
>> +
>> +	if (waitqueue_active(&ctx->wqh))
>> +		wake_up_poll(&ctx->wqh, POLLIN);
>> +}
>> +
>> +static void _eventfd_notify(struct eventfd_ctx *ctx)
>> +{
>> +	struct eventfd_notifier *en;
>> +	int idx;
>> +
>> +	idx = srcu_read_lock(&ctx->srcu);
>> +
>> +	/*
>> +	 * The goal here is to allow the notification to be preemptible
>> +	 * as often as possible.  We cannot achieve this with the basic
>> +	 * wqh mechanism because it requires the wqh->lock.  Therefore
>> +	 * we have an internal srcu list mechanism of which the wqh is
>> +	 * a client.
>> +	 *
>> +	 * Not all paths will invoke this function in process context.
>> +	 * Callers should check for suitable state before assuming they
>> +	 * can sleep (such as with preemptible()).  Paul McKenney assures
>> +	 * me that srcu_read_lock is compatible with in-atomic, as long as
>> +	 * the code within the critical section is also compatible.
>> +	 */
>> +	list_for_each_entry_rcu(en, &ctx->nh, list)
>> +		en->ops->signal(en);
>> +
>> +	srcu_read_unlock(&ctx->srcu, idx);
>> +}
>> +
>>  /*
>>   * Adds "n" to the eventfd counter "count". Returns "n" in case of
>>   * success, or a value lower than "n" in case of counter overflow.
>>     
>
> This is ugly, isn't it? With CONFIG_PREEMPT=no preemptible() is always false.
>   

As an aside, this is something I would like to address.  I keep running
into this pattern where I could do something in-line if I had a
"can_sleep()" predicate.  Otherwise, I have to punt to something like a
workqueue, which adds latency.  The closest thing I have to "can_sleep()"
is preemptible(), which, as you correctly pointed out, only works with
CONFIG_PREEMPT=y.

It's been a while since I looked into it, but one of the barriers that
would need to be overcome is the fact that the preempt_count accounting
gets compiled away with CONFIG_PREEMPT=n.  It is conceivable that we
could make the preempt_count logic a config variable independent of
CONFIG_PREEMPT, to provide a can_sleep() macro without requiring
full-blown preemption to be enabled.  So my questions would be as follows:

a) Is the community conducive to such an idea?
b) Are there other things to consider/fix besides the lack of
preempt_count in order to implement such a beast?

-Greg



