Date:	Tue, 12 May 2009 17:44:58 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	tom.leiming@...il.com
Cc:	arjan@...radead.org, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org
Subject: Re: [PATCH] kernel/async.c:introduce async_schedule*_atomic

On Tue, May 12, 2009 at 11:13:42PM +0800, tom.leiming@...il.com wrote:
> From: Ming Lei <tom.leiming@...il.com>
> 
> async_schedule*() must not be called from atomic contexts, because
> when memory is low or there is already too much work pending, the
> async function is run synchronously and may sleep.
> 
> This patch fixes the comments of async_schedule*() and introduces
> async_schedule*_atomic() to allow scheduling from atomic contexts
> safely.



Ah, great. Such a helper could easily replace some workqueues that
receive only rare jobs (submitted from atomic context) but still need
to exist because those jobs take too much time to be enqueued on kevents.

A good candidate: kpsmoused!
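
Something along these lines is what I have in mind (untested sketch;
struct my_dev and my_dev_resync() are made up, this is not the real
psmouse code):

	/* async callback: runs later in a thread that may sleep */
	static void my_dev_resync_async(void *data, async_cookie_t cookie)
	{
		struct my_dev *dev = data;

		my_dev_resync(dev);	/* slow, may sleep */
	}

	/* called from the interrupt handler, i.e. atomic context */
	static void my_dev_kick_resync(struct my_dev *dev)
	{
		/* with this patch, 0 means the job was not submitted */
		if (!async_schedule_atomic(my_dev_resync_async, dev))
			pr_warn("resync could not be scheduled\n");
	}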


 
> Signed-off-by: Ming Lei <tom.leiming@...il.com>
> ---
>  include/linux/async.h |    3 ++
>  kernel/async.c        |   56 ++++++++++++++++++++++++++++++++++++++++++------
>  2 files changed, 52 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/async.h b/include/linux/async.h
> index 68a9530..ede9849 100644
> --- a/include/linux/async.h
> +++ b/include/linux/async.h
> @@ -19,6 +19,9 @@ typedef void (async_func_ptr) (void *data, async_cookie_t cookie);
>  extern async_cookie_t async_schedule(async_func_ptr *ptr, void *data);
>  extern async_cookie_t async_schedule_domain(async_func_ptr *ptr, void *data,
>  					    struct list_head *list);
> +extern async_cookie_t async_schedule_atomic(async_func_ptr *ptr, void *data);
> +extern async_cookie_t async_schedule_domain_atomic(async_func_ptr *ptr, \


trailing backslash?


> +					void *data, struct list_head *list);
>  extern void async_synchronize_full(void);
>  extern void async_synchronize_full_domain(struct list_head *list);
>  extern void async_synchronize_cookie(async_cookie_t cookie);
> diff --git a/kernel/async.c b/kernel/async.c
> index 968ef94..6bf565b 100644
> --- a/kernel/async.c
> +++ b/kernel/async.c
> @@ -172,12 +172,13 @@ out:
>  }
>  
>  
> -static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct list_head *running)
> +static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, \


another one.


> +				struct list_head *running, int atomic)
>  {
>  	struct async_entry *entry;
>  	unsigned long flags;
>  	async_cookie_t newcookie;
> -	
> +	int  sync_run = 0;
>  
>  	/* allow irq-off callers */
>  	entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC);
> @@ -186,7 +187,9 @@ static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct l
>  	 * If we're out of memory or if there's too much work
>  	 * pending already, we execute synchronously.
>  	 */
> -	if (!async_enabled || !entry || atomic_read(&entry_count) > MAX_WORK) {
> +	sync_run = !async_enabled || !entry || \
> +			atomic_read(&entry_count) > MAX_WORK;
> +	if (sync_run && !atomic) {
>  		kfree(entry);
>  		spin_lock_irqsave(&async_lock, flags);
>  		newcookie = next_cookie++;
> @@ -195,7 +198,10 @@ static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct l
>  		/* low on memory.. run synchronously */
>  		ptr(data, newcookie);
>  		return newcookie;
> +	} else if (sync_run) {
> +		return 0;


Then it's up to the caller to handle the error. I guess that's the best
way to do it.
You could put the entry on a separate list and retry later, but that
wouldn't be a good idea because the work submitter could never be sure
of the end result.

So I think you made the right choice.
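
To make the contract explicit, I imagine callers will end up doing
something like this (hypothetical sketch, my_async_func and my_data
are made up):

	async_cookie_t cookie;

	cookie = async_schedule_atomic(my_async_func, my_data);
	if (!cookie) {
		/*
		 * The job was NOT submitted (out of memory or too much
		 * async work already pending). The caller decides what
		 * to do: fall back to a workqueue, retry later from a
		 * sleepable context, or drop the work and report it.
		 */
	}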



>  	}
> +
>  	entry->func = ptr;
>  	entry->data = data;
>  	entry->running = running;
> @@ -215,15 +221,31 @@ static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct l
>   * @data: data pointer to pass to the function
>   *
>   * Returns an async_cookie_t that may be used for checkpointing later.
> - * Note: This function may be called from atomic or non-atomic contexts.
> + * Note: This function may only be called from non-atomic contexts;
> + * 	it is not safe to call from atomic contexts. Please use
> + * 	async_schedule_atomic() in atomic contexts.
>   */
>  async_cookie_t async_schedule(async_func_ptr *ptr, void *data)
>  {
> -	return __async_schedule(ptr, data, &async_running);
> +	return __async_schedule(ptr, data, &async_running, 0);
>  }
>  EXPORT_SYMBOL_GPL(async_schedule);
>  
>  /**
> + * async_schedule_atomic - schedule a function for asynchronous execution
> + * @ptr: function to execute asynchronously
> + * @data: data pointer to pass to the function
> + *
> + * Returns an async_cookie_t that may be used for checkpointing later.
> + * Note: This function can be called from atomic contexts safely.
> + */



Please add a comment saying that this can fail and returns 0
in that case.

Maybe it would even be worth detailing the error. A cookie
can't be negative, so you could use the common error pattern:

-ENOMEM when running out of memory
-ENOSPC when the maximum number of pending async works has been exceeded
....
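
Roughly, something like this (just to illustrate the idea; since
async_cookie_t is unsigned, the errno has to be cast and checked
IS_ERR_VALUE-style):

	/* in the atomic branch of __async_schedule(), instead of "return 0": */
	if (!entry)
		return (async_cookie_t)-ENOMEM;	/* allocation failed */
	if (atomic_read(&entry_count) > MAX_WORK)
		return (async_cookie_t)-ENOSPC;	/* too much work pending */

	/* caller side */
	cookie = async_schedule_atomic(func, data);
	if (IS_ERR_VALUE((unsigned long)cookie))
		return (long)cookie;	/* propagate -ENOMEM / -ENOSPC */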

Not sure such precision about the error path is needed though, it's just
an idea.


> +async_cookie_t async_schedule_atomic(async_func_ptr *ptr, void *data)
> +{
> +	return __async_schedule(ptr, data, &async_running, 1);
> +}
> +EXPORT_SYMBOL_GPL(async_schedule_atomic);
> +
> +/**
>   * async_schedule_domain - schedule a function for asynchronous execution within a certain domain
>   * @ptr: function to execute asynchronously
>   * @data: data pointer to pass to the function
> @@ -233,16 +255,36 @@ EXPORT_SYMBOL_GPL(async_schedule);
>   * @running may be used in the async_synchronize_*_domain() functions
>   * to wait within a certain synchronization domain rather than globally.
>   * A synchronization domain is specified via the running queue @running to use.
> - * Note: This function may be called from atomic or non-atomic contexts.
> + * Note: This function may only be called from non-atomic contexts;
> + * 	it is not safe to call from atomic contexts. Please use
> + * 	async_schedule_domain_atomic() in atomic contexts.
>   */
>  async_cookie_t async_schedule_domain(async_func_ptr *ptr, void *data,
>  				     struct list_head *running)
>  {
> -	return __async_schedule(ptr, data, running);
> +	return __async_schedule(ptr, data, running, 0);
>  }
>  EXPORT_SYMBOL_GPL(async_schedule_domain);
>  
>  /**
> + * async_schedule_domain_atomic - schedule a function for asynchronous execution within a certain domain
> + * @ptr: function to execute asynchronously
> + * @data: data pointer to pass to the function
> + * @running: running list for the domain
> + *
> + * Returns an async_cookie_t that may be used for checkpointing later.
> + * @running may be used in the async_synchronize_*_domain() functions
> + * to wait within a certain synchronization domain rather than globally.
> + * A synchronization domain is specified via the running queue @running to use.
> + * Note: This function can be called from atomic contexts safely.
> + */
> +async_cookie_t async_schedule_domain_atomic(async_func_ptr *ptr, void *data,
> +				     struct list_head *running)
> +{
> +	return __async_schedule(ptr, data, running, 1);
> +}
> +EXPORT_SYMBOL_GPL(async_schedule_domain_atomic);
> +/**
>   * async_synchronize_full - synchronize all asynchronous function calls
>   *
>   * This function waits until all asynchronous function calls have been done.
> -- 
> 1.6.0.GIT
> 


Otherwise it looks good, and IMHO it is needed.

