Date:	Wed, 22 Aug 2012 11:37:19 +0800
From:	Bryan Wu <bryan.wu@...onical.com>
To:	"Kim, Milo" <Milo.Kim@...com>, Tejun Heo <tj@...nel.org>
Cc:	Richard Purdie <rpurdie@...ys.net>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-leds@...r.kernel.org" <linux-leds@...r.kernel.org>
Subject: Re: [PATCH 2/4] leds-lp5523: set the brightness to 0 forcely on
 removing the driver

On Tue, Aug 21, 2012 at 5:03 PM, Kim, Milo <Milo.Kim@...com> wrote:
> Turning off the brightness of each channel is required
> when removing the driver.
>
> So use flush_work_sync() rather than cancel_work_sync() to wait for
> any pending brightness work to complete.

Hmm, I think we should still use cancel_work_sync() here, based on your
description. Please see the excerpt from Tejun's patch below (adding him
to this loop):
[PATCH 4/6] workqueue: deprecate flush[_delayed]_work_sync()
---
Before this patchset,

 flush_work()

        flush the last queued instance of the work item.  If it got
        queued on multiple CPUs, it only considers the last queued
        instance.  The work item could still be executing on other
        CPUs and the flush might become a noop if there are competing
        queueing operations on another CPU.

 flush_work_sync()

        flush_work() + wait for executing instances on all CPUs.  The
        flush_work() part may still become a noop if there's a
        competing queueing operation.

 cancel_work()

        Dequeue the work item if it's pending.  Doesn't care about
        whether it's executing or not.

 cancel_work_sync()

        cancel_work() + flush_work_sync().


After this patchset,

 flush_work()

        flush the work item.  Any queueing that happened previously is
        guaranteed to have finished execution on return.  If no
        further queueing happened after the flush started, the work
        item is guaranteed to be idle on return.

 cancel_work()

        Same as before.

 cancel_work_sync()

        cancel_work() followed by flush_work().  The same semantics as
        del_timer_sync().
---
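
To make the before/after semantics above concrete, here is a minimal
sketch (a made-up demo module, not part of this series, assuming a tree
where flush_work_sync() still exists):

/* Hypothetical demo module illustrating the two teardown calls. */
#include <linux/module.h>
#include <linux/workqueue.h>

static void demo_handler(struct work_struct *work)
{
        pr_info("demo: work item executed\n");
}

static DECLARE_WORK(demo_work, demo_handler);

static int __init demo_init(void)
{
        schedule_work(&demo_work);      /* queue one instance */
        return 0;
}

static void __exit demo_exit(void)
{
        /*
         * flush_work_sync(&demo_work) would wait until the queued
         * instance has finished executing, so demo_handler() is
         * guaranteed to have run.  cancel_work_sync() instead drops
         * the item if it is still pending (demo_handler() may never
         * run) and only waits if it is already executing.
         */
        cancel_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");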

cancel_work_sync() won't execute a pending work item at all, it just
cancels it, whereas flush_work_sync() waits for the pending work to
execute before returning.
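
In the remove loop itself (a rough sketch of the hunk quoted below;
the struct names are simplified stand-ins for the real lp5523 types),
the difference looks like this:

#include <linux/leds.h>
#include <linux/workqueue.h>

/* Simplified stand-ins for the lp5523 driver structures. */
struct demo_led {
        struct led_classdev cdev;
        struct work_struct brightness_work;
};

struct demo_chip {
        int num_leds;
        struct demo_led *leds;
};

static void demo_unregister_leds(struct demo_chip *chip)
{
        int i;

        for (i = 0; i < chip->num_leds; i++) {
                led_classdev_unregister(&chip->leds[i].cdev);
                /*
                 * flush_work_sync() here (Milo's patch) guarantees a
                 * queued brightness update has executed before we
                 * return; cancel_work_sync() simply drops a pending
                 * update without running it.
                 */
                cancel_work_sync(&chip->leds[i].brightness_work);
        }
}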

-Bryan

>
> Signed-off-by: Milo(Woogyom) Kim <milo.kim@...com>
> ---
>  drivers/leds/leds-lp5523.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/leds/leds-lp5523.c b/drivers/leds/leds-lp5523.c
> index 9fd9a92..f090819 100644
> --- a/drivers/leds/leds-lp5523.c
> +++ b/drivers/leds/leds-lp5523.c
> @@ -974,7 +974,7 @@ static int __devinit lp5523_probe(struct i2c_client *client,
>  fail2:
>         for (i = 0; i < chip->num_leds; i++) {
>                 led_classdev_unregister(&chip->leds[i].cdev);
> -               cancel_work_sync(&chip->leds[i].brightness_work);
> +               flush_work_sync(&chip->leds[i].brightness_work);
>         }
>  fail1:
>         if (pdata->enable)
> @@ -993,7 +993,7 @@ static int lp5523_remove(struct i2c_client *client)
>
>         for (i = 0; i < chip->num_leds; i++) {
>                 led_classdev_unregister(&chip->leds[i].cdev);
> -               cancel_work_sync(&chip->leds[i].brightness_work);
> +               flush_work_sync(&chip->leds[i].brightness_work);
>         }
>
>         if (chip->pdata->enable)
> --
> 1.7.2.5
>
>
> Best Regards,
> Milo
>
>



-- 
Bryan Wu <bryan.wu@...onical.com>
Kernel Developer    +86.186-168-78255 Mobile
Canonical Ltd.      www.canonical.com
Ubuntu - Linux for human beings | www.ubuntu.com