Date:   Wed, 3 Apr 2019 10:04:16 -0700
From:   Brian Norris <briannorris@...omium.org>
To:     Douglas Anderson <dianders@...omium.org>
Cc:     Benson Leung <bleung@...omium.org>,
        Enric Balletbo i Serra <enric.balletbo@...labora.com>,
        Alexandru M Stan <amstan@...omium.org>,
        "open list:ARM/Rockchip SoC..." <linux-rockchip@...ts.infradead.org>,
        Simon Glass <sjg@...omium.org>,
        Guenter Roeck <groeck@...omium.org>,
        Mark Brown <broonie@...nel.org>, ryandcase@...omium.org,
        rspangler@...omium.org, Matthias Kaehlcke <mka@...omium.org>,
        Heiko Stuebner <heiko@...ech.de>,
        Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] platform/chrome: cros_ec_spi: Transfer messages at
 high priority

I know some of this was hashed out last night, but I wasn't reading my
email then to interject ;)

On Wed, Apr 3, 2019 at 9:05 AM Douglas Anderson <dianders@...omium.org> wrote:
> +static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
> +                                struct cros_ec_command *ec_msg,
> +                                cros_ec_xfer_fn_t fn)
> +{
> +       struct cros_ec_xfer_work_params params;
> +
> +       INIT_WORK(&params.work, cros_ec_xfer_high_pri_work);
> +       params.ec_dev = ec_dev;
> +       params.ec_msg = ec_msg;
> +       params.fn = fn;
> +       init_completion(&params.completion);
> +
> +       /*
> +        * This looks a bit ridiculous.  Why do the work on a
> +        * different thread if we're just going to block waiting for
> +        * the thread to finish?  The key here is that the thread is
> +        * running at high priority but the calling context might not
> +        * be.  We need to be at high priority to avoid getting
> +        * context switched out for too long and the EC giving up on
> +        * the transfer.
> +        */
> +       queue_work(system_highpri_wq, &params.work);

Does anybody know what the definition of "too long" is for the phrase
"Don't queue works which can run for too long" in the documentation?

> +       wait_for_completion(&params.completion);

I think flush_workqueue() was discussed and rejected, but what about
flush_work()? Then you don't have to worry about the rest of the
contents of the workqueue -- just your own work -- and I think you
could avoid the 'completion'.

You might also have a tiny race in the current implementation, since
(a) you can't queue the same work item twice and
(b) technically, the complete() call is still while the work item is
running -- you don't really guarantee the work item has finished
before you continue.
So the combination of (a) and (b) means that moving from one xfer to
the next, you might not successfully queue your work at all. You could
probably test this by checking the return value of queue_work() under
a heavy EC workload. Avoiding the completion would also avoid this
race.
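For what it's worth, a flush_work()-based version might look something
like the sketch below. This is untested and just meant to illustrate
the suggestion; it assumes the same cros_ec_xfer_work_params structure
from your patch, minus the completion field. flush_work() doesn't
return until the work item has finished executing, which sidesteps
both (a) and (b) above:

```c
/*
 * Hypothetical rework of cros_ec_xfer_high_pri() using flush_work()
 * instead of a completion.  Not compiled or tested.
 */
static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
				 struct cros_ec_command *ec_msg,
				 cros_ec_xfer_fn_t fn)
{
	struct cros_ec_xfer_work_params params = {
		.ec_dev = ec_dev,
		.ec_msg = ec_msg,
		.fn = fn,
	};

	/* On-stack work items need the _ONSTACK variant for lockdep. */
	INIT_WORK_ONSTACK(&params.work, cros_ec_xfer_high_pri_work);
	queue_work(system_highpri_wq, &params.work);

	/*
	 * Unlike waiting on a completion signaled from inside the work
	 * function, flush_work() only returns once the work item has
	 * fully finished, so there's no window where the next transfer
	 * could try to re-queue a still-running work item.
	 */
	flush_work(&params.work);
	destroy_work_on_stack(&params.work);

	return params.ret;
}
```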

Brian

> +
> +       return params.ret;
> +}
