Date:   Wed, 27 Sep 2017 02:35:25 -0400 (EDT)
From:   Pankaj Gupta <pagupta@...hat.com>
To:     Dmitry Torokhov <dmitry.torokhov@...il.com>
Cc:     Amos Kong <akong@...hat.com>, linux-crypto@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Rusty Russell <rusty@...tcorp.com.au>, kvm@...r.kernel.org,
        Michael Buesch <m@...s.ch>, Matt Mackall <mpm@...enic.com>,
        amit shah <amit.shah@...hat.com>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 REPOST 1/6] hw_random: place mutex around read
 functions and buffers.


> 
> On Tue, Sep 26, 2017 at 02:36:57AM -0400, Pankaj Gupta wrote:
> > 
> > > 
> > > A bit late to a party, but:
> > > 
> > > On Mon, Dec 8, 2014 at 12:50 AM, Amos Kong <akong@...hat.com> wrote:
> > > > From: Rusty Russell <rusty@...tcorp.com.au>
> > > >
> > > > There's currently a big lock around everything, and it means that we
> > > > can't query sysfs (e.g. /sys/devices/virtual/misc/hw_random/rng_current)
> > > > while the rng is reading.  This is a real problem when the rng is slow
> > > > or blocked (e.g. virtio_rng with qemu's default /dev/random backend).
> > > >
> > > > This doesn't help on its own (it leaves the current lock untouched);
> > > > it just adds a lock to protect the read function and the static
> > > > buffers, in preparation for the transition.
> > > >
> > > > Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
> > > > ---
> > > ...
> > > >
> > > > @@ -160,13 +166,14 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> > > >                         goto out_unlock;
> > > >                 }
> > > >
> > > > +               mutex_lock(&reading_mutex);
> > > 
> > > I think this breaks O_NONBLOCK: we have an hwrng core thread that
> > > constantly pumps the underlying rng for data; the thread takes the
> > > mutex and calls rng_get_data(), which blocks until the RNG responds.
> > > This means that even if the user specified O_NONBLOCK here, we'll be
> > > waiting until the [hwrng] thread releases reading_mutex before we can
> > > continue.
> > 
> > I think for 'virtio_rng' with O_NONBLOCK, 'rng_get_data' returns
> > without waiting for data, which lets the mutex be used by other
> > waiting threads, if any?
> > 
> > rng_dev_read
> >   rng_get_data
> >     virtio_read
> 
> As I said in the paragraph above, the code that potentially holds the
> mutex for a long time is the thread in the hwrng core: hwrng_fillfn(). As
> it calls rng_get_data() with the "wait" argument == 1, it may block while
> holding reading_mutex, which, in turn, will block rng_dev_read(), even
> if it was called with O_NONBLOCK.

Yes, 'hwrng_fillfn' does not consider O_NONBLOCK and can leave other tasks
waiting on the mutex. What if we pass zero for 'wait' in 'hwrng_fillfn' so
that 'rng_get_data' returns early when there is no data?
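
For reference, here is my reading of the read path with this patch applied
(abridged sketch of rng_dev_read() in drivers/char/hw_random/core.c; exact
context may differ): O_NONBLOCK only clears the 'wait' argument passed to
rng_get_data(), while the mutex_lock() itself always sleeps, and that sleep
is the wait described above.

		/* rng_dev_read(), abridged: this lock is taken
		 * unconditionally, so an O_NONBLOCK reader still sleeps
		 * here while hwrng_fillfn() holds reading_mutex. */
		mutex_lock(&reading_mutex);
		if (!data_avail) {
			/* O_NONBLOCK only shortens this call ... */
			bytes_read = rng_get_data(rng, rng_buffer,
				rng_buffer_size(),
				!(filp->f_flags & O_NONBLOCK));
		}
		...
		mutex_unlock(&reading_mutex);

On the fill-thread side the change would be something like: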

--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -403,7 +403,7 @@ static int hwrng_fillfn(void *unused)
                        break;
                mutex_lock(&reading_mutex);
                rc = rng_get_data(rng, rng_fillbuf,
-                                 rng_buffer_size(), 1);
+                                 rng_buffer_size(), 0);
                mutex_unlock(&reading_mutex);
                put_rng(rng);
                if (rc <= 0) {
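
For reference, virtio_rng's read callback already honors wait == 0
(abridged sketch of virtio_read() in drivers/char/hw_random/virtio-rng.c,
from memory, so details may differ across versions): it queues the buffer
to the host and returns immediately instead of sleeping on the completion,
so 'hwrng_fillfn' would release reading_mutex right away.

	static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
	{
		struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
		int ret;

		if (!vi->busy) {
			/* post our buffer to the host and remember it */
			vi->busy = true;
			reinit_completion(&vi->have_data);
			register_buffer(vi, buf, size);
		}

		if (!wait)
			return 0;	/* no data yet; do not sleep */

		ret = wait_for_completion_killable(&vi->have_data);
		if (ret < 0)
			return ret;

		vi->busy = false;
		return vi->data_avail;
	}

A zero return would then take the rc <= 0 path visible in the diff above.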

Thanks,
Pankaj

> 
> Thanks.
> 
> --
> Dmitry
> 
