Message-ID: <369186187.14365871.1506407817157.JavaMail.zimbra@redhat.com>
Date:   Tue, 26 Sep 2017 02:36:57 -0400 (EDT)
From:   Pankaj Gupta <pagupta@...hat.com>
To:     Dmitry Torokhov <dmitry.torokhov@...il.com>
Cc:     Amos Kong <akong@...hat.com>, linux-crypto@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Rusty Russell <rusty@...tcorp.com.au>, kvm@...r.kernel.org,
        Michael Buesch <m@...s.ch>, Matt Mackall <mpm@...enic.com>,
        amit shah <amit.shah@...hat.com>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 REPOST 1/6] hw_random: place mutex around read
 functions and buffers.


> 
> A bit late to a party, but:
> 
> On Mon, Dec 8, 2014 at 12:50 AM, Amos Kong <akong@...hat.com> wrote:
> > From: Rusty Russell <rusty@...tcorp.com.au>
> >
> > There's currently a big lock around everything, and it means that we
> > can't query sysfs (eg /sys/devices/virtual/misc/hw_random/rng_current)
> > while the rng is reading.  This is a real problem when the rng is slow,
> > or blocked (eg. virtio_rng with qemu's default /dev/random backend)
> >
> > This doesn't help (it leaves the current lock untouched), just adds a
> > lock to protect the read function and the static buffers, in preparation
> > for transition.
> >
> > Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
> > ---
> ...
> >
> > @@ -160,13 +166,14 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> >                         goto out_unlock;
> >                 }
> >
> > +               mutex_lock(&reading_mutex);
> 
> I think this breaks O_NONBLOCK: we have a hwrng core thread that
> constantly pumps the underlying rng for data; the thread takes the
> mutex and calls rng_get_data(), which blocks until the RNG responds.
> This means that even if the user specified O_NONBLOCK here, we'll be
> waiting until the [hwrng] thread releases reading_mutex before we can
> continue.

I think for 'virtio_rng' with O_NONBLOCK, 'rng_get_data' returns
without waiting for data, which lets the mutex be taken by other
waiting threads, if any?

rng_dev_read
  rng_get_data
    virtio_read
  
static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
{
        int ret;
        struct virtrng_info *vi = (struct virtrng_info *)rng->priv;

        if (vi->hwrng_removed)
                return -ENODEV;

        if (!vi->busy) {
                vi->busy = true;
                init_completion(&vi->have_data);
                register_buffer(vi, buf, size);
        }

        /* non-blocking path: buffer is registered, return without sleeping */
        if (!wait)
                return 0;

        ret = wait_for_completion_killable(&vi->have_data);
        if (ret < 0)
                return ret;

        vi->busy = false;

        return vi->data_avail;
}

> 
> >                 if (!data_avail) {
> >                         bytes_read = rng_get_data(current_rng, rng_buffer,
> >                                 rng_buffer_size(),
> >                                 !(filp->f_flags & O_NONBLOCK));
> >                         if (bytes_read < 0) {
> >                                 err = bytes_read;
> > -                               goto out_unlock;
> > +                               goto out_unlock_reading;
> >                         }
> >                         data_avail = bytes_read;
> >                 }
> 
> Thanks.
> 
> --
> Dmitry
> 
