Message-ID: <20170926165241.GB14833@dtor-ws>
Date:   Tue, 26 Sep 2017 09:52:41 -0700
From:   Dmitry Torokhov <dmitry.torokhov@...il.com>
To:     Pankaj Gupta <pagupta@...hat.com>
Cc:     Amos Kong <akong@...hat.com>, linux-crypto@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Rusty Russell <rusty@...tcorp.com.au>, kvm@...r.kernel.org,
        Michael Buesch <m@...s.ch>, Matt Mackall <mpm@...enic.com>,
        amit shah <amit.shah@...hat.com>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 REPOST 1/6] hw_random: place mutex around read
 functions and buffers.

On Tue, Sep 26, 2017 at 02:36:57AM -0400, Pankaj Gupta wrote:
> 
> > 
> > A bit late to a party, but:
> > 
> > On Mon, Dec 8, 2014 at 12:50 AM, Amos Kong <akong@...hat.com> wrote:
> > > From: Rusty Russell <rusty@...tcorp.com.au>
> > >
> > > There's currently a big lock around everything, and it means that we
> > > can't query sysfs (eg /sys/devices/virtual/misc/hw_random/rng_current)
> > > while the rng is reading.  This is a real problem when the rng is slow,
> > > or blocked (eg. virtio_rng with qemu's default /dev/random backend)
> > >
> > > This doesn't help (it leaves the current lock untouched), just adds a
> > > lock to protect the read function and the static buffers, in preparation
> > > for transition.
> > >
> > > Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
> > > ---
> > ...
> > >
> > > @@ -160,13 +166,14 @@ static ssize_t rng_dev_read(struct file *filp, char
> > > __user *buf,
> > >                         goto out_unlock;
> > >                 }
> > >
> > > +               mutex_lock(&reading_mutex);
> > 
> > I think this breaks O_NONBLOCK: we have an hwrng core thread that
> > constantly pumps the underlying rng for data; the thread takes the mutex
> > and calls rng_get_data(), which blocks until the RNG responds. This means
> > that even if the user specified O_NONBLOCK here, we'll be waiting until
> > the [hwrng] thread releases reading_mutex before we can continue.
> 
> I think for 'virtio_rng' with O_NONBLOCK, 'rng_get_data' returns
> without waiting for data, which lets the mutex be taken by any other
> threads that are waiting?
> 
> rng_dev_read
>   rng_get_data
>     virtio_read

As I said in the paragraph above, the code that potentially holds the
mutex for a long time is the thread in the hwrng core: hwrng_fillfn(). As
it calls rng_get_data() with the "wait" argument == 1, it may block while
holding reading_mutex, which, in turn, will block rng_dev_read(), even
if it was called with O_NONBLOCK.

Thanks.

-- 
Dmitry
