Date:   Thu, 23 Sep 2021 03:04:23 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Laurent Vivier <lvivier@...hat.com>
Cc:     linux-kernel@...r.kernel.org,
        Alexander Potapenko <glider@...gle.com>,
        linux-crypto@...r.kernel.org, Dmitriy Vyukov <dvyukov@...gle.com>,
        rusty@...tcorp.com.au, amit@...nel.org, akong@...hat.com,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Matt Mackall <mpm@...enic.com>,
        virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH 1/4] hwrng: virtio - add an internal buffer

On Thu, Sep 23, 2021 at 08:26:06AM +0200, Laurent Vivier wrote:
> On 22/09/2021 21:02, Michael S. Tsirkin wrote:
> > On Wed, Sep 22, 2021 at 07:09:00PM +0200, Laurent Vivier wrote:
> > > The hwrng core uses two buffers that can get mixed up in the
> > > virtio-rng queue.
> > > 
> > > If a buffer is provided with wait=0, it is enqueued in the
> > > virtio-rng queue but left unused by the caller.
> > > On the next call, the core provides another buffer, but the
> > > first one is filled instead and the new one is queued.
> > > The caller then reads from the new buffer, which has not been
> > > updated, and the data in the first buffer are lost.
> > > 
> > > To avoid this mix-up, virtio-rng needs to use its own internal
> > > buffer, at the cost of a data copy to the caller's buffer.
> > > 
> > > Signed-off-by: Laurent Vivier <lvivier@...hat.com>
> > > ---
> > >   drivers/char/hw_random/virtio-rng.c | 43 ++++++++++++++++++++++-------
> > >   1 file changed, 33 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
> > > index a90001e02bf7..208c547dcac1 100644
> > > --- a/drivers/char/hw_random/virtio-rng.c
> > > +++ b/drivers/char/hw_random/virtio-rng.c
> > > @@ -18,13 +18,20 @@ static DEFINE_IDA(rng_index_ida);
> > >   struct virtrng_info {
> > >   	struct hwrng hwrng;
> > >   	struct virtqueue *vq;
> > > -	struct completion have_data;
> > >   	char name[25];
> > > -	unsigned int data_avail;
> > >   	int index;
> > >   	bool busy;
> > >   	bool hwrng_register_done;
> > >   	bool hwrng_removed;
> > > +	/* data transfer */
> > > +	struct completion have_data;
> > > +	unsigned int data_avail;
> > > +	/* minimal size returned by rng_buffer_size() */
> > > +#if SMP_CACHE_BYTES < 32
> > > +	u8 data[32];
> > > +#else
> > > +	u8 data[SMP_CACHE_BYTES];
> > > +#endif
> > 
> > Let's move this logic to a macro in hw_random.h?
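
Something like this would do, I guess (a rough sketch; RNG_BUFFER_SIZE
is a hypothetical name, there is no such definition in hw_random.h
today):

	/* smallest buffer the core may hand us, per rng_buffer_size() */
	#define RNG_BUFFER_SIZE (SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES)

so the driver could simply declare "u8 data[RNG_BUFFER_SIZE];".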
> > 
> > >   };
> > >   static void random_recv_done(struct virtqueue *vq)
> > > @@ -39,14 +46,14 @@ static void random_recv_done(struct virtqueue *vq)
> > >   }
> > >   /* The host will fill any buffer we give it with sweet, sweet randomness. */
> > > -static void register_buffer(struct virtrng_info *vi, u8 *buf, size_t size)
> > > +static void register_buffer(struct virtrng_info *vi)
> > >   {
> > >   	struct scatterlist sg;
> > > -	sg_init_one(&sg, buf, size);
> > > +	sg_init_one(&sg, vi->data, sizeof(vi->data));
> > 
> > Note that add_early_randomness requests less:
> >          size_t size = min_t(size_t, 16, rng_buffer_size());
> > 
> > maybe track how much was requested and grow up to sizeof(data)?
> 
> I think this problem is handled by PATCH 3/4, as we reuse the unused data in the buffer.

The issue I'm pointing out is that we are requesting too much
entropy from the host - more than the guest needs.
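
Something along these lines, maybe (an untested sketch on top of this
patch; "request_size" is a size_t field I'm making up here, it does
not exist in struct virtrng_info):

	static void register_buffer(struct virtrng_info *vi, size_t size)
	{
		struct scatterlist sg;

		/* grow the host request as needed, but never past the buffer */
		vi->request_size = min(sizeof(vi->data),
				       max(size, vi->request_size));
		sg_init_one(&sg, vi->data, vi->request_size);

		/* There should always be room for one buffer. */
		virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);

		virtqueue_kick(vi->vq);
	}

That way the 16 bytes add_early_randomness asks for don't turn into a
full SMP_CACHE_BYTES request to the host.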

> > 
> > >   	/* There should always be room for one buffer. */
> > > -	virtqueue_add_inbuf(vi->vq, &sg, 1, buf, GFP_KERNEL);
> > > +	virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);
> > 
> > 
> > BTW this is no longer true if the DMA API is in use ... not easy to
> > fix; I think some changes to the virtio API to allow pre-mapping
> > s/g lists for DMA might be needed ...
> 
> Is there something I can do here?
> 
> Thanks,
> Laurent

We can let it be for now, but down the road I think we should
support a way to pre-map memory for DMA.
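
Roughly, something like this (completely hypothetical -- neither the
premapped add_inbuf variant nor its signature exist in the virtio API
today; the point is only that the buffer gets mapped once, at probe
time, instead of on every virtqueue_add_inbuf):

	/* map the buffer for DMA once, when the device is probed */
	vi->data_dma = dma_map_single(vdev->dev.parent, vi->data,
				      sizeof(vi->data), DMA_FROM_DEVICE);

	/* hypothetical variant taking an already-mapped address */
	virtqueue_add_inbuf_premapped(vi->vq, vi->data_dma,
				      sizeof(vi->data), vi->data,
				      GFP_KERNEL);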

-- 
MST
