Message-ID: <20211005075433-mutt-send-email-mst@kernel.org>
Date: Tue, 5 Oct 2021 07:55:02 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Laurent Vivier <lvivier@...hat.com>
Cc: linux-kernel@...r.kernel.org,
Alexander Potapenko <glider@...gle.com>,
linux-crypto@...r.kernel.org, Dmitriy Vyukov <dvyukov@...gle.com>,
rusty@...tcorp.com.au, amit@...nel.org, akong@...hat.com,
Herbert Xu <herbert@...dor.apana.org.au>,
Matt Mackall <mpm@...enic.com>,
virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH 1/4] hwrng: virtio - add an internal buffer
On Thu, Sep 23, 2021 at 09:34:18AM +0200, Laurent Vivier wrote:
> On 23/09/2021 09:04, Michael S. Tsirkin wrote:
> > On Thu, Sep 23, 2021 at 08:26:06AM +0200, Laurent Vivier wrote:
> > > On 22/09/2021 21:02, Michael S. Tsirkin wrote:
> > > > On Wed, Sep 22, 2021 at 07:09:00PM +0200, Laurent Vivier wrote:
> > > > > The hwrng core uses two buffers that can get mixed up in the
> > > > > virtio-rng queue.
> > > > >
> > > > > If a buffer is provided with wait=0, it is enqueued in the
> > > > > virtio-rng queue but left unused by the caller.
> > > > > On the next call, the core provides another buffer, but the
> > > > > first one is filled instead while the new one is merely queued.
> > > > > The caller then reads the data from the new buffer, which has
> > > > > not been updated, and the data in the first one is lost.
> > > > >
> > > > > To avoid this mix-up, virtio-rng needs to use its own internal
> > > > > buffer, at the cost of a data copy to the caller's buffer.
> > > > >
> > > > > Signed-off-by: Laurent Vivier <lvivier@...hat.com>
> > > > > ---
> > > > > drivers/char/hw_random/virtio-rng.c | 43 ++++++++++++++++++++++-------
> > > > > 1 file changed, 33 insertions(+), 10 deletions(-)
> > > > >
> > > > > diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
> > > > > index a90001e02bf7..208c547dcac1 100644
> > > > > --- a/drivers/char/hw_random/virtio-rng.c
> > > > > +++ b/drivers/char/hw_random/virtio-rng.c
> > > > > @@ -18,13 +18,20 @@ static DEFINE_IDA(rng_index_ida);
> > > > > struct virtrng_info {
> > > > > struct hwrng hwrng;
> > > > > struct virtqueue *vq;
> > > > > - struct completion have_data;
> > > > > char name[25];
> > > > > - unsigned int data_avail;
> > > > > int index;
> > > > > bool busy;
> > > > > bool hwrng_register_done;
> > > > > bool hwrng_removed;
> > > > > + /* data transfer */
> > > > > + struct completion have_data;
> > > > > + unsigned int data_avail;
> > > > > + /* minimal size returned by rng_buffer_size() */
> > > > > +#if SMP_CACHE_BYTES < 32
> > > > > + u8 data[32];
> > > > > +#else
> > > > > + u8 data[SMP_CACHE_BYTES];
> > > > > +#endif
> > > >
> > > > Let's move this logic to a macro in hw_random.h?
> > > >
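> > > > Something like this, as a rough sketch (macro names are just a
> > > > suggestion):
> > > >
> > > > 	/* in include/linux/hw_random.h */
> > > >
> > > > 	/* smallest size rng_buffer_size() can return */
> > > > 	#define RNG_BUFFER_MIN_SIZE	32
> > > >
> > > > 	#if SMP_CACHE_BYTES < RNG_BUFFER_MIN_SIZE
> > > > 	#define RNG_BUFFER_SIZE		RNG_BUFFER_MIN_SIZE
> > > > 	#else
> > > > 	#define RNG_BUFFER_SIZE		SMP_CACHE_BYTES
> > > > 	#endif
> > > >
> > > > and then here just:
> > > >
> > > > 	u8 data[RNG_BUFFER_SIZE];
> > > >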
> > > > > };
> > > > > static void random_recv_done(struct virtqueue *vq)
> > > > > @@ -39,14 +46,14 @@ static void random_recv_done(struct virtqueue *vq)
> > > > > }
> > > > > /* The host will fill any buffer we give it with sweet, sweet randomness. */
> > > > > -static void register_buffer(struct virtrng_info *vi, u8 *buf, size_t size)
> > > > > +static void register_buffer(struct virtrng_info *vi)
> > > > > {
> > > > > struct scatterlist sg;
> > > > > - sg_init_one(&sg, buf, size);
> > > > > + sg_init_one(&sg, vi->data, sizeof(vi->data));
> > > >
> > > > Note that add_early_randomness requests less:
> > > > size_t size = min_t(size_t, 16, rng_buffer_size());
> > > >
> > > > maybe track how much was requested and grow up to sizeof(data)?
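> > > > E.g. something like this (untested; assumes register_buffer still
> > > > sees the caller's size, and request_size would be a new size_t
> > > > field in struct virtrng_info):
> > > >
> > > > 	/* remember the largest size requested so far,
> > > > 	 * capped at the internal buffer size */
> > > > 	vi->request_size = max(vi->request_size,
> > > > 			       min(size, sizeof(vi->data)));
> > > > 	sg_init_one(&sg, vi->data, vi->request_size);
> > > >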
> > >
> > > I think this problem is handled by PATCH 3/4, as we reuse the unused
> > > data in the buffer.
> >
> > the issue I'm pointing out is that we are requesting too much
> > entropy from the host - more than the guest needs.
>
> Yes, the guest asks for 16 bytes, but we request SMP_CACHE_BYTES (64 on
> x86_64), and these 16 bytes are used with add_device_randomness(). With the
> following patches, the remaining 48 bytes are used rapidly by the hwrng
> kthread or by the next virtio_read.
>
> If there is not enough entropy, the call is simply ignored because wait=0.
>
> At this patch level, the call is always simply ignored (because wait=0),
> and the data requested here is used by the next read, which always asks
> for SMP_CACHE_BYTES bytes.
>
> Moreover, in PATCH 4/4 we always have a pending request of size
> SMP_CACHE_BYTES, so the driver always asks for a block of this size and
> the guest takes what it needs.
>
> Originally I used a 16-byte block, but performance was divided by 4.
>
> Do you propose something else?
>
> Thanks,
> Laurent
Maybe min(size, sizeof(vi->data))?
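
I.e. something like the below (untested sketch, assuming register_buffer
still gets the requested size passed in):

	static void register_buffer(struct virtrng_info *vi, size_t size)
	{
		struct scatterlist sg;

		/* never request more than the caller asked for,
		 * capped at the internal buffer size */
		sg_init_one(&sg, vi->data, min(size, sizeof(vi->data)));

		/* There should always be room for one buffer. */
		virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);

		virtqueue_kick(vi->vq);
	}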
--
MST