Message-ID: <19f56d99-279a-5a8b-39a7-1017a3cb4bdd@redhat.com>
Date:   Wed, 12 Apr 2017 16:03:13 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>, linux-kernel@...r.kernel.org
Cc:     brouer@...hat.com
Subject: Re: [PATCH 1/3] ptr_ring: batch ring zeroing



On 2017-04-07 13:49, Michael S. Tsirkin wrote:
> A known weakness in the ptr_ring design is that it does not handle well
> the situation when the ring is almost full: as entries are consumed they
> are immediately used again by the producer, so consumer and producer are
> writing to a shared cache line.
>
> To fix this, add batching to consume calls: as entries are
> consumed, do not write NULL into the ring until we get
> a multiple (2x in the current implementation) of cache lines
> away from the producer. At that point, write them all out.
>
> We do the write-out in reverse order to keep the
> producer from sharing a cache line with the consumer for as long
> as possible.
>
> Write-out also triggers when the ring wraps around - there is
> no special reason to do this, but it helps keep the code
> a bit simpler.
>
> What should we do if getting away from the producer by 2 cache lines
> would mean we are keeping the ring more than half empty?
> Maybe we should reduce the batching in this case;
> the current patch simply reduces the batching.
>
> Notes:
> - it is no longer true that a call to consume guarantees
>    that the following call to produce will succeed.
>    No users seem to assume that.
> - batching can also, in theory, reduce the signalling rate:
>    users that would previously send interrupts to the producer
>    to wake it up after consuming each entry would now only
>    need to do this once per batch.
>    This would be easy to do by returning a flag to the caller.
>    No users seem to do signalling on consume yet, so this has
>    not been implemented.
>
> Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> ---
>
> Jason, I am curious whether the following gives you some of
> the performance boost that you see with vhost batching
> patches. Is vhost batching on top still helpful?
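
For reference, the deferred zeroing described in the quoted log amounts to
roughly the following. This is a self-contained, single-threaded user-space
sketch of the idea only, not the actual patch; the ring layout, field names
(consumer_head/consumer_tail) and the batch computation below are
illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

#define CACHE_LINE 64

/* Toy model of a ptr_ring consumer that defers writing NULL back into
 * consumed slots until a whole batch has been consumed.
 */
struct ring {
	void **queue;
	int size;		/* total number of slots */
	int batch;		/* consumed slots to zero out in one go */
	int consumer_head;	/* next slot to consume */
	int consumer_tail;	/* oldest consumed slot not yet zeroed */
	int producer;		/* next slot to produce into */
};

static void ring_init(struct ring *r, int size)
{
	r->queue = calloc(size, sizeof(*r->queue));
	r->size = size;
	/* roughly two cache lines worth of pointers, but never more than
	 * half the ring so a small ring still leaves room for the producer */
	r->batch = 2 * CACHE_LINE / sizeof(void *);
	if (r->batch > size / 2)
		r->batch = size / 2 ? size / 2 : 1;
	r->consumer_head = r->consumer_tail = r->producer = 0;
}

static int ring_produce(struct ring *r, void *ptr)
{
	if (r->queue[r->producer])
		return -1;	/* slot not yet zeroed: ring looks full */
	r->queue[r->producer++] = ptr;
	if (r->producer >= r->size)
		r->producer = 0;
	return 0;
}

static void *ring_consume(struct ring *r)
{
	void *ptr = r->queue[r->consumer_head];

	if (!ptr)
		return NULL;	/* ring empty */
	r->consumer_head++;

	/* Write the NULLs out only once we are a full batch away from the
	 * producer, or when the consumer index is about to wrap. */
	if (r->consumer_head - r->consumer_tail >= r->batch ||
	    r->consumer_head >= r->size) {
		int head = r->consumer_head - 1;

		/* Reverse order: the slot closest to the producer is written
		 * last, so the shared cache line stays quiet the longest. */
		while (head >= r->consumer_tail)
			r->queue[head--] = NULL;
		r->consumer_tail = r->consumer_head;
		if (r->consumer_head >= r->size)
			r->consumer_head = r->consumer_tail = 0;
	}
	return ptr;
}

int main(void)
{
	struct ring r;
	int i;

	ring_init(&r, 256);
	for (i = 0; i < 1000; i++) {
		if (ring_produce(&r, &r /* any non-NULL pointer */))
			break;
		if (!ring_consume(&r))
			break;
	}
	printf("processed %d entries\n", i);
	free(r.queue);
	return 0;
}

The point of the reverse-order flush is that the slot closest to the producer
is the last one written, so the cache line the producer is polling is touched
as late as possible. The sketch also shows the semantics change from the
notes above: ring_produce() can fail right after a successful ring_consume(),
because the consumed slot may not have been zeroed yet.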

The patch looks good to me; I will run a test to see whether the vhost
batching patches still help on top of it.
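
On the signalling note in the log: if consume reported whether it just
flushed a batch, a caller that signals the producer could do so once per
batch instead of once per entry. A purely illustrative shape, building on
the user-space sketch above (the patch itself does not add such an API):

/* Hypothetical helper: report whether this consume call flushed a batch
 * of NULLs back into the ring. After a flush, consumer_tail has caught up
 * with consumer_head, so comparing the two is enough in this model.
 */
static void *ring_consume_signal(struct ring *r, int *flushed)
{
	void *ptr = ring_consume(r);

	*flushed = ptr && r->consumer_head == r->consumer_tail;
	return ptr;
}

A consumer doing wakeups would then only kick the producer when *flushed
is set.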

Thanks
