Message-ID: <20190409085607-mutt-send-email-mst@kernel.org>
Date:   Tue, 9 Apr 2019 09:14:16 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        Christoph Hellwig <hch@...radead.org>,
        James Bottomley <James.Bottomley@...senpartnership.com>,
        Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH net] vhost: flush dcache page when logging dirty pages

On Tue, Apr 09, 2019 at 12:16:47PM +0800, Jason Wang wrote:
> We set the dirty bit by setting up kmaps and accessing the pages through
> kernel virtual addresses; this may result in aliases in virtually tagged
> caches that require a dcache flush afterwards.
> 
> Cc: Christoph Hellwig <hch@...radead.org>
> Cc: James Bottomley <James.Bottomley@...senPartnership.com>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Fixes: 3a4d5c94e9593 ("vhost_net: a kernel-level virtio server")

This is like saying "everyone with vhost needs this".
In practice it might only affect some architectures.
Which ones? You want to Cc the relevant maintainers
who understand this...

> Signed-off-by: Jason Wang <jasowang@...hat.com>

I am not sure this is a good idea.
The region in question is supposed to be accessed
by userspace at the same time, through atomic operations.

How do we know userspace didn't access it just before?

Is that an issue at all given we use
atomics for access? Documentation/core-api/cachetlb.rst does
not mention atomics.
Which architectures are affected?
Assuming atomics actually do need a flush, then don't we need
a flush in the other direction too? How are atomics
supposed to work at all?


I really think we need new APIs along the lines of
set_bit_to_user.
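For illustration, such an API might bundle the user-page pinning, the atomic
bit set, and whatever flush the architecture requires into one helper. The
sketch below reuses the body of vhost's existing static set_bit_to_user()
from the quoted hunk; the placement of the flush and the exact flush call are
assumptions, not an existing kernel API:

```c
/*
 * Hypothetical generic helper, sketched after vhost's static
 * set_bit_to_user(). Not a real kernel API; the flush call and its
 * placement are what is being discussed in this thread.
 */
static int set_bit_to_user(int nr, void __user *addr)
{
	unsigned long log = (unsigned long)addr;
	struct page *page;
	void *base;
	int bit = nr + (log % PAGE_SIZE) * 8;
	int r;

	/* pin the user page so we can map and dirty it */
	r = get_user_pages_fast(log, 1, 1, &page);
	if (r < 0)
		return r;
	BUG_ON(r != 1);
	base = kmap_atomic(page);
	set_bit(bit, base);
	/* flush the kernel alias while the mapping is still live,
	 * on architectures with virtually tagged caches */
	flush_kernel_dcache_page(page);
	kunmap_atomic(base);
	set_page_dirty_lock(page);
	put_page(page);
	return 0;
}
```

An arch-provided version of such a helper could skip the flush entirely on
physically tagged caches.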

> ---
>  drivers/vhost/vhost.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 351af88231ad..34a1cedbc5ba 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -1711,6 +1711,7 @@ static int set_bit_to_user(int nr, void __user *addr)
>  	base = kmap_atomic(page);
>  	set_bit(bit, base);
>  	kunmap_atomic(base);
> +	flush_dcache_page(page);
>  	set_page_dirty_lock(page);
>  	put_page(page);
>  	return 0;

Ignoring the question of whether this actually helps, I doubt
flush_dcache_page is appropriate here.  Pls take a look at
Documentation/core-api/cachetlb.rst as well as the actual
implementation.

I think you meant flush_kernel_dcache_page, and IIUC it must happen
before kunmap, not after (while you still have the va mapped).
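Concretely, the ordering being suggested would look like this (a sketch of
the quoted hunk with the flush moved inside the mapping; whether this flush
is needed at all is the open question above):

```c
	base = kmap_atomic(page);
	set_bit(bit, base);
	/* flush the kernel-side alias while the kmap va is still valid */
	flush_kernel_dcache_page(page);
	kunmap_atomic(base);
	set_page_dirty_lock(page);
	put_page(page);
```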

> -- 
> 2.19.1
