Message-ID: <CACGkMEs6cW7LdpCdWQnX4Pif2gGOu=f3bjNeYQ6MVcdQe=X--Q@mail.gmail.com>
Date:   Tue, 14 Mar 2023 11:56:08 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Stefano Garzarella <sgarzare@...hat.com>
Cc:     virtualization@...ts.linux-foundation.org,
        Andrey Zhadchenko <andrey.zhadchenko@...tuozzo.com>,
        eperezma@...hat.com, netdev@...r.kernel.org, stefanha@...hat.com,
        linux-kernel@...r.kernel.org,
        "Michael S. Tsirkin" <mst@...hat.com>, kvm@...r.kernel.org
Subject: Re: [PATCH v2 3/8] vringh: replace kmap_atomic() with kmap_local_page()

On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@...hat.com> wrote:
>
> kmap_atomic() is deprecated in favor of kmap_local_page().

It would be better to mention the commit or the code that introduced this deprecation.

>
> With kmap_local_page() the mappings are per thread, CPU local, can take
> page-faults, and can be called from any context (including interrupts).
> Furthermore, the tasks can be preempted and, when they are scheduled to
> run again, the kernel virtual addresses are restored and still valid.
>
> kmap_atomic() is implemented like a kmap_local_page() which also disables
> page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> otherwise it only disables migration).
>
> The code within the mappings/un-mappings in getu16_iotlb() and
> putu16_iotlb() doesn't depend on the above-mentioned side effects of
> kmap_atomic(),

Note that we used to use a spinlock to protect the simulators (at least
until patch 7, so we probably need to re-order the patches), so I think
this is only valid when:

The vringh IOTLB helpers are not used in atomic context (e.g. under a
spinlock or in interrupt context).

If yes, should we document this? (Or should we introduce a boolean that
says whether an IOTLB variant can be used in an atomic context?)
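
(Purely as a hypothetical sketch of that idea, with an invented field
name, not existing vringh code:

	struct vringh {
		...
		/*
		 * Hypothetical: set by the vringh user to declare that the
		 * IOTLB accessors may be called from atomic context (e.g.
		 * under a spinlock), so the helpers know what they can
		 * assume about their calling context.
		 */
		bool iotlb_atomic;
	};
)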

Thanks

> so that a mere replacement of the old API with the new one
> is all that is required (i.e., there is no need to explicitly add calls
> to pagefault_disable() and/or preempt_disable()).
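
As an aside, just to spell out what that would otherwise mean: if the
code did rely on the old implicit behaviour, the open-coded equivalent
of e.g. getu16_iotlb() with the new API would look roughly like the
sketch below (mirroring what kmap_atomic()/kunmap_atomic() used to do
internally):

	preempt_disable();	/* kmap_atomic() disabled preemption;
				 * on PREEMPT_RT it only disables
				 * migration */
	pagefault_disable();	/* ... and page faults */
	kaddr = kmap_local_page(iov.bv_page);
	from = kaddr + iov.bv_offset;
	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
	kunmap_local(kaddr);
	pagefault_enable();
	preempt_enable();
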
>
> Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
> ---
>
> Notes:
>     v2:
>     - added this patch since checkpatch.pl complained about the deprecation
>       of kmap_atomic() touched by the next patch
>
>  drivers/vhost/vringh.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> index a1e27da54481..0ba3ef809e48 100644
> --- a/drivers/vhost/vringh.c
> +++ b/drivers/vhost/vringh.c
> @@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh *vrh,
>         if (ret < 0)
>                 return ret;
>
> -       kaddr = kmap_atomic(iov.bv_page);
> +       kaddr = kmap_local_page(iov.bv_page);
>         from = kaddr + iov.bv_offset;
>         *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
> -       kunmap_atomic(kaddr);
> +       kunmap_local(kaddr);
>
>         return 0;
>  }
> @@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh *vrh,
>         if (ret < 0)
>                 return ret;
>
> -       kaddr = kmap_atomic(iov.bv_page);
> +       kaddr = kmap_local_page(iov.bv_page);
>         to = kaddr + iov.bv_offset;
>         WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
> -       kunmap_atomic(kaddr);
> +       kunmap_local(kaddr);
>
>         return 0;
>  }
> --
> 2.39.2
>
