Message-ID: <CAJD7tkYVJHsWoaEkTiTigJRzSNBrRSg3YVAL3Q5Q96cLSNJZrQ@mail.gmail.com>
Date: Mon, 18 Mar 2024 16:05:40 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Barry Song <21cnbao@...il.com>
Cc: hannes@...xchg.org, nphamcs@...il.com, akpm@...ux-foundation.org, 
	chrisl@...nel.org, v-songbaohua@...o.com, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, ira.weiny@...el.com, 
	syzbot+adbc983a1588b7805de3@...kaller.appspotmail.com
Subject: Re: [PATCH] mm: zswap: fix kernel BUG in sg_init_one

On Mon, Mar 18, 2024 at 4:00 PM Barry Song <21cnbao@...il.com> wrote:
>
> From: Barry Song <v-songbaohua@...o.com>
>
> sg_init_one() relies on linearly mapped low memory so that
> virt_to_page() can be used safely. Consequently, we have two choices:
> either use kmap_to_page() together with sg_set_page(), or copy the
> high memory contents into a temporary buffer in low memory. However,
> given the WARN_ON_ONCE added in commit ef6e06b2ef870 ("highmem: fix
> kmap_to_page() for kmap_local_page() addresses"), which specifically
> addresses high memory concerns, memcpy appears to be the only viable
> option.
>
> Reported-and-tested-by: syzbot+adbc983a1588b7805de3@...kaller.appspotmail.com
> Closes: https://lore.kernel.org/all/000000000000bbb3d80613f243a6@google.com/
> Fixes: 270700dd06ca ("mm/zswap: remove the memcpy if acomp is not sleepable")
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
>  mm/zswap.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 9dec853647c8..17bf6d87b274 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1080,7 +1080,8 @@ static void zswap_decompress(struct zswap_entry *entry, struct page *page)
>         mutex_lock(&acomp_ctx->mutex);
>
>         src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
> -       if (acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) {
> +       if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) ||
> +           !virt_addr_valid(src)) {


Would it be better to explicitly check is_kmap_addr() here? I am
particularly worried about hiding a bug where the returned address
from zpool_map_handle() is not a kmap address, but also not a valid
linear mapping address.

If we use is_kmap_addr() here, then the virt_addr_valid() check in
sg_init_one() will catch any non-kmap non-linear mapping addresses.
WDYT? Am I being paranoid? :)
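
For reference, this is roughly the check that sg_init_one() ends up
doing via sg_set_buf() (with CONFIG_DEBUG_SG enabled):

void sg_init_one(struct scatterlist *sg, const void *buf, unsigned int buflen)
{
	sg_init_table(sg, 1);
	sg_set_buf(sg, buf, buflen);
}

static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
			      unsigned int buflen)
{
#ifdef CONFIG_DEBUG_SG
	/* anything outside the kernel's linear mapping trips this */
	BUG_ON(!virt_addr_valid(buf));
#endif
	sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
}

So with is_kmap_addr() in zswap, that BUG_ON would still catch any
non-kmap, non-linear address that slips through.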

Also, I think a comment would be nice to explain when we need to fall
back to the temporary buffer, since we have two different cases now.
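
Something like the below is what I have in mind -- completely
untested, just to illustrate both points (the is_kmap_addr() check and
the comment wording):

	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
	/*
	 * Decompress from a bounce buffer when we cannot hand src to the
	 * scatterlist directly:
	 * 1) the mapping came from kmap_local_page() (highmem), so
	 *    virt_to_page()/sg_init_one() cannot be used on it, or
	 * 2) the compressor may sleep, but the zpool mapping must not be
	 *    held across a sleep.
	 */
	if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) ||
	    is_kmap_addr(src)) {
		memcpy(acomp_ctx->buffer, src, entry->length);
		src = acomp_ctx->buffer;
		zpool_unmap_handle(zpool, entry->handle);
	}

Feel free to word the comment however you prefer.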

>
>                 memcpy(acomp_ctx->buffer, src, entry->length);
>                 src = acomp_ctx->buffer;
>                 zpool_unmap_handle(zpool, entry->handle);
> @@ -1094,7 +1095,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct page *page)
>         BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
>         mutex_unlock(&acomp_ctx->mutex);
>
> -       if (!acomp_ctx->is_sleepable || zpool_can_sleep_mapped(zpool))
> +       if (src != acomp_ctx->buffer)
>                 zpool_unmap_handle(zpool, entry->handle);
>  }
>
> --
> 2.34.1
>
