Message-ID: <CAF8kJuNkwGNw=Nnu1MVOewKiqT0ahj5DkKV_Z4VDqSpu+v=vmw@mail.gmail.com>
Date: Wed, 24 Jan 2024 21:28:50 -0800
From: Chris Li <chriscli@...gle.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Andrew Morton <akpm@...ux-foundation.org>, 
	Nhat Pham <nphamcs@...il.com>, Chengming Zhou <zhouchengming@...edance.com>, 
	Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()

Hi Yosry,

On Tue, Jan 23, 2024 at 10:58 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>

> >
> > Thanks for the great analysis, I missed the swapoff/swapon race myself :)
> >
> > The first solution that came to mind for me was refcounting the zswap
> > tree with RCU with percpu-refcount, similar to how cgroup refs are
> > handled (init in zswap_swapon() and kill in zswap_swapoff()). I think
> > the percpu-refcount may be an overkill in terms of memory usage
> > though. I think we can still do our own refcounting with RCU, but it
> > may be more complicated.
>
> FWIW, I was able to reproduce the problem in a vm with the following
> kernel diff:

Thanks for the great find.

I was worried about the use-after-free situation in this email:

https://lore.kernel.org/lkml/CAF8kJuOvOmn7wmKxoqpqSEk4gk63NtQG1Wc+Q0e9FZ9OFiUG6g@mail.gmail.com/

Glad you were able to find a reproducible case. That is one of the
reasons I changed the free to invalidating the entries in my xarray patch.

I think the swapoff code should remove the entries from the tree, then
wait for each zswap entry's refcount to drop to zero before freeing it.

That way you shouldn't need to refcount the tree: the tree's lifetime is
effectively covered by the combined refcounts of all its zswap entries.
A refcount on the tree itself would be a very high-contention counter.

Chris

> diff --git a/mm/zswap.c b/mm/zswap.c
> index 78df16d307aa8..6580a4be52a18 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -880,6 +880,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>          */
>         spin_unlock(lock);
>
> +       pr_info("Sleeping in shrink_memcg_cb() before spin_lock(&tree->lock)\n");
> +       schedule_timeout_uninterruptible(msecs_to_jiffies(10 * 1000));
> +
>         /* Check for invalidate() race */
>         spin_lock(&tree->lock);
>         if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
>
> This basically expands the race window to 10 seconds. I have a
> reproducer script attached that utilizes the zswap shrinker (which
> makes this much easier to reproduce btw). The steps are as follows:
> - Compile the kernel with the above diff, and both ZRAM & KASAN enabled.
> - In one terminal, run zswap_wb_swapoff_race.sh.
> - In a different terminal, once the "Sleeping in shrink_memcg_cb()"
> message is logged, run "swapoff /dev/zram0".
> - In the first terminal, once the 10 seconds elapse, I get a UAF BUG
> from KASAN (log attached as well).
