Message-ID: <CAMZfGtUq72KULin=9onhf=7o5XwzR79E7QBdgg+ny1gYQGRvzw@mail.gmail.com>
Date: Fri, 25 Jun 2021 18:40:30 +0800
From: Muchun Song <songmuchun@...edance.com>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>, ngupta@...are.org,
senozhatsky@...omium.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [Phishing Risk] [External] [PATCH 2/3] mm/zsmalloc.c: combine two
atomic ops in zs_pool_dec_isolated()
On Fri, Jun 25, 2021 at 5:32 PM Miaohe Lin <linmiaohe@...wei.com> wrote:
>
> On 2021/6/25 16:46, Miaohe Lin wrote:
> > On 2021/6/25 15:29, Muchun Song wrote:
> >> On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <linmiaohe@...wei.com> wrote:
> >>>
> >>> On 2021/6/25 13:01, Muchun Song wrote:
> >>>> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <linmiaohe@...wei.com> wrote:
> >>>>>
> >>>>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() followed by
> >>>>> checking atomic_long_read() == 0. Use it to make the code more succinct.
> >>>>
> >>>> Actually, they are not equal. atomic_long_dec_and_test() implies a
> >>>> full memory barrier around it, but atomic_long_dec() and
> >>>> atomic_long_read() don't.
> >>>>
> >>>
> >>> Many thanks for the comment. They are indeed not completely equal, as you said.
> >>> What I mean is that they can do the same thing we want in this specific context.
> >>> Thanks again.
> >>
> >> I don't think so. Using the individual operations avoids the memory barrier.
> >> We would pay for the barrier if we used atomic_long_dec_and_test() here.
> >
> > The combination of atomic_long_dec and atomic_long_read is a rare usage
> > pattern and looks somewhat weird. I think it's worth doing this at the
> > cost of the barrier.
> >
>
> It seems there is a race between zs_pool_dec_isolated and zs_unregister_migration
> if the read of pool->destroying is reordered before the atomic_long_dec and
> atomic_long_read ops. So this memory barrier is necessary:
>
> zs_pool_dec_isolated                      zs_unregister_migration
>   pool->destroying != true
>                                             pool->destroying = true;
>                                             smp_mb();
>                                             wait_for_isolated_drain
>                                               wait_event with
>                                               atomic_long_read(&pool->isolated_pages) != 0
>   atomic_long_dec(&pool->isolated_pages);
>   atomic_long_read(&pool->isolated_pages) == 0
I am not familiar with zsmalloc, so I do not know whether the race
that you mentioned above exists. But if it does, the fix still does
not make sense to me. If a smp_mb needs to be inserted between
atomic_long_dec and atomic_long_read, you should insert
smp_mb__after_atomic instead of using atomic_long_dec_and_test,
because smp_mb__after_atomic can be optimized away on certain
architectures (e.g. x86_64).
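
To be concrete, if the race is real, what I have in mind is something
like the sketch below (untested, just to illustrate the idea; the
pairing smp_mb() is the one in zs_unregister_migration() from your
diagram above):

	static inline void zs_pool_dec_isolated(struct zs_pool *pool)
	{
		VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
		atomic_long_dec(&pool->isolated_pages);
		/*
		 * Order the decrement against the following read and the
		 * read of pool->destroying; pairs with the smp_mb() in
		 * zs_unregister_migration().
		 */
		smp_mb__after_atomic();
		if (atomic_long_read(&pool->isolated_pages) == 0 &&
		    pool->destroying)
			wake_up_all(&pool->migration_wait);
	}

On x86_64, for example, smp_mb__after_atomic() expands to a plain
compiler barrier, because atomic RMW instructions there already imply
full ordering.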
Thanks.
>
> Thus wake_up_all is missed.
> And the comment in zs_pool_dec_isolated() says:
> /*
> * There's no possibility of racing, since wait_for_isolated_drain()
> * checks the isolated count under &class->lock after enqueuing
> * on migration_wait.
> */
>
> But I found that &class->lock is in fact not acquired around wait_for_isolated_drain().
> So I think the above race is possible. Does this make sense to you?
> Thanks.
>
> >>
> >>>
> >>>> RMW operations that have a return value are equivalent to the following:
> >>>>
> >>>> smp_mb__before_atomic()
> >>>> non-RMW operations or RMW operations that have no return value
> >>>> smp_mb__after_atomic()
> >>>>
> >>>> Thanks.
> >>>>
> >>>>>
> >>>>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> >>>>> ---
> >>>>> mm/zsmalloc.c | 3 +--
> >>>>> 1 file changed, 1 insertion(+), 2 deletions(-)
> >>>>>
> >>>>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >>>>> index 1476289b619f..0b4b23740d78 100644
> >>>>> --- a/mm/zsmalloc.c
> >>>>> +++ b/mm/zsmalloc.c
> >>>>> @@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
> >>>>> static inline void zs_pool_dec_isolated(struct zs_pool *pool)
> >>>>> {
> >>>>> VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
> >>>>> - atomic_long_dec(&pool->isolated_pages);
> >>>>> /*
> >>>>> * There's no possibility of racing, since wait_for_isolated_drain()
> >>>>> * checks the isolated count under &class->lock after enqueuing
> >>>>> * on migration_wait.
> >>>>> */
> >>>>> - if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
> >>>>> + if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
> >>>>> wake_up_all(&pool->migration_wait);
> >>>>> }
> >>>>>
> >>>>> --
> >>>>> 2.23.0
> >>>>>
> >>>> .
> >>>>
> >>>
> >> .
> >>
> >
>