Message-ID: <87v8v5kfem.fsf@nvdebian.thelocal>
Date:   Tue, 19 Apr 2022 11:51:58 +1000
From:   Alistair Popple <apopple@...dia.com>
To:     Jason Gunthorpe <jgg@...pe.ca>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, christian.koenig@....com,
        jhubbard@...dia.com, rcampbell@...dia.com
Subject: Re: [PATCH] mm/mmu_notifier.c: Fix race in mmu_interval_notifier_remove()

Jason Gunthorpe <jgg@...pe.ca> writes:

> On Thu, Apr 14, 2022 at 01:18:10PM +1000, Alistair Popple wrote:
>> In some cases it is possible for mmu_interval_notifier_remove() to race
>> with mn_tree_inv_end() allowing it to return while the notifier data
>> structure is still in use. Consider the following sequence:
>>
>> CPU0 - mn_tree_inv_end()            CPU1 - mmu_interval_notifier_remove()
>>                                     spin_lock(subscriptions->lock);
>>                                     seq = subscriptions->invalidate_seq;
>> spin_lock(subscriptions->lock);     spin_unlock(subscriptions->lock);
>> subscriptions->invalidate_seq++;
>>                                     wait_event(invalidate_seq != seq);
>>                                     return;
>> interval_tree_remove(interval_sub); kfree(interval_sub);
>> spin_unlock(subscriptions->lock);
>> wake_up_all();
>>
>> As the wait_event() condition is true it will return immediately. This
>> can lead to use-after-free type errors if the caller frees the data
>> structure containing the interval notifier subscription while it is
>> still on a deferred list. Fix this by changing invalidate_seq to an
>> atomic type, since it is read outside of the lock, and by moving the
>> increment to after the deferred lists have been updated.
>
> Oh, yes, that is a mistake.
>
> I would not solve it with more unlocked atomics though, this is just a
> simple case of a missing lock - can you look at this and if you like
> it post it as a patch please?

Yep, that looks good and is easier to understand. For some reason I had assumed
the lack of locking was intentional. Will post the below fix as v2.

> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 459d195d2ff64b..f45ff1b7626a62 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -1036,6 +1036,18 @@ int mmu_interval_notifier_insert_locked(
>  }
>  EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked);
>
> +static bool
> +mmu_interval_seq_released(struct mmu_notifier_subscriptions *subscriptions,
> +			  unsigned long seq)
> +{
> +	bool ret;
> +
> +	spin_lock(&subscriptions->lock);
> +	ret = subscriptions->invalidate_seq != seq;
> +	spin_unlock(&subscriptions->lock);
> +	return ret;
> +}
> +
>  /**
>   * mmu_interval_notifier_remove - Remove a interval notifier
>   * @interval_sub: Interval subscription to unregister
> @@ -1083,7 +1095,7 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *interval_sub)
>  	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
>  	if (seq)
>  		wait_event(subscriptions->wq,
> -			   READ_ONCE(subscriptions->invalidate_seq) != seq);
> +			   mmu_interval_seq_released(subscriptions, seq));
>
>  	/* pairs with mmgrab in mmu_interval_notifier_insert() */
>  	mmdrop(mm);
>
> Signed-off-by: Jason Gunthorpe <jgg@...dia.com>
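
To make the ordering argument concrete, here is a minimal user-space sketch of
the fixed scheme (a pthread mutex and condition variable standing in for
subscriptions->lock and wait_event()/wake_up_all(); names such as
fake_subscriptions, seq_released() and deferred_pending are made up for
illustration and are not kernel code). Because invalidate_seq is only sampled
under the same lock that the invalidation-end path holds while it drains the
deferred work, the waiter cannot observe the new sequence number until the
deferred removal has actually completed:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_subscriptions {
	pthread_mutex_t lock;          /* stands in for subscriptions->lock */
	pthread_cond_t wq;             /* stands in for wait_event()/wake_up_all() */
	unsigned long invalidate_seq;
	bool deferred_pending;         /* stands in for the deferred list entry */
};

/* Mirrors mmu_interval_seq_released(): the sequence is only read under the lock. */
static bool seq_released(struct fake_subscriptions *s, unsigned long seq)
{
	/* pthread_cond_wait() re-acquires s->lock before we are called */
	return s->invalidate_seq != seq;
}

/* "CPU0": bump the sequence and drain the deferred work while still holding
 * the lock, then wake the waiter. */
static void *inv_end_thread(void *arg)
{
	struct fake_subscriptions *s = arg;

	pthread_mutex_lock(&s->lock);
	s->invalidate_seq++;
	s->deferred_pending = false;   /* the interval_tree_remove() equivalent */
	pthread_mutex_unlock(&s->lock);
	pthread_cond_broadcast(&s->wq);
	return NULL;
}

/* "CPU1": the mmu_interval_notifier_remove() equivalent. */
static void remove_notifier(struct fake_subscriptions *s)
{
	pthread_t cpu0;
	unsigned long seq;

	pthread_mutex_lock(&s->lock);
	seq = s->invalidate_seq;
	s->deferred_pending = true;    /* queued on the deferred list */

	/* Start the invalidation-end side; it blocks on the lock until we wait. */
	pthread_create(&cpu0, NULL, inv_end_thread, s);

	while (!seq_released(s, seq))
		pthread_cond_wait(&s->wq, &s->lock);

	/*
	 * invalidate_seq only changes while inv_end_thread() holds the lock,
	 * and the deferred work is drained before that lock is dropped, so
	 * observing a new seq here means the removal has already happened and
	 * the caller may safely free the subscription.
	 */
	printf("deferred_pending=%d (expect 0)\n", s->deferred_pending);
	pthread_mutex_unlock(&s->lock);
	pthread_join(cpu0, NULL);
}

int main(void)
{
	struct fake_subscriptions s = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.wq   = PTHREAD_COND_INITIALIZER,
	};

	remove_notifier(&s);
	return 0;
}

The same reasoning is why the spin_lock()/spin_unlock() pair in
mmu_interval_seq_released() is sufficient in the patch above: invalidate_seq
can only change while mn_tree_inv_end() holds subscriptions->lock, and by the
time that lock is released the deferred list has already been emptied.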
