Message-ID: <e359abef-316f-4fca-8d1d-84b69c8bc060@lucifer.local>
Date: Thu, 17 Oct 2024 10:59:56 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: jeffxu@...omium.org
Cc: akpm@...ux-foundation.org, keescook@...omium.org,
        torvalds@...ux-foundation.org, usama.anjum@...labora.com,
        corbet@....net, Liam.Howlett@...cle.com, jeffxu@...gle.com,
        jorgelo@...omium.org, groeck@...omium.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        linux-mm@...ck.org, jannh@...gle.com, sroettger@...gle.com,
        pedro.falcato@...il.com, linux-hardening@...r.kernel.org,
        willy@...radead.org, gregkh@...uxfoundation.org, deraadt@...nbsd.org,
        surenb@...gle.com, merimus@...gle.com, rdunlap@...radead.org
Subject: Re: [PATCH] munmap sealed memory cause memory to split (bug)

OK, having said all of the below, I think I know exactly what this is...

When an munmap() operation aborts due to an error, it does not attempt to
re-merge previously split VMAs, so you might observe more splits than you
expect.

This is not a bug; it's expected behaviour. We do intend to address the
splits going forward, with an intent to see if we can re-merge on failure
or otherwise change the ordering, so that if an unmap fails you observe
the same VMA layout as before the call.

Before Liam's series I believe you'd see the unsealed portion cleared with
no recovery, so that part of the VMA would just be gone. Now we recover
it.

In any case, it's absolutely standard in Linux that an operation
performing compound work, any part of which might fail, may on failure of
the overall operation be left partially fulfilled and partially not.

So yeah, not a bug.
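
To make this concrete, here is a minimal standalone sketch (not the
selftest itself) of the scenario described above. It assumes an
mseal()-capable kernel (6.10+) and invokes the syscall directly, since
your libc may not wrap it yet; double-check the syscall number against
your architecture:

#include <errno.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* check your architecture's syscall table */
#endif

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, 12 * psz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* Seal the middle 4 pages: unsealed | sealed | unsealed = 3 VMAs. */
	if (syscall(__NR_mseal, p + 4 * psz, 4 * psz, 0))
		return 1;

	/* Overlaps the sealed range, so this must fail with EPERM... */
	if (munmap(p + 2 * psz, 4 * psz) == 0 || errno != EPERM)
		return 1;

	/*
	 * ...but any split made at p + 2 * psz before the seal check
	 * aborted the operation is not re-merged, so /proc/self/maps
	 * may now show 4 VMAs covering the region rather than 3.
	 */
	return 0;
}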


On Thu, Oct 17, 2024 at 10:46:10AM +0100, Lorenzo Stoakes wrote:
> Another thing about etiquette - sending a barely coherent _failing_ test
> with basically zero explanation as a... patch is NOT how to interact with
> the upstream community.
>
> The sensible, respectful and workable way of doing this is to send
> something like a [DISCUSSION] mail saying 'hey guys, I think I found a
> bug', with an explanation and a test patch attached.
>
> A lot of your problems with the community could be resolved by being more
> polite and respectful, taking a step back, breathing, and _communicating_
> with those of us who are here to try to help fix problems.
>
> We are all _extremely_ busy (I am ill today as well), so taking the time
> to explain problems patiently instead of firing off barely documented
> patches is far more likely to get you good results.
>
> Also, you fail to actually say what the problem is, what fails, where
> your test fails, etc.
>
> Anyway, let's try to decode (please take this as input on how you should
> try to communicate these things):
>
>
> So we start with a VMA like this:
>
> 012345678901
> xxxxxxxxxxxx
>
> We then seal the middle, starting at offset 4:
>
> 012345678901
> xxxx****xxxx
>
> This sets the VM_SEALED flag on the middle range and splits the VMA,
> resulting in 3 VMAs.
>
> We then attempt to unmap 4 pages from offset 2, but this fails, as
> expected.
>
> 012345678901
> xxxx****xxxx
>   |--| fail
>
> We then attempt to unmap 4 pages from offset 6, but this fails, as
> expected.
>
> 012345678901
> xxxx****xxxx
>       |--| fail
>
> At each stage we should observe 4 VMAs.
>
> Are you suggesting there is a larger unexpected split? Where? Under what
> circumstances?
>
> Let's figure out if there's a problem here _together_.
>
> On Thu, Oct 17, 2024 at 02:26:27AM +0000, jeffxu@...omium.org wrote:
> > From: Jeff Xu <jeffxu@...gle.com>
> >
> > It appears there is a regression in the latest mm tree: when munmapping
> > sealed memory, it can cause an unexpected VMA split. E.g. reproduce
> > using this test.
> > ---
> >  tools/testing/selftests/mm/mseal_test.c | 76 +++++++++++++++++++++++++
> >  1 file changed, 76 insertions(+)
> >
> > diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
> > index fa74dbe4a684..0af33e13b606 100644
> > --- a/tools/testing/selftests/mm/mseal_test.c
> > +++ b/tools/testing/selftests/mm/mseal_test.c
> > @@ -1969,6 +1969,79 @@ static void test_madvise_filebacked_was_writable(bool seal)
> >  	REPORT_TEST_PASS();
> >  }
> >
> > +static void test_munmap_free_multiple_ranges_with_split(bool seal)
> > +{
> > +	void *ptr;
> > +	unsigned long page_size = getpagesize();
> > +	unsigned long size = 12 * page_size;
> > +	int ret;
> > +	int prot;
> > +
> > +	setup_single_address(size, &ptr);
> > +	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
> > +
> > +	/* seal the middle 4 pages */
> > +	if (seal) {
> > +		ret = sys_mseal(ptr + 4 * page_size, 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(!ret);
> > +
> > +		size = get_vma_size(ptr, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +
> > +		size = get_vma_size(ptr +  4 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +
> > +		size = get_vma_size(ptr +  8 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +	}
> > +
> > +	/* munmap 4 pages from the third page */
> > +	ret = sys_munmap(ptr + 2 * page_size, 4 * page_size);
> > +	if (seal) {
> > +		FAIL_TEST_IF_FALSE(ret);
> > +		FAIL_TEST_IF_FALSE(errno == EPERM);
> > +
> > +		size = get_vma_size(ptr, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +
> > +		size = get_vma_size(ptr +  4 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +
> > +		size = get_vma_size(ptr +  8 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +	} else
> > +		FAIL_TEST_IF_FALSE(!ret);
> > +
> > +	/* munmap 4 pages starting inside the sealed region */
> > +	ret = sys_munmap(ptr + 6 * page_size, 4 * page_size);
> > +	if (seal) {
> > +		FAIL_TEST_IF_FALSE(ret);
> > +		FAIL_TEST_IF_FALSE(errno == EPERM);
> > +
> > +		size = get_vma_size(ptr + 4 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
>
> This is repeated below; presumably you meant to do size = get_vma_size(ptr,
> &prot) here?
>
> > +
> > +		size = get_vma_size(ptr +  4 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +
> > +		size = get_vma_size(ptr +  8 * page_size, &prot);
> > +		FAIL_TEST_IF_FALSE(size == 4 * page_size);
> > +		FAIL_TEST_IF_FALSE(prot == 4);
> > +	} else
> > +		FAIL_TEST_IF_FALSE(!ret);
> > +
> > +	REPORT_TEST_PASS();
> > +}
> > +
> > +
> >  int main(int argc, char **argv)
> >  {
> >  	bool test_seal = seal_support();
> > @@ -2099,5 +2172,8 @@ int main(int argc, char **argv)
> >  	test_madvise_filebacked_was_writable(false);
> >  	test_madvise_filebacked_was_writable(true);
> >
> > +	test_munmap_free_multiple_ranges_with_split(false);
> > +	test_munmap_free_multiple_ranges_with_split(true);
> > +
> >  	ksft_finished();
> >  }
> > --
> > 2.47.0.rc1.288.g06298d1525-goog
> >
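
For anyone wanting to check the VMA layouts walked through above, here is
a hypothetical helper (not part of mseal_test.c) that counts how many
/proc/self/maps entries fall inside a range:

#include <inttypes.h>
#include <stdio.h>

/* Count VMAs lying entirely within [start, start + len). */
static int count_vmas(uintptr_t start, size_t len)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	int n = 0;

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		uintptr_t lo, hi;

		/* Each line begins "start-end ...", both in hex. */
		if (sscanf(line, "%" SCNxPTR "-%" SCNxPTR, &lo, &hi) != 2)
			continue;
		if (lo >= start && hi <= start + len)
			n++;
	}
	fclose(f);
	return n;
}

With the 12-page mapping above, count_vmas((uintptr_t)ptr, 12 * page_size)
would be expected to return 3 right after mseal() and, if the split is not
re-merged, 4 after the first failed munmap().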
