Date:   Wed, 20 Jul 2022 06:27:00 +0800 (CST)
From:   "Chen Lin" <chen45464546@....com>
To:     "Alexander Duyck" <alexander.duyck@...il.com>
Cc:     "Maurizio Lombardi" <mlombard@...hat.com>,
        "Jakub Kicinski" <kuba@...nel.org>,
        "Andrew Morton" <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Netdev <netdev@...r.kernel.org>
Subject: Re:Re: Re: [PATCH V3] mm: prevent page_frag_alloc() from corrupting
 the memory

At 2022-07-18 23:33:42, "Alexander Duyck" <alexander.duyck@...il.com> wrote:
>On Mon, Jul 18, 2022 at 8:25 AM Maurizio Lombardi <mlombard@...hat.com> wrote:
>>
>> po 18. 7. 2022 v 16:40 odesílatel Chen Lin <chen45464546@....com> napsal:
>> >
>> > But the original intention of page frag interface is indeed to allocate memory
>> > less than one page. It's not a good idea to  complicate the definition of
>> > "page fragment".
>>
>> I see your point, I just don't think it makes much sense to break
>> drivers here and there
>> when a practically identical 2-lines patch can fix the memory corruption bug
>> without changing a single line of code in the drivers.
>>
>> By the way, I will wait for the maintainers to decide on the matter.
>>
>> Maurizio
>
>I'm good with this smaller approach. If it fails only under memory
>pressure I am good with that. The issue with the stricter checking is
>that it will add additional overhead that doesn't add much value to
>the code.
>
>Thanks,
>
>- Alex
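
For reference, here is my sketch of the smaller approach as I understand it (my reading of the discussion, not the actual patch): after the refill path has recomputed the offset, simply fail the allocation when the fresh cache still cannot hold fragsz, so the failure only happens under memory pressure and no driver code has to change.

	/* My sketch, not the actual patch: placed right after
	 * "offset = size - fragsz" at point >B< below.  If even the freshly
	 * refilled cache cannot hold fragsz, return NULL instead of handing
	 * out a negative offset that lets the caller write past the page.
	 */
	if (unlikely(offset < 0))
		return NULL;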

One additional question:
I don't quite understand why a check at point >A< would have more overhead than one at point >B<.
The two look the same to me, apart from the jump into the refill path, and refill is itself a long path.
Could you please explain in more detail?

	if (unlikely(offset < 0)) {
                 -------------->A<------------
		page = virt_to_page(nc->va);

		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
			goto refill;

		if (unlikely(nc->pfmemalloc)) {
			free_the_page(page, compound_order(page));
			goto refill;
		}

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
		/* if size can vary use size else just use PAGE_SIZE */
		size = nc->size;
#endif
		/* OK, page count is 0, we can safely set it */
		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);

		/* reset page count bias and offset to start of new frag */
		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
		offset = size - fragsz;
                 -------------->B<------------
	}
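
For contrast, this is how I picture a check at point >A< -- an up-front bound on fragsz before the refcount handling and the possible refill (the PAGE_SIZE bound here is my assumption, not taken from any of the patches):

	/* Sketch only: an up-front check at point >A<, rejecting requests
	 * larger than one page before any refcount or refill work is done.
	 */
	if (unlikely(fragsz > PAGE_SIZE))
		return NULL;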
