Date:	Thu, 2 Jan 2014 20:59:13 -0500
From:	Debabrata Banerjee <dbavatar@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	David Miller <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, mwdalton@...gle.com,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	rusty@...tcorp.com.au, mst@...hat.com, jasowang@...hat.com,
	virtualization@...ts.linux-foundation.org,
	"Banerjee, Debabrata" <dbanerje@...mai.com>, jbaron@...mai.com,
	Joshua Hunt <johunt@...mai.com>
Subject: Re: [PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill

On Thu, Jan 2, 2014 at 8:26 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Thu, 2014-01-02 at 16:56 -0800, Eric Dumazet wrote:
>
>>
>> My suggestion is to use a recent kernel, and/or eventually backport the
>> mm fixes if any.
>>
>> order-3 allocations should not reclaim 2GB out of 8GB.
>>
>> There is a reason PAGE_ALLOC_COSTLY_ORDER exists and is 3

Sorry, I meant 2GB of cache out of 8GB physical; ~1GB of it gets
reclaimed. Regardless, the reclamation of cache is minor compared to
the compaction event that precedes it, and I haven't spotted anything
addressing that yet -
isolate_migratepages_range()/compact_checklock_irqsave(). If even
more of memory were unmovable, the compaction routines would be hit
even harder, since reclamation wouldn't accomplish anything - mm would
have to get very, very smart about unmovable pages being freed, and
just fail the allocation / OOM-kill if nothing has changed, instead of
running through compaction/reclaim fruitlessly.
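To make the "fail fast" idea concrete: a toy userspace model (this is illustrative only, not mm internals - the pageblock array, NBLOCKS, and compaction_can_help() are all hypothetical names) of deciding whether compaction could ever assemble a high-order block, given which pageblocks contain unmovable pages:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: if unmovable pages are spread so that no run of adjacent
 * blocks is free of them, compaction cannot possibly produce the
 * requested high-order block, so reclaim/compaction would be fruitless
 * and the allocator could fail fast instead. */
#define NBLOCKS 16

/* Returns true if some run of (1 << order) adjacent blocks contains no
 * unmovable pages, i.e. compaction could in principle succeed there. */
static bool compaction_can_help(const bool unmovable[NBLOCKS], int order)
{
    int need = 1 << order, run = 0;

    for (int i = 0; i < NBLOCKS; i++) {
        run = unmovable[i] ? 0 : run + 1;  /* unmovable block breaks the run */
        if (run >= need)
            return true;
    }
    return false;
}
```

With every other block pinned by unmovable pages, an order-3 request can never be satisfied no matter how much compaction runs, while order-0 still succeeds trivially.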

I guess this is a bit of a tangent since what I'm saying proves the
patch from Michael doesn't make this behavior worse.

>
> Hmm... it looks like I missed __GFP_NORETRY
>
>
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 5393b4b719d7..5f42a4d70cb2 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>                 gfp_t gfp = prio;
>
>                 if (order)
> -                       gfp |= __GFP_COMP | __GFP_NOWARN;
> +                       gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
>                 pfrag->page = alloc_pages(gfp, order);
>                 if (likely(pfrag->page)) {
>                         pfrag->offset = 0;
>
>
>
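For reference, the effect of the flag can be sketched as a userspace model of the fallback loop in skb_page_frag_refill() (a toy model, not kernel code - fake_alloc_pages(), fail_above_order, and compact_attempts are hypothetical stand-ins; the fake allocator simulates fragmented memory where only low orders succeed):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define SKB_FRAG_PAGE_ORDER 3

static int fail_above_order;   /* orders above this "fail" in the model */
static int compact_attempts;   /* times the costly compact/reclaim path ran */

static void *fake_alloc_pages(int order, bool noretry)
{
    if (order > fail_above_order) {
        if (!noretry)
            compact_attempts++;  /* without NORETRY: compaction/reclaim runs */
        return NULL;
    }
    return malloc((size_t)4096 << order);
}

/* Mirrors the loop structure: try order-3 first, falling back toward
 * order-0; with the patch, NORETRY is set only for order > 0, matching
 * the "if (order)" guard in the diff. Returns the order that worked. */
static int refill(bool use_noretry)
{
    for (int order = SKB_FRAG_PAGE_ORDER; order >= 0; order--) {
        void *page = fake_alloc_pages(order, use_noretry && order > 0);
        if (page) {
            free(page);
            return order;
        }
    }
    return -1;
}
```

In the model, the patched path still falls back to order-0 when order-3 fails, but without entering the expensive compact/reclaim cycle on the way down.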

Yes, this seems like it will make the situation better, but one send()
may still cause a direct_compact() and direct_reclaim() cycle,
followed immediately by another direct_compact() if direct_reclaim()
didn't free an order-3 page. Now, with all CPUs doing a send(), you
can still get heavy spinlock contention in the routines described
above. The major change I see here is that allocations above order-0
used to be rare; now one happens on every send().

I can try your patch to see how much things improve.

-Debabrata
