Message-ID: <CAATkVExrEiETydLxuHjpy9hwXhNx9Cq1jP1-4f_2TiGiZZ=qCQ@mail.gmail.com>
Date: Fri, 3 Jan 2014 18:27:11 -0500
From: Debabrata Banerjee <dbavatar@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Michael Dalton <mwdalton@...gle.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Rusty Russell <rusty@...tcorp.com.au>, mst@...hat.com,
jasowang@...hat.com, virtualization@...ts.linux-foundation.org,
"Banerjee, Debabrata" <dbanerje@...mai.com>, jbaron@...mai.com,
Joshua Hunt <johunt@...mai.com>
Subject: Re: [PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
On Fri, Jan 3, 2014 at 5:54 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>
> This is in GFP_ATOMIC cases; I don't think it can ever start
> compaction.
I think that's right; I've probably finally got it back to normal
behavior with order-0 allocations.
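
To make that concrete, here is a sketch of the 3.13-era page allocator
slow-path gate (my paraphrase, not the actual mm source; the helper
name is made up): direct reclaim and compaction are only attempted for
callers that can sleep, and GFP_ATOMIC does not carry __GFP_WAIT.

static struct page *slowpath_gate_sketch(gfp_t gfp_mask, unsigned int order)
{
	/* GFP_ATOMIC lacks __GFP_WAIT: fail fast, never compact */
	if (!(gfp_mask & __GFP_WAIT))
		return NULL;

	/* only sleepable callers may compact, reclaim, and retry */
	return compact_and_reclaim_sketch(gfp_mask, order); /* hypothetical */
}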
>
> It seems that you shoot the messenger : If memory is fragmented, then
> one order-1 allocation is going to start compaction.
>
> It can be a simple fork().
>
> If your workload never fork(), then yes, you never needed compaction.
>
Sure, but the rate of network packets in and out, and the subsequent
allocations, would be more equivalent to a fork bomb than to normal
forking. I understand that mm should behave more sanely in this
scenario, but at the same time we see a bad regression with this code,
and I can see we're not alone.
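
For anyone following along, the fork() connection is that the kernel
stack is a physically contiguous block: order-1 (8KB) on x86-64 at the
time, if I'm reading it right. So every fork() is a high-order
GFP_KERNEL allocation, and GFP_KERNEL includes __GFP_WAIT. Roughly (a
sketch, not the exact kernel code):

#define THREAD_SIZE_ORDER	1	/* 8KB stacks, order-1 */

static struct thread_info *alloc_thread_info_sketch(void)
{
	/* GFP_KERNEL may sleep, so under fragmentation this
	 * allocation can enter compaction before succeeding
	 */
	struct page *page = alloc_pages(GFP_KERNEL, THREAD_SIZE_ORDER);

	return page ? page_address(page) : NULL;
}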
>
> We are not trying to optimize the kernel behavior for hosts in deep
> memory pressure.
We're leaving about half of memory for the kernel, so I wouldn't call
it "deep". Any server application that uses the page cache and mlocked
memory will run into similar issues.
> It doesn't really matter which memory allocation triggered
> compaction, which is a normal step in the mm layer.
>
> If you believe it's badly done, you should ask the mm guys to
> fix/improve it, not netdev...
>
> Using order-3 pages in the TCP stack improves performance for 99% of
> the hosts; there might be something wrong on your side?
>
Having lots of memory mlocked is bad right now, yes, but it's not
necessarily an uncommon scenario. We're handing mm an almost
intractable problem. I see that compaction of mlocked pages has been
discussed a few times over there, but no patch has actually made it
in.
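
For reference, the refill strategy this thread is arguing about looks
roughly like this (my paraphrase of the net-next patch, not verbatim):
try an order-3 compound page with __GFP_NORETRY | __GFP_NOWARN so the
allocator gives up quickly, then step down toward a plain order-0 page.

#define SKB_FRAG_PAGE_ORDER	get_order(32768)  /* order-3 on 4KB pages */

static bool page_frag_refill_sketch(struct page_frag *pfrag, gfp_t gfp)
{
	int order = SKB_FRAG_PAGE_ORDER;

	do {
		gfp_t mask = gfp;

		if (order)	/* don't let high-order attempts try too hard */
			mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
		pfrag->page = alloc_pages(mask, order);
		if (pfrag->page) {
			pfrag->offset = 0;
			pfrag->size = PAGE_SIZE << order;
			return true;
		}
	} while (--order >= 0);

	return false;
}

Note that __GFP_NORETRY keeps the high-order attempt from looping or
invoking the OOM killer, but a sleepable allocation can still kick off
one round of compaction before it gives up, which is exactly the cost
we're seeing.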
-Debabrata