Message-ID: <e91baaba-e00a-4b16-0787-e9460dacfbb9@redhat.com>
Date: Wed, 2 Jun 2021 15:50:27 -0400
From: Jon Maloy <jmaloy@...hat.com>
To: Menglong Dong <menglong8.dong@...il.com>
Cc: ying.xue@...driver.com, David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
tipc-discussion@...ts.sourceforge.net,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: The value of FB_MTU eats two pages
On 6/1/21 10:18 AM, Menglong Dong wrote:
> Hello!
>
> I have a question about the value of FB_MTU in tipc: where does '3744'
> come from? I notice it is used in 'tipc_msg_build()' when memory allocation
> fails, where it tries to fall back to a smaller MTU to avoid unnecessary
> sending failures.
>
> However, the size of the data allocated will be more than 4096 when FB_MTU
> is 3744. By a rough calculation, the data size will be more than 4200:
>
> (FB_MTU + TIPCHDR + BUF_HEADROOM + sizeof(struct skb_shared_info))
>
> Therefore, 8192 will be allocated from slab, and about 4000 of it will
> not be used.
>
> FB_MTU is used under memory pressure, and I think eating two pages there
> makes things worse. Am I missing something?
>
> Thanks!
> Menglong Dong
>
Hi Dong,
The value is based on empirical knowledge.
When I determined it, I wrote a small loop in a kernel driver that
allocated skbs (using tipc_buf_acquire) of increasing size
(incremented by 1 each iteration) and printed out the
corresponding truesize.
That gave the value we are using now.
Now, when re-running the test I get a different value, so something has
obviously changed since then.
[ 1622.158586] skb(513) =>> truesize 2304, prev skb(512) => prev truesize 1280
[ 1622.162074] skb(1537) =>> truesize 4352, prev skb(1536) => prev truesize 2304
[ 1622.165984] skb(3585) =>> truesize 8448, prev skb(3584) => prev truesize 4352
As you can see, the optimal value now, for an x86_64 machine compiled
with gcc, is 3584 bytes, not 3744.
Feel free to post a patch for this if you want to.
Thanks
///Jon Maloy