Message-ID: <805b4f95-5351-b342-7177-6a3df979be17@gmail.com>
Date:   Wed, 13 Apr 2022 18:59:49 +0900
From:   Taehee Yoo <ap420073@...il.com>
To:     Igor Russkikh <irusskikh@...vell.com>, davem@...emloft.net,
        kuba@...nel.org, pabeni@...hat.com, netdev@...r.kernel.org,
        ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
        john.fastabend@...il.com, andrii@...nel.org, kafai@...com,
        songliubraving@...com, yhs@...com, kpsingh@...nel.org,
        bpf@...r.kernel.org
Subject: Re: [EXT] [PATCH net-next v4 0/3] net: atlantic: Add XDP support

On 2022-04-13 4:52 PM, Igor Russkikh wrote:

Hi Igor,

Thank you so much for your review!

 >
 >
 >> v4:
 >>   - Fix compile warning
 >>
 >> v3:
 >>   - Change wrong PPS performance result 40% -> 80% in single
 >>     core(Intel i3-12100)
 >>   - Separate aq_nic_map_xdp() from aq_nic_map_skb()
 >>   - Drop multi buffer packets if single buffer XDP is attached
 >>   - Disable LRO when single buffer XDP is attached
 >>   - Use xdp_get_{frame/buff}_len()
 >
 > Hi Taehee, thanks for taking care of that!
 >
 > Reviewed-by: Igor Russkikh <irusskikh@...vell.com>
 >
 > A small notice about the selection of 3K packet size for XDP.
 > It's a kind of compromise, I think, because with a common 1.4K MTU we'll
 > waste at least 2K bytes per packet.
 >
 > I was thinking it would be possible to reuse the existing page flipping
 > technique together with a higher page_order, to keep the default 2K
 > fragment size.
 > E.g.
 > ( 256(xdp_head)+2K(pkt frag) ) x 3 (flips) = ~7K
 >
 > Meaning we can allocate 8K (page_order=1) pages and fit three XDP
 > packets into each, wasting only 1K per three packets.
 >
 > But it's just an idea for future optimization.
 >
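To make that arithmetic concrete, here is a quick userspace check of the
proposed layout (the constants come from your numbers above; none of these
names are the driver's actual defines):

/* Sketch only: verify the page_order=1 layout arithmetic from the
 * discussion above.  All constants are assumptions, not the atlantic
 * driver's actual defines. */
#include <stdio.h>

int main(void)
{
	const unsigned int order1_page = 8192; /* order-1 page on 4K-page systems */
	const unsigned int xdp_head    = 256;  /* per-packet XDP headroom         */
	const unsigned int frag        = 2048; /* default 2K fragment size        */
	const unsigned int flips       = 3;    /* packets fitted per page         */

	unsigned int used   = flips * (xdp_head + frag); /* 6912, i.e. ~7K   */
	unsigned int wasted = order1_page - used;        /* 1280, ~1K wasted */

	printf("used=%u wasted=%u per %u packets\n", used, wasted, flips);
	return 0;
}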

Yes, I fully agree with your idea.
When I developed an initial version of this patchset, I tried exactly
that idea. I expected it to reduce CPU utilization (it was not for
memory optimization), but there was no difference because the
page_ref_{inc/dec}() cost is too high.
So, if we switch from MEM_TYPE_PAGE_ORDER0 to MEM_TYPE_PAGE_SHARED, I
think we should use a slightly different flipping strategy, like ixgbe
does. If so, we could achieve both memory and CPU optimization.
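Roughly, the ixgbe-style flipping I have in mind looks like this (a sketch
only; the struct and helper names below are made up and do not exist in the
atlantic driver):

/* Hypothetical sketch of ixgbe-style half-page flipping under
 * MEM_TYPE_PAGE_SHARED.  Names are illustrative only. */
struct aq_rx_buf_sketch {
	struct page *page;
	unsigned int page_offset;     /* alternates between the two halves */
	unsigned short pagecnt_bias;  /* batched reference accounting      */
};

/* Called after the current buffer has been handed to the stack or XDP. */
static bool aq_try_flip_page(struct aq_rx_buf_sketch *buf,
			     unsigned int truesize)
{
	/* If anything besides the driver still references the page
	 * (e.g. an skb or an XDP_REDIRECT target), it cannot be reused. */
	if (page_ref_count(buf->page) - buf->pagecnt_bias > 1)
		return false;

	/* Otherwise flip to the other half for the next packet.  References
	 * are recharged in large batches through pagecnt_bias, which avoids
	 * the per-packet page_ref_{inc/dec}() cost mentioned above. */
	buf->page_offset ^= truesize;
	return true;
}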

Thanks a lot,
Taehee Yoo

 > Regards,
 >    Igor
