Date: Fri, 5 Jan 2024 16:40:15 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Mina Almasry <almasrymina@...gle.com>
CC: Shakeel Butt <shakeelb@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
	<linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
	<bpf@...r.kernel.org>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar
	<mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave Hansen
	<dave.hansen@...ux.intel.com>, <x86@...nel.org>, "H. Peter Anvin"
	<hpa@...or.com>, Greg Kroah-Hartman <gregkh@...uxfoundation.org>, "Rafael J.
 Wysocki" <rafael@...nel.org>, Sumit Semwal <sumit.semwal@...aro.org>,
	Christian König <christian.koenig@....com>, Michael Chan
	<michael.chan@...adcom.com>, "David S. Miller" <davem@...emloft.net>, Eric
 Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Alexei
 Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>, Jesper
 Dangaard Brouer <hawk@...nel.org>, John Fastabend <john.fastabend@...il.com>,
	Wei Fang <wei.fang@....com>, Shenwei Wang <shenwei.wang@....com>, Clark Wang
	<xiaoning.wang@....com>, NXP Linux Team <linux-imx@....com>, Jeroen de Borst
	<jeroendb@...gle.com>, Praveen Kaligineedi <pkaligineedi@...gle.com>,
	Shailend Chand <shailend@...gle.com>, Yisen Zhuang <yisen.zhuang@...wei.com>,
	Salil Mehta <salil.mehta@...wei.com>, Jesse Brandeburg
	<jesse.brandeburg@...el.com>, Tony Nguyen <anthony.l.nguyen@...el.com>,
	Thomas Petazzoni <thomas.petazzoni@...tlin.com>, Marcin Wojtas
	<mw@...ihalf.com>, Russell King <linux@...linux.org.uk>, Sunil Goutham
	<sgoutham@...vell.com>, Geetha sowjanya <gakula@...vell.com>, Subbaraya
 Sundeep <sbhatta@...vell.com>, hariprasad <hkelam@...vell.com>, Felix Fietkau
	<nbd@....name>, John Crispin <john@...ozen.org>, Sean Wang
	<sean.wang@...iatek.com>, Mark Lee <Mark-MC.Lee@...iatek.com>, Lorenzo
 Bianconi <lorenzo@...nel.org>, Matthias Brugger <matthias.bgg@...il.com>,
	AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>, Saeed
 Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Horatiu
 Vultur <horatiu.vultur@...rochip.com>, <UNGLinuxDriver@...rochip.com>, "K. Y.
 Srinivasan" <kys@...rosoft.com>, Haiyang Zhang <haiyangz@...rosoft.com>, Wei
 Liu <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>, Jassi Brar
	<jaswinder.singh@...aro.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
	Alexandre Torgue <alexandre.torgue@...s.st.com>, Jose Abreu
	<joabreu@...opsys.com>, Maxime Coquelin <mcoquelin.stm32@...il.com>,
	Siddharth Vadapalli <s-vadapalli@...com>, Ravi Gunasekaran
	<r-gunasekaran@...com>, Roger Quadros <rogerq@...nel.org>, Jiawen Wu
	<jiawenwu@...stnetic.com>, Mengyuan Lou <mengyuanlou@...-swift.com>, Ronak
 Doshi <doshir@...are.com>, VMware PV-Drivers Reviewers
	<pv-drivers@...are.com>, Ryder Lee <ryder.lee@...iatek.com>, Shayne Chen
	<shayne.chen@...iatek.com>, Kalle Valo <kvalo@...nel.org>, Juergen Gross
	<jgross@...e.com>, Stefano Stabellini <sstabellini@...nel.org>, Oleksandr
 Tyshchenko <oleksandr_tyshchenko@...m.com>, Andrii Nakryiko
	<andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>, Song Liu
	<song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>, KP Singh
	<kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Hao Luo
	<haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>, Stefan Hajnoczi
	<stefanha@...hat.com>, Stefano Garzarella <sgarzare@...hat.com>, Shuah Khan
	<shuah@...nel.org>, Mickaël Salaün <mic@...ikod.net>,
	Nathan Chancellor <nathan@...nel.org>, Nick Desaulniers
	<ndesaulniers@...gle.com>, Bill Wendling <morbo@...gle.com>, Justin Stitt
	<justinstitt@...gle.com>, Jason Gunthorpe <jgg@...dia.com>, Willem de Bruijn
	<willemdebruijn.kernel@...il.com>
Subject: Re: [RFC PATCH net-next v1 4/4] net: page_pool: use netmem_t instead
 of struct page in API

On 2024/1/5 2:24, Mina Almasry wrote:
> On Thu, Jan 4, 2024 at 12:48 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> On 2024/1/4 2:38, Mina Almasry wrote:
>>> On Wed, Jan 3, 2024 at 1:47 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>>>
>>>> On 2024/1/3 0:14, Mina Almasry wrote:
>>>>>
>>>>> The idea being that skb_frag_page() can return NULL if the frag is not
>>>>> paged, and the relevant callers are modified to handle that.
>>>>
>>>> There are many existing drivers that do not expect a NULL return from
>>>> skb_frag_page(), since those drivers do not support devmem. Adding
>>>> additional checking overhead in skb_frag_page() for those drivers does
>>>> not make much sense; IMHO, it may make more sense to introduce a new
>>>> helper for drivers that support devmem, or for networking core code
>>>> that needs to deal with both normal pages and devmem.
>>>>
>>>> That way we also keep the old non-NULL-returning semantics of
>>>> skb_frag_page().
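
For illustration, a minimal sketch of the split being suggested here,
assuming a netmem_ref encoded as a tagged pointer with a low "devmem"
bit; every name below is made up for the example, not taken from the
patchset:

/* Illustrative tagged-pointer encoding: low bit set => devmem. */
typedef unsigned long netmem_ref;

#define NETMEM_DEVMEM_BIT	1UL

/* Old helper, semantics unchanged: only for drivers that never see
 * devmem-backed frags, so no extra check is paid. */
static inline struct page *skb_frag_page(const skb_frag_t *frag)
{
	return (struct page *)frag->netmem;
}

/* New helper for devmem-aware drivers and core code: returns NULL
 * when the frag is backed by devmem rather than a normal page. */
static inline struct page *skb_frag_page_or_null(const skb_frag_t *frag)
{
	if (frag->netmem & NETMEM_DEVMEM_BIT)
		return NULL;
	return (struct page *)frag->netmem;
}
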
>>>
>>> I think I'm seeing agreement that the direction we're heading in here
>>> is that most of the net stack & drivers should use the abstract netmem
>>
>> As far as I can see, at least for the drivers, we don't yet have clear
>> agreement on whether we should have a unified driver-facing struct or
>> API for both normal pages and devmem.
>>
> 
> To be honest, I definitely read the responses in this thread, like the
> one below, as agreement that we should have a unified driver-facing
> struct:
> 
> https://lore.kernel.org/netdev/20231215190126.1040fa12@kernel.org/

Which specific comment made you think that?
I think it definitely needs clarifying here, as I read it differently
than you did.

> 
> But I'll let folks correct me if I'm wrong.
> 
>>> type, and only specific code that needs a page or devmem (like
>>> tcp_receive_zerocopy or tcp_recvmsg_dmabuf) will unpack the netmem and
>>> get at the underlying page or devmem, using skb_frag_page() or
>>> something like skb_frag_dmabuf(), etc.
>>>
>>> As Jason says repeatedly, I'm not allowed to blindly cast a netmem to
>>> a page and assume netmem==page. Netmem can only be cast to a page
>>> after checking the low bits and verifying the netmem is actually a
>>
>> I thought it would be best to avoid casting a netmem or devmem to a
>> page in the driver at all. I think the main argument is that it is
>> hard to audit whether every single driver checks before casting, and
>> that we can do better auditing if the casting is limited to a few core
>> functions in the networking core.
>>
> 
> Correct, the drivers should never cast directly, but helpers like
> skb_frag_page() must check that the netmem is a page before doing a
> cast.
> 
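
Presumably along the lines of the sketch below, reusing the same
made-up low-bit encoding as the earlier sketch; netmem_to_page(), the
devmem counterpart, and the struct net_devmem name are assumptions on
my part, not the patchset's API:

/* Cast to a page only after checking the low-bit tag. */
static inline struct page *netmem_to_page(netmem_ref netmem)
{
	if (netmem & NETMEM_DEVMEM_BIT)
		return NULL;	/* devmem-backed, not a struct page */
	return (struct page *)netmem;
}

/* Hypothetical counterpart for code like tcp_recvmsg_dmabuf() that
 * wants the devmem side instead. */
static inline struct net_devmem *netmem_to_devmem(netmem_ref netmem)
{
	if (!(netmem & NETMEM_DEVMEM_BIT))
		return NULL;
	return (struct net_devmem *)(netmem & ~NETMEM_DEVMEM_BIT);
}
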
>>> page. I think any suggestions that blindly cast a netmem to page
>>> without the checks will get nacked by Jason & Christian, so the
>>> checking in the specific cases where the code needs to know the
>>> underlying memory type seems necessary.
>>>
>>> IMO I'm not sure the checking is expensive. With likely/unlikely &
>>> static branches the checks should be very minimal or a straight no-op.
>>> For example, in RFC v2, where we were doing a lot of checks for devmem
>>> (we don't do that anymore as of RFC v5), I ran the page_pool perf
>>> tests and showed there is little to no perf regression:
>>
>> With MAX_SKB_FRAGS being 17, that means up to 17 additional checks per
>> skb for drivers that do not support devmem, not to mention that
>> MAX_SKB_FRAGS may be even bigger when BIG TCP is enabled.
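
To make the cost concrete: with a NULL-returning skb_frag_page(), a
frag walk like the sketch below pays one tag test per frag, even for
skbs that only ever carry normal pages (the loop shape and handler are
illustrative only):

/* Sketch: a completion-path frag walk pays one check per frag. */
static void handle_skb_frags(struct sk_buff *skb)
{
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
		struct page *page = skb_frag_page(frag);

		if (!page)
			continue;	/* devmem: CPU must not touch it */
		/* ... normal struct-page handling ... */
	}
}
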
>>
> 
> With a static branch the checks should be complete no-ops unless the
> user's setup has devmem enabled.
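
For reference, I take this to mean the usual static-key pattern, along
the lines of the sketch below (the key name and the enable hook are
made up): until the key is enabled, the test is patched out entirely.

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(netmem_devmem_key);

/* Flipped once, when the first devmem binding is installed. */
void netmem_enable_devmem(void)
{
	static_branch_enable(&netmem_devmem_key);
}

static inline struct page *skb_frag_page(const skb_frag_t *frag)
{
	/* Compiles to a patched-out jump (effectively a no-op) until
	 * the key is enabled, so non-devmem setups pay nothing. */
	if (static_branch_unlikely(&netmem_devmem_key) &&
	    (frag->netmem & NETMEM_DEVMEM_BIT))
		return NULL;
	return (struct page *)frag->netmem;
}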

What if the user does have devmem enabled but still wants to use
page_pool for normal pages in the same system? Once the static key is
flipped for one device, every pool in the system pays the per-frag
check, devmem or not.

Is there a reason I am not aware of that stops you from keeping the old
helper and introducing a new helper where the new netmem scheme needs
one?

> 
>> Even if there is no noticeable performance degradation in a specific
>> case, we should avoid overhead for the existing use cases as much as
>> possible when supporting a new use case.
>>
>>>
>>> https://lore.kernel.org/netdev/CAHS8izM4w2UETAwfnV7w+ZzTMxLkz+FKO+xTgRdtYKzV8RzqXw@mail.gmail.com/
>>
>> As I understand it, the above test case does not even seem to exercise
>> a code path that calls skb_frag_page().
>>
>>>
