Message-ID: <24b9161d-6488-72f6-d392-751e2723d4ff@huawei.com>
Date: Thu, 19 Jan 2023 15:00:35 +0800
From: shaozhengchao <shaozhengchao@...wei.com>
To: Jamal Hadi Salim <jhs@...atatu.com>,
Davide Caratti <dcaratti@...hat.com>
CC: Kyle Zeng <zengyhkyle@...il.com>,
David Miller <davem@...emloft.net>,
Sasha Levin <sashal@...nel.org>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: Question: Patch:("net: sched: cbq: dont intepret cls results when
	asked to drop") may not be a bug for LTS branch 5.10
On 2023/1/19 14:33, shaozhengchao wrote:
> Hi Jamal:
>
> On 2023/1/18 21:06, Jamal Hadi Salim wrote:
>> The reproducer Kyle included back then was not useful - it seemed
>> like a cut-and-paste from some syzkaller dashboard (and didn't
>> compile); however, for this one issue, you can reproduce the problem
>> by creating the infinite loop setup that Davide describes.
>>
>> The main issue is bigger than tcf_classify: it has to do with the
>> interpretation of tcf_result and the return codes. I reviewed all
>> consumers of tcf_result, and only 3 (all happened to be qdiscs) were
>> fixed in that patch set. Note that consumers include all objects in
>> the hierarchy, including classifiers and actions.
>>
>> Typically, the LinuxWay(tm) of cutting and pasting what other people
>> before you did works - but sometimes people forget environmental
>> rules even when they are documented. The main environmental rule at
>> stake here is that the return (verdict) code said to drop the packet.
>> The validity of tcf_result in such a case is questionable, and
>> setting it to 0 was irrelevant. So that is all the fix had to do for
>> -net.
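>>
>> For reference, the shape of that fix in cbq_classify() is roughly
>> this (paraphrasing the commit in $subject from memory - the
>> tcf_classify() signature differs across stable branches, so check
>> the tree for the exact diff):
>>
>>     result = tcf_classify(skb, NULL, fl, &res, true);
>>     if (!fl || result < 0)
>>         goto fallback;
>>     /* the verdict says drop: res is not valid, don't interpret it */
>>     if (result == TC_ACT_SHOT)
>>         return NULL;
>>
>>     cl = (void *)res.class;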
>>
>> The current return code is a "verdict" on what happened. Given that
>> there is potential to misinterpret it - as was seen here - a safer
>> approach is to have the return code be an error/success code (e.g.
>> -ELOOP for the example being quoted), since that is a more common
>> pattern, and to store the "verdict code" (TC_ACT_SHOT) in tcf_result.
>> I was planning to send an RFC patch for that.
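>>
>> A minimal sketch of what a call site could then look like (purely
>> hypothetical - the 'verdict' field does not exist today, and the
>> names are only for illustration):
>>
>>     int err;
>>     struct tcf_result res = {};
>>
>>     err = tcf_classify(skb, NULL, fl, &res, true);
>>     if (err) {
>>         /* e.g. -ELOOP for the reclassify loop; res.verdict
>>          * (TC_ACT_SHOT here) would be the only valid field
>>          */
>>         return NULL;
>>     }
>>     /* success: res.class/res.classid are safe to interpret */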
>>
>> I am still not clear on the correlation that Zhengchao Shao was
>> making between Davide's patch and this issue...
>>
> I'm just looking for the specific root cause of the issue. Please
> help check whether the possible causes are as follows:
> 1. __tcf_classify() returns TC_ACT_UNSPEC, the tc_skb_ext_alloc()
> allocation fails, and res may be left stale; see the sketch after
> this list. If so, maybe the Fixes tag should be commit 9410c9409d3e
> ("net: sched: Introduce ingress classification function").
> 2. tcf_chain_lookup_rcu() returns NULL, and tcf_classify() returns
> TC_ACT_SHOT; in this way res is stale too. Oh, I am sorry: in
> cbq_classify(), NULL is passed as the block, so tcf_chain_lookup_rcu()
> will not be called from tcf_classify(). Ignore this.
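>
> The tcf_classify() path I mean in point 1 is roughly the following
> (paraphrased from net/sched/cls_api.c with CONFIG_NET_TC_SKB_EXT;
> simplified, and the exact code differs by version):
>
>     ret = __tcf_classify(skb, tp, orig_tp, res, compat_mode,
>                          &last_executed_chain);
>     if (ret == TC_ACT_UNSPEC && last_executed_chain) {
>         struct tc_skb_ext *ext = tc_skb_ext_alloc(skb);
>
>         /* on allocation failure we return TC_ACT_SHOT, but res
>          * still holds whatever the classifiers wrote into it
>          */
>         if (WARN_ON_ONCE(!ext))
>             return TC_ACT_SHOT;
>         ext->chain = last_executed_chain;
>     }
>     return ret;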
>
> Thank you again for your careful answer.
>
> Zhengchao Shao
Lastly, I just wonder whether the Fixes tag of that patch is correct,
and whether the issue could be triggered by paths other than the one
Kyle describes. How do you know the right tag is
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")?
Thank you.
>> cheers,
>> jamal
>>
>>
>> On Wed, Jan 18, 2023 at 6:06 AM Davide Caratti <dcaratti@...hat.com>
>> wrote:
>>>
>>> hello,
>>>
>>> On Tue, Jan 17, 2023 at 05:10:58PM -0700, Kyle Zeng wrote:
>>>> Hi Zhengchao,
>>>>
>>>> I'm the finder of the vulnerability. In my initial report, there was
>>>> a more detailed explanation of why this bug happened, but it got left
>>>> out of the commit message. So I'll explain it here, and we can see
>>>> whether people want to patch the actual root cause of the crash.
>>>>
>>>> The underlying bug that this patch was trying to address is actually
>>>> in `__tcf_classify`. Notice that `struct tcf_result` contains a
>>>> union, so whenever the kernel sets res.goto_tp, it also overwrites
>>>> res.class.
>>>
>>> From what I see/remember, 'res' (struct tcf_result) is unassigned
>>> unless the packet is matched by a classifier (i.e. it does not return
>>> TC_ACT_UNSPEC).
>>>
>>> When this match happens (__tcf_classify returns non-negative) and the
>>> control action says TC_ACT_GOTO_CHAIN, res->goto_tp is written.
>>> Like you say, 'res.class' is written as well because it's a union.
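>>>
>>> For reference, the layout is roughly this (paraphrased from
>>> include/net/sch_generic.h; exact members vary across versions):
>>>
>>>     struct tcf_result {
>>>         union {
>>>             struct {
>>>                 unsigned long class;
>>>                 u32 classid;
>>>             };
>>>             const struct tcf_proto *goto_tp;
>>>         };
>>>     };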
>>>
>>>> And this can happen inside `tcf_action_goto_chain_exec`. In other
>>>> words, `tcf_action_goto_chain_exec` will set res.class. Notice that
>>>> goto_chain can point back to itself, which causes an infinite loop.
>>>> To avoid that situation, `__tcf_classify` checks how many times the
>>>> loop has been executed
>>>> (https://elixir.bootlin.com/linux/v6.1/source/net/sched/cls_api.c#L1586);
>>>> if it exceeds a specific number, it marks the result as TC_ACT_SHOT
>>>> and returns:
>>>>
>>>>     if (unlikely(limit++ >= max_reclassify_loop)) {
>>>>         ...
>>>>         return TC_ACT_SHOT;
>>>>     }
>>>
>>> maybe there is an easier reproducer, something made of 2 TC actions:
>>> the first one goes to a valid chain, and then the second one
>>> (executed from within the chain) drops the packet. I think that the
>>> unpatched CBQ scheduler will find 'res.class' holding a value there
>>> as well.
>>>
>>>> However, when it returns from the infinite loop handler, it forgets
>>>> to clear whatever is in the `res` variable, which still holds a
>>>> pointer in `goto_tp`. As a result, cbq_classify will treat it as a
>>>> valid `res.class`, which causes type confusion.
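>>>>
>>>> Concretely, the pre-fix code in cbq_classify() did roughly this
>>>> (paraphrased from net/sched/sch_cbq.c and trimmed for brevity):
>>>>
>>>>     result = tcf_classify(skb, NULL, fl, &res, true);
>>>>     if (!fl || result < 0)
>>>>         goto fallback;
>>>>
>>>>     cl = (void *)res.class;  /* may really be a stale goto_tp */
>>>>     ...
>>>>     if (cl->level >= head->level)  /* type-confused dereference */
>>>>         goto fallback;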
>>>>
>>>> My initial proposed patch was to memset `res` before `return
>>>> TC_ACT_SHOT` in `__tcf_classify`, but it didn't get merged; I guess
>>>> the merged patch is more generic.
>>>
>>> The merged patch looks good to me; however, I wonder if it's
>>> sufficient. If I read the code correctly, there is still the
>>> possibility of hitting the same problem on a patched kernel when
>>> TC_ACT_TRAP / TC_ACT_STOLEN is returned after a 'goto chain' and the
>>> qdisc is CBQ.
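>>>
>>> That is because of the ordering in cbq_classify(), roughly (again
>>> paraphrasing, with the TC_ACT_SHOT fix applied):
>>>
>>>     if (result == TC_ACT_SHOT)
>>>         return NULL;               /* the fixed case */
>>>
>>>     cl = (void *)res.class;        /* still stale after goto chain */
>>>     ...
>>>     if (cl->level >= head->level)  /* dereference happens here... */
>>>         goto fallback;
>>>
>>>     switch (result) {
>>>     case TC_ACT_STOLEN:
>>>     case TC_ACT_TRAP:              /* ...before these are handled */
>>>         ...
>>>     }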
>>>
>>> I like Jamal's idea of sharing the reproducer :)
>>>
>>> thanks!
>>> --
>>> davide