Message-ID: <n2t412e6f7f1004130553x85452fc0u22e512cad412abd3@mail.gmail.com>
Date:	Tue, 13 Apr 2010 20:53:42 +0800
From:	Changli Gao <xiaosuo@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH v2] net: batch skb dequeueing from softnet input_pkt_queue

On Tue, Apr 13, 2010 at 6:19 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tuesday, 13 April 2010 at 17:50 +0800, Changli Gao wrote:
>> On Tue, Apr 13, 2010 at 4:08 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> >
>> >        Probably not necessary.
>> >
>> >> +     volatile bool           flush_processing_queue;
>> >
>> > Use of 'volatile' is strongly discouraged, I would say, forbidden.
>> >
>>
>> volatile is used here to prevent compiler optimizations.
>
> volatile might be used in special macros only, not to guard a variable.
> volatile dates from the pre-SMP days. We need something better defined
> these days.
>

flush_processing_queue is only accessed on the same CPU, so no
volatile is needed. I'll remove it in the next version.
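
To make that concrete (rough sketch, not the actual patch -- field names
follow v2, and the irq-off sections around the queue handling already
act as compiler barriers):

struct softnet_data {
	/* ... other members omitted ... */
	struct sk_buff_head	input_pkt_queue;
	struct sk_buff_head	processing_queue;
	bool			flush_processing_queue;	/* plain bool, no volatile */
};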

>> >> @@ -2803,6 +2808,7 @@ static void flush_backlog(void *arg)
>> >>                       __skb_unlink(skb, &queue->input_pkt_queue);
>> >>                       kfree_skb(skb);
>> >>               }
>> >> +     queue->flush_processing_queue = true;
>> >
>> >        Probably not necessary
>> >
>>
>> If flush_backlog() is called while there are still packets in
>> processing_queue and we remove this line, some of those packets may
>> still refer to the netdev that is going away.
>
> We don't need this "processing_queue". Once you remove it, there is no
> extra work to perform.

OK. Suppose we make processing_queue a stack variable. When the quota
or jiffies limit is reached, we have to splice processing_queue back
into input_pkt_queue. If flush_backlog() is called before
processing_queue is spliced back, there will still be packets that
refer to the NIC being unregistered. Those packets are then re-queued
to input_pkt_queue, and when process_backlog() is called again, the
dev field of these skbs is a dangling pointer...
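
The sequence I am worried about looks roughly like this (illustrative
only -- a condensed process_backlog(), not the real code):

static int process_backlog_sketch(struct softnet_data *queue, int quota)
{
	struct sk_buff_head processing_queue;	/* on the stack */
	struct sk_buff *skb;
	int work = 0;

	skb_queue_head_init(&processing_queue);

	local_irq_disable();
	skb_queue_splice_tail_init(&queue->input_pkt_queue,
				   &processing_queue);
	local_irq_enable();

	while (work < quota && (skb = __skb_dequeue(&processing_queue))) {
		netif_receive_skb(skb);
		work++;
	}
	/*
	 * If flush_backlog(dev) runs here, it only walks
	 * input_pkt_queue; skbs parked on the stack queue with
	 * skb->dev == dev escape the flush ...
	 */
	local_irq_disable();
	/* ... and splicing them back re-queues dangling skb->dev. */
	skb_queue_splice(&processing_queue, &queue->input_pkt_queue);
	local_irq_enable();

	return work;
}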

Oh, my GOD. When RPS is enabled, what happens if flush_backlog(eth0) is
called on CPU1 at the moment an skb from eth0 has been dequeued from
CPU0's softnet queue but has not yet been queued to CPU1's?
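
Roughly, the window I mean (illustrative timeline only, not code from
the tree; get_rps_cpu()/enqueue_to_backlog() as in the RPS patches):

/*
 *   CPU0 (softirq)                          CPU1
 *   --------------                          ----
 *   skb = __skb_dequeue(&sd0->input_pkt_queue);
 *   cpu = get_rps_cpu(skb->dev, skb);
 *                                           unregister_netdevice(eth0)
 *                                             -> flush_backlog() on every
 *                                                CPU: skb sits on *no*
 *                                                input_pkt_queue, so it
 *                                                escapes the flush
 *   enqueue_to_backlog(skb, cpu);
 *     -> CPU1 later processes an skb whose dev is already gone
 */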

>
>> >
>> >>
>> >
>> > I advise to keep it simple.
>> >
>> > My suggestion would be to limit this patch only to process_backlog().
>> >
>> > Really if you touch other areas, there is too much risk.
>> >
>> > Perform a sort of skb_queue_splice_tail_init() into a local (stack) queue,
>> > but the trick is to not touch input_pkt_queue.qlen, so that we don't slow
>> > down enqueue_to_backlog().
>> >
>> > Process at most 'quota' skbs (or jiffies limit).
>> >
>> > relock queue.
>> > input_pkt_queue.qlen -= number_of_handled_skbs;
>> >
>>
>> Oh no, in order to let later packets in as soon as possible, we have
>> to update qlen immediately.
>>
>
> Absolutely not. You apparently missed something.
>
> You pay the price at each packet enqueue, because you have to compute
> the sum of two lengths, and guess what: if you do this you take a
> cache line miss on one of the operands. Your patch as-is is suboptimal.
>
> Remember: this batch mode should not change packet queueing at all,
> only speed it up through fewer cache line misses.
>

Wow, is it really so expensive?
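
Just to be sure I finally understand your suggestion, something like
this? (Rough sketch only; __splice_keepacct() and
__splice_back_keepacct() are made-up names for open-coded splices that
never touch input_pkt_queue.qlen.)

static int process_backlog_sketch(struct napi_struct *napi, int quota)
{
	struct softnet_data *queue = &__get_cpu_var(softnet_data);
	struct sk_buff_head process_queue;	/* local, on the stack */
	struct sk_buff *skb;
	int work = 0;

	skb_queue_head_init(&process_queue);

	local_irq_disable();
	/*
	 * Move the whole backlog onto the stack queue, but leave
	 * input_pkt_queue.qlen alone: enqueue_to_backlog() keeps
	 * testing one length in one cache line.
	 */
	__splice_keepacct(&queue->input_pkt_queue, &process_queue);
	local_irq_enable();

	while (work < quota && (skb = __skb_dequeue(&process_queue))) {
		netif_receive_skb(skb);
		work++;		/* a jiffies check would sit here too */
	}

	local_irq_disable();
	/* Relock, account only for what was actually handled ... */
	queue->input_pkt_queue.qlen -= work;
	/*
	 * ... and link any leftovers back to the head; they were never
	 * subtracted from qlen, so no further accounting is needed.
	 */
	__splice_back_keepacct(&process_queue, &queue->input_pkt_queue);
	local_irq_enable();

	return work;
}

If that is right, the batch stays invisible to the enqueue path, which
keeps reading a single, warm qlen.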

-- 
Regards,
Changli Gao(xiaosuo@...il.com)