Message-ID: <t2r412e6f7f1004241931ie9e70e3aka77b49557a7872e3@mail.gmail.com>
Date: Sun, 25 Apr 2010 10:31:33 +0800
From: Changli Gao <xiaosuo@...il.com>
To: hadi@...erus.ca
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Rick Jones <rick.jones2@...com>,
David Miller <davem@...emloft.net>, therbert@...gle.com,
netdev@...r.kernel.org, robert@...julf.net, andi@...stfloor.org
Subject: Re: rps performance WAS (Re: rps: question
On Thu, Apr 22, 2010 at 8:12 PM, jamal <hadi@...erus.ca> wrote:
>
>> I see slave/application cpus hit _raw_spin_lock_irqsave() and
>> _raw_spin_unlock_irqrestore().
>>
>> Maybe a ring buffer could help (instead of a double linked queue) for
>> backlog, or the double queue trick, if Changli wants to respin his
>> patch.
>>
>
> Ok, I will have some cycles later today/tomorrow or for sure on the weekend.
> My setup is still intact - so i can test.
>
I read the code again and found that we don't use spin_lock_irqsave()
for the backlog; we use local_irq_save() and spin_lock() instead, so
_raw_spin_lock_irqsave() and _raw_spin_unlock_irqrestore() should not
be related to the backlog. The lock is probably sk_receive_queue.lock.
Jamal, did you use a single socket to serve all the clients?
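Here is a minimal sketch (not the exact mainline code; the my_*() names
are hypothetical) contrasting the two locking patterns, assuming the
backlog path and the socket receive path look roughly as described
above. It shows why _raw_spin_lock_irqsave() hits point at
sk_receive_queue.lock rather than the backlog queue:

	#include <linux/skbuff.h>
	#include <linux/spinlock.h>
	#include <linux/irqflags.h>

	/*
	 * Backlog-style enqueue: IRQs are disabled explicitly and a plain
	 * spin_lock() is taken, so a profile shows _raw_spin_lock(),
	 * not _raw_spin_lock_irqsave().
	 */
	static void my_backlog_enqueue(struct sk_buff_head *backlog,
				       struct sk_buff *skb)
	{
		unsigned long flags;

		local_irq_save(flags);
		spin_lock(&backlog->lock);
		__skb_queue_tail(backlog, skb);
		spin_unlock(&backlog->lock);
		local_irq_restore(flags);
	}

	/*
	 * Socket receive queue enqueue: skb_queue_tail() internally does
	 * spin_lock_irqsave(&list->lock, ...), which is what shows up as
	 * _raw_spin_lock_irqsave()/_raw_spin_unlock_irqrestore() when all
	 * clients are served through one socket's sk_receive_queue.
	 */
	static void my_socket_enqueue(struct sk_buff_head *rcvq,
				      struct sk_buff *skb)
	{
		skb_queue_tail(rcvq, skb);
	}
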
BTW: completion_queue and output_queue in softnet_data are both LIFO
queues. For completion_queue, FIFO would be better: the last used skb
is the most likely to still be in cache, so it should be reused first.
Since slab always puts the most recently freed memory at the head of
its freelist, we'd better free the skbs in FIFO order, so the cache-hot
skb is freed last and handed out again first. For output_queue, FIFO
is also better, for fairness among qdiscs.
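A small sketch of that point, with hypothetical my_* names (the real
queue is the skb->next chain hanging off softnet_data): entries are
pushed at the head, i.e. LIFO, so reversing the list before freeing
gives FIFO order and the cache-hot skb reaches slab last, making it
the next one allocated.

	#include <stddef.h>

	struct my_skb {
		struct my_skb *next;
		/* ... payload ... */
	};

	/* LIFO push at the head, as the completion queue is built today. */
	static void my_completion_push(struct my_skb **queue, struct my_skb *skb)
	{
		skb->next = *queue;
		*queue = skb;
	}

	/*
	 * Reverse the chain so it can be drained in FIFO (arrival) order:
	 * the most recently used skb is then freed last and sits at the
	 * head of the slab freelist for the next allocation.
	 */
	static struct my_skb *my_completion_reverse(struct my_skb *head)
	{
		struct my_skb *prev = NULL;

		while (head) {
			struct my_skb *next = head->next;

			head->next = prev;
			prev = head;
			head = next;
		}
		return prev;
	}
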
--
Regards,
Changli Gao(xiaosuo@...il.com)