Message-ID: <57BCCEF6.3090405@gmail.com>
Date:   Tue, 23 Aug 2016 15:32:22 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     jhs@...atatu.com, davem@...emloft.net, brouer@...hat.com,
        xiyou.wangcong@...il.com, alexei.starovoitov@...il.com,
        john.r.fastabend@...el.com, netdev@...r.kernel.org
Subject: Re: [net-next PATCH 02/15] net: sched: allow qdiscs to handle locking

On 16-08-23 02:08 PM, Eric Dumazet wrote:
> On Tue, 2016-08-23 at 13:23 -0700, John Fastabend wrote:
>> This patch adds a flag for queueing disciplines to indicate the
>> stack does not need to use the qdisc lock to protect operations.
>> This can be used to build lockless scheduling algorithms and
>> improve performance.
>> 

[...]

>> +	 * Heuristic to force contended enqueues to serialize on a
>> +	 * separate lock before trying to get qdisc main lock.
>> @@ -3898,19 +3913,22 @@ static void net_tx_action(struct softirq_action *h)
>> 
>>  		while (head) {
>>  			struct Qdisc *q = head;
>> -			spinlock_t *root_lock;
>> +			spinlock_t *root_lock = NULL;
>> 
>>  			head = head->next_sched;
>> 
>> -			root_lock = qdisc_lock(q);
>> -			spin_lock(root_lock);
>> +			if (!(q->flags & TCQ_F_NOLOCK)) {
>> +				root_lock = qdisc_lock(q);
>> +				spin_lock(root_lock);
>> +			}
>>  			/* We need to make sure head->next_sched is read
>>  			 * before clearing __QDISC_STATE_SCHED
>>  			 */
>>  			smp_mb__before_atomic();
>>  			clear_bit(__QDISC_STATE_SCHED, &q->state);
>>  			qdisc_run(q);
>> -			spin_unlock(root_lock);
>> +			if (!(q->flags & TCQ_F_NOLOCK))
> 
> It might be faster to use: if (root_lock) (one less memory read
> and mask)
> 

Hmm, this actually gets factored out in patch 12, but I'll go ahead
and make this change anyway; I think it reads a bit better through
the series.

>> +				spin_unlock(root_lock);
>>  		}
>>  	}
>>  }
>> 
>> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
>> index e305a55..af32418 100644
>> --- a/net/sched/sch_generic.c
>> +++ b/net/sched/sch_generic.c
>> @@ -170,7 +170,8 @@ int sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
>>  	int ret = NETDEV_TX_BUSY;
>> 
>>  	/* And release qdisc */
>> -	spin_unlock(root_lock);
>> +	if (!(q->flags & TCQ_F_NOLOCK))
>> +		spin_unlock(root_lock);
> 
> You might use the same trick: root_lock is NULL for a lockless
> qdisc.

So what I just did is pass NULL into sch_direct_xmit() for root_lock
when the qdisc is lockless. This replaces the qdisc flags check in
that path with a check on root_lock.

Seems like a nice cleanup/optimization. I'll wait a bit and then push
it in v2 after giving folks a day or two to review this set.
