Message-ID: <20080813061317.GA4251@ff.dom.local>
Date:	Wed, 13 Aug 2008 06:13:17 +0000
From:	Jarek Poplawski <jarkao2@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	herbert@...dor.apana.org.au, netdev@...r.kernel.org
Subject: Re: [PATCH] pkt_sched: Destroy gen estimators under rtnl_lock().

On Tue, Aug 12, 2008 at 10:11:03PM -0700, David Miller wrote:
> From: Herbert Xu <herbert@...dor.apana.org.au>
> Date: Wed, 13 Aug 2008 14:30:19 +1000
> 
> > David Miller <davem@...emloft.net> wrote:
> > >
> > > The qdisc pointer traverses to the softirq handler, which can be run
> > > in a process context (via ksoftirqd), and this pointer gets there
> > > via the per-cpu ->output_queue.
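
For reference, a simplified sketch of the softirq side being described here:
net_tx_action() drains the per-cpu output_queue and runs each queued root
qdisc, possibly from ksoftirqd. This is abridged from the 2.6.27-era code;
requeueing and error paths are omitted.

#include <linux/netdevice.h>
#include <net/pkt_sched.h>
#include <net/sch_generic.h>

/* Abridged sketch of net_tx_action(): the qdisc pointers queued on the
 * per-cpu output_queue are dereferenced here, possibly in ksoftirqd's
 * process context, long after the caller of __netif_schedule() returned. */
static void net_tx_action_sketch(struct softnet_data *sd)
{
	struct Qdisc *head;

	local_irq_disable();
	head = sd->output_queue;
	sd->output_queue = NULL;
	local_irq_enable();

	while (head) {
		struct Qdisc *q = head;
		spinlock_t *root_lock = qdisc_lock(q);

		head = head->next_sched;

		if (spin_trylock(root_lock)) {
			clear_bit(__QDISC_STATE_SCHED, &q->state);
			qdisc_run(q);
			spin_unlock(root_lock);
		}
		/* (the real code reschedules q when the trylock fails) */
	}
}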
> > 
> > Here are two possible solutions:
> > 
> > 1) The active way: smp_call_function and forcibly remove the qdiscs
> > in question from each output_queue.
> > 
> > 2) The passive way: Make dev_deactivate() call yield() until no qdiscs
> > are on an output_queue (see the sketch below).  This assumes there is
> > some sort of dead-flag detection on the output_queue side so it doesn't
> > keep going forever.
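
A rough sketch of what the passive option could look like, assuming a
hypothetical dead flag (called __QDISC_STATE_DEACTIVATED here purely for
illustration) that the output_queue side would also have to honour:

#include <linux/sched.h>
#include <net/sch_generic.h>

/* Hypothetical sketch only: __QDISC_STATE_DEACTIVATED is an assumed
 * dead flag, not something that exists at this point in the thread. */
static void qdisc_wait_quiesce(struct Qdisc *q)
{
	set_bit(__QDISC_STATE_DEACTIVATED, &q->state);

	/* __netif_schedule()/net_tx_action() must check the dead flag and
	 * refuse to (re)queue q, otherwise this loop may never terminate. */
	while (test_bit(__QDISC_STATE_SCHED, &q->state) ||
	       test_bit(__QDISC_STATE_RUNNING, &q->state))
		yield();
}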
> 
> Yes, we'll need some kind of dead flag it seems.
> 
> Another thing we can do is, in the yield loop, grab the
> __QDISC_STATE_RUNNING bit.
> 
> But actually, I think Jarek has a point.
> 
> The existing loop there in dev_deactivate() should work _iff_ we make
> it look at the proper qdisc.
> 
> This is another case where I didn't transform things correctly.  The
> old code worked on dev->state since that's where we kept what used
> to be __LINK_STATE_QDISC_RUNNING.
> 
> But now that state is kept in the qdisc itself, and we just zapped
> the active qdisc, so the old one is in ->qdisc_sleeping, not ->qdisc.
> 
> So, just like one of Jarek's patches, we should simply change
> dev_queue->qdisc into dev_queue->qdisc_sleeping and that should
> take care of the bulk of the issues.
> 
> Shouldn't it?
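
A minimal sketch of the wait loop being proposed, assuming the 2.6.27-era
multiqueue layout; the helper name some_qdisc_is_busy() is illustrative,
and only __QDISC_STATE_RUNNING is sampled at this point in the discussion:

#include <linux/netdevice.h>
#include <linux/sched.h>
#include <net/sch_generic.h>

/* Sketch: poll each TX queue's *sleeping* root qdisc, since ->qdisc has
 * already been reset to &noop_qdisc by the time dev_deactivate() waits. */
static bool some_qdisc_is_busy(struct net_device *dev)
{
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++) {
		struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, i);
		struct Qdisc *q = dev_queue->qdisc_sleeping;
		spinlock_t *root_lock = qdisc_lock(q);
		bool busy;

		spin_lock_bh(root_lock);
		busy = test_bit(__QDISC_STATE_RUNNING, &q->state);
		spin_unlock_bh(root_lock);

		if (busy)
			return true;
	}
	return false;
}

/* ... and dev_deactivate() would then do:
 *	while (some_qdisc_is_busy(dev))
 *		yield();
 */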

If we don't change anything in __netif_schedule(), I doubt it's enough.
And if this old way of waiting for "outstanding qdisc-less" calls was
really needed, we should probably wait for both qdisc and qdisc_sleeping
then.

> 
> Hmmm... maybe we have to sample __QDISC_STATE_SCHED too.

We could probably even think of using this flag the napi_disable() way.
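
For comparison, the napi_disable() pattern being referred to looks roughly
like this (simplified from the 2.6.27-era helper); an analogous qdisc
variant would grab __QDISC_STATE_SCHED itself instead of merely polling it:

#include <linux/delay.h>
#include <linux/netdevice.h>

/* Rough sketch of napi_disable(): NAPI_STATE_SCHED doubles as the
 * ownership bit, so disabling means taking it ourselves and never
 * handing it back to the softirq side. */
static inline void napi_disable_sketch(struct napi_struct *n)
{
	set_bit(NAPI_STATE_DISABLE, &n->state);
	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
		msleep(1);
	clear_bit(NAPI_STATE_DISABLE, &n->state);
}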

Jarek P.