Message-ID: <DB5PR05MB1895366903DAE0BAE20854D5AC170@DB5PR05MB1895.eurprd05.prod.outlook.com>
Date:   Fri, 12 Jan 2018 10:26:24 +0000
From:   Nogah Frankel <nogahf@...lanox.com>
To:     Jakub Kicinski <kubakici@...pl>
CC:     Yuval Mintz <yuvalm@...lanox.com>, Jiri Pirko <jiri@...nulli.us>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Ido Schimmel <idosch@...lanox.com>, mlxsw <mlxsw@...lanox.com>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>
Subject: RE: [patch net-next 5/5] mlxsw: spectrum: qdiscs: Support stats for
 PRIO qdisc

> > > > > > Hm.  You need this just because you didn't add the backlog
> > > > > > pointer to destroy?  AFAIK on destroy we are free to reset
> > > > > > stats as well, thus simplifying your driver...  Let me know
> > > > > > if I misunderstand.
> >
> > The problem with doing it in destroy arises when one qdisc is
> > replacing another. I want to be able to tear down the old qdisc to
> > "make room" for the new one before I get the destroy command for
> > the old qdisc (that command arrives just after the replace command
> > for the new qdisc; see the sketch after the quoted text). If
> > destroy is defined to change the stats, I have to keep some data
> > about the old qdisc around until its destroy command arrives.
> 
> Agreed, maintaining a coherent destroy behavior would be problematic
> when a successful replace with a new qdisc (e.g. a different handle)
> is involved :(
> 
> Besides, the momentary stats seem to be reset before destroy, so not
> touching them may in fact be more correct.  I need to look into the
> propagation done in qdisc_tree_reduce_backlog(); it worries me.  If
> we start stacking qdiscs (e.g. RED on top of PRIO) it could mess
> with the root's backlog...
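
To illustrate the replace-before-destroy ordering quoted above: a rough
driver-side sketch (hypothetical structures and helper names, not the
actual mlxsw code) could tear the old qdisc down as soon as a REPLACE
with a different handle arrives, and then treat the late DESTROY for
the already-replaced handle as a no-op:

#include <linux/types.h>

/* Hypothetical per-port offload state; illustrative only. */
struct port_qdisc {
	bool in_use;
	u32 handle;
};

struct port {
	struct port_qdisc root_qdisc;
};

/* Hypothetical device hooks, stubbed out for the sketch. */
static int port_root_qdisc_hw_apply(struct port *port) { return 0; }
static void port_root_qdisc_hw_destroy(struct port *port) { }

/* REPLACE: the old qdisc makes room for the new one right away,
 * before its own DESTROY command has been seen.
 */
static int port_root_qdisc_replace(struct port *port, u32 handle)
{
	if (port->root_qdisc.in_use && port->root_qdisc.handle != handle)
		port_root_qdisc_hw_destroy(port);

	port->root_qdisc.in_use = true;
	port->root_qdisc.handle = handle;
	return port_root_qdisc_hw_apply(port);
}

/* DESTROY: if this handle was already replaced there is nothing left
 * to do here - and in particular no stats to reset.
 */
static void port_root_qdisc_destroy(struct port *port, u32 handle)
{
	if (!port->root_qdisc.in_use || port->root_qdisc.handle != handle)
		return;

	port_root_qdisc_hw_destroy(port);
	port->root_qdisc.in_use = false;
}

With that, no state about the old qdisc has to be carried around until
its destroy command shows up.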

I think the backlog propagation issue can be solved in the driver, or by
using the OFFLOAD flag to skip this backlog reduction for offloaded
qdiscs.
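
Something along these lines is what I mean by using the OFFLOAD flag --
a sketch only, not the current qdisc_tree_reduce_backlog() code, and
the wrapper name is made up:

#include <net/sch_generic.h>
#include <net/pkt_sched.h>

/* Skip the software backlog propagation for a qdisc whose queues live
 * in HW: its packets were never accounted in the ancestors' SW
 * backlog, so reducing it would corrupt the root's counters once
 * qdiscs are stacked (e.g. RED on top of PRIO).
 */
static void maybe_tree_reduce_backlog(struct Qdisc *sch, unsigned int n,
				      unsigned int len)
{
	if (sch->flags & TCQ_F_OFFLOADED)
		return;

	qdisc_tree_reduce_backlog(sch, n, len);
}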

It might make sense at some point to separate the HW statistics from the
SW ones, at least for the momentary stats (e.g. backlog).
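
For that separation I am thinking of something roughly like this
(hypothetical names, not a concrete proposal for the mlxsw code): keep
a baseline snapshot of the HW counters, add only the delta to the
cumulative SW stats, and take the momentary values straight from HW:

#include <linux/types.h>
#include <net/gen_stats.h>

/* Hypothetical shape of what the device reports. */
struct hw_qdisc_counters {
	u64 tx_packets;
	u64 tx_bytes;
	u64 drops;
	u32 backlog;	/* momentary, bytes currently queued in HW */
};

struct hw_qdisc {
	struct hw_qdisc_counters base;	/* snapshot taken when offloaded */
};

static void hw_qdisc_update_stats(struct hw_qdisc *q,
				  const struct hw_qdisc_counters *now,
				  struct gnet_stats_basic_packed *bstats,
				  struct gnet_stats_queue *qstats)
{
	/* Cumulative stats: add only what accumulated since offload. */
	bstats->packets += now->tx_packets - q->base.tx_packets;
	bstats->bytes += now->tx_bytes - q->base.tx_bytes;
	qstats->drops += now->drops - q->base.drops;

	/* Momentary stats: overwrite with the current HW value instead
	 * of accumulating it.
	 */
	qstats->backlog = now->backlog;
}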
