Date:   Wed, 19 Jun 2019 16:52:02 -0400
From:   Joel Fernandes <joelaf@...gle.com>
To:     Saravana Kannan <saravanak@...gle.com>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Tri Vo <trong@...roid.com>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Sandeep Patil <sspatil@...roid.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Hridya Valsaraju <hridya@...gle.com>,
        Linux PM <linux-pm@...r.kernel.org>,
        "Cc: Android Kernel" <kernel-team@...roid.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Alexei Starovoitov <ast@...com>
Subject: Re: Alternatives to /sys/kernel/debug/wakeup_sources

On Wed, Jun 19, 2019 at 4:41 PM 'Saravana Kannan' via kernel-team
<kernel-team@...roid.com> wrote:
> > > On Wed, Jun 19, 2019, 11:55 AM 'Joel Fernandes' via kernel-team <kernel-team@...roid.com> wrote:
> > >>
> > >> On Wed, Jun 19, 2019 at 2:35 PM Greg Kroah-Hartman
> > >> <gregkh@...uxfoundation.org> wrote:
> > >> >
> > >> > On Wed, Jun 19, 2019 at 02:01:36PM -0400, Joel Fernandes wrote:
> > >> > > On Wed, Jun 19, 2019 at 1:07 PM Greg Kroah-Hartman
> > >> > > <gregkh@...uxfoundation.org> wrote:
> > >> > > >
> > >> > > > On Wed, Jun 19, 2019 at 12:53:12PM -0400, Joel Fernandes wrote:
> > >> > > > > > It is conceivable to have a "wakeup_sources" directory under
> > >> > > > > > /sys/power/ and sysfs nodes for all wakeup sources in there.
> > >> > > > >
> > >> > > > > One of the "issues" with this is that if you have, say, 100 wakeup
> > >> > > > > sources with 10 entries each, then we're talking about 1000 sysfs
> > >> > > > > files. Each one has to be opened and read individually. This adds
> > >> > > > > overhead, and it is more convenient to read from a single file. The
> > >> > > > > problem is that this single file is not ABI. So the question, I
> > >> > > > > guess, is how we solve this in an ABI-friendly way while keeping the
> > >> > > > > overhead low.
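
For illustration, a minimal userspace sketch of the single-file approach
being discussed: one open() on /sys/kernel/debug/wakeup_sources (the debugfs
table from the Subject line, which is not ABI) pulls in the stats for every
wakeup source at once. The buffer size here is an arbitrary choice for the
sketch.

/* Read the whole wakeup_sources table with a single open()/read()
 * loop, instead of one file per source per statistic. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        ssize_t n;
        int fd = open("/sys/kernel/debug/wakeup_sources", O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* One descriptor, sequential reads until EOF. */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, n, stdout);
        close(fd);
        return 0;
}
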
> > >> > > >
> > >> > > > How much overhead?  Have you measured it? Reading from virtual files
> > >> > > > is fast :)
> > >> > >
> > >> > > I measured, and it is definitely not free. If you create and read
> > >> > > 1000 files and just return a string back, it can take up to 11-13
> > >> > > milliseconds (I did not lock CPU frequencies; I was just looking for
> > >> > > an average ballpark). This assumes that the counter read is doing
> > >> > > just that and nothing else to return the sysfs data, which is
> > >> > > probably not always true in practice.
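
For context, a rough sketch of the kind of micro-benchmark implied by the
numbers above. The per-source path layout and the count of 1000 are
illustrative assumptions, not the actual test that was run.

/* Time opening and reading N small virtual files one by one.
 * The path pattern below is a hypothetical per-source sysfs layout. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NFILES 1000

int main(void)
{
        char path[256], buf[256];
        struct timespec t0, t1;
        long us;
        int i, fd;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < NFILES; i++) {
                snprintf(path, sizeof(path),
                         "/sys/power/wakeup_sources/ws%d/active_count", i);
                fd = open(path, O_RDONLY);
                if (fd < 0)
                        continue;
                read(fd, buf, sizeof(buf));
                close(fd);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        us = (t1.tv_sec - t0.tv_sec) * 1000000L +
             (t1.tv_nsec - t0.tv_nsec) / 1000L;
        printf("%ld us to open and read %d files\n", us, NFILES);
        return 0;
}
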
> > >> > >
> > >> > > Our display pipeline deadline is around 16ms at 60Hz. Conceivably, any
> > >> > > CPU scheduling competition from reading sysfs can hurt that deadline.
> > >> > > There's also the question of power - we have definitely spent time in
> > >> > > the past optimizing other virtual files, such as /proc/pid/smaps, for
> > >> > > this reason, because reading them burned a lot of CPU time.
> > >> >
> > >> > smaps was "odd", but that was done after measurements were actually made
> > >> > to prove it was needed.  That hasn't happened yet :)
> > >> >
> > >> > And is there a reason you have to do this every 16ms?
> > >>
> > >> Not every 16ms. I was just saying that whenever it happens and a frame
> > >> delivery deadline is missed, a frame drop can occur, which can result
> > >> in a poor user experience.
> > >
> > >
> > > But this is not done in the UI thread context. So some thread running for more than 16ms shouldn't cause a frame drop. If it does, we have bigger problems.
> > >
> >
> > Not really. That depends on the priority of the other thread, among
> > other things. It can obviously time-share the same CPU as the UI thread
> > if it is not configured correctly. Even with CFS it can reduce the time
> > available to other "real-time" CFS threads. I am not sure what you are
> > proposing; there are also (obviously) power issues with things running
> > for long periods pointlessly. We should try to do better if we can. As
> > Greg said, some study/research can be done on the use case before
> > settling on a solution (sysfs or otherwise).
> >
>
> Agreed, power and optimization are good. I'm just saying that the UI
> example is not a real one. If the UI thread is so poorly configured that
> some thread running for a second can cause frame drops on a multicore
> system, that's a problem with the UI framework design.

We do know that historically there are problems with the UI thread's
scheduling, and folks are looking into deadline (DL) scheduling for
that. I was just giving the UI thread as an example; there are other
low-latency threads as well (audio, etc.). Anyway, I think we know the
next steps here, so we can park this discussion for now.
