Message-ID: <YgKD6X9eiADTnvIi@alley>
Date:   Tue, 8 Feb 2022 15:53:29 +0100
From:   Petr Mladek <pmladek@...e.com>
To:     John Ogness <john.ogness@...utronix.de>
Cc:     Hillf Danton <hdanton@...a.com>, linux-kernel@...r.kernel.org,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH printk v1 10/13] printk: add kthread console printers

On Tue 2022-02-08 12:14:04, John Ogness wrote:
> On 2022-02-08, Hillf Danton <hdanton@...a.com> wrote:
> > On Mon,  7 Feb 2022 20:49:20 +0106 John Ogness wrote:
> >> Create a kthread for each console to perform console printing. During
> >> normal operation (@system_state == SYSTEM_RUNNING), the kthread
> >> printers are responsible for all printing on their respective
> >> consoles.
> >
> > The code becomes simpler if the kthread is replaced with a workqueue
> > work, given the system workers.
> 
> We do not want multiple console printers blocking each other. We also
> would not want the consoles blocking non-printk work items for extended
> periods of time. So we would likely need a dedicated worker per
> console. I'm not convinced the code would look simpler just by changing
> the deferred work API. But perhaps the kernel community wants to get
> away from manual kthread management in general. (??)
>
> I am now pondering if there is some technical advantages to
> workqueues. I suppose with dedicated workers, you could still easily
> adjust CPU affinities and scheduling policies/priorities from userspace,
> like with kthreads. But is there some advantage over kthreads?

It is true that a custom kthread implementation is a bit tricky.
In particular, it is easy to create races. Also, such kthreads are
often not optimal from a resource point of view.

But I do not see any big advantage in this case. It would likely
bring more problems than it would solve.

Note that many people rely on debugging the kernel via printk() and
consoles. Non-working consoles turn the system into a black box.

My concerns are:

  + The workqueue code is very tricky and a lot of operations require
    pool->lock. There is even a hook from the scheduler. It would be
    yet another hard-to-debug layer when messages do not reach the
    console, another source of problems when handling consoles during
    panic, and another source of recursive messages.

  + As John already suggested, we would need rescuer kthreads.
    That means it would not save any resources. Also, rescuers are
    woken via the last idle worker and a timer, using yet more
    tricky code.

  + There might be real problems getting messages out on an
    overloaded system. Workqueues have very limited support for
    dealing with this. There are basically only a few options:
    per-CPU workers vs. workers limited by a CPU mask, and normal
    vs. high-priority workers (nothing in between). The priority
    cannot be modified at runtime. See the sketch below for roughly
    the only available knobs.
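
For illustration, a dedicated printing workqueue would have roughly
only these knobs (a minimal sketch, nothing from the patch set;
"printing_wq" and the setup function are made-up names):

#include <linux/workqueue.h>

/* Hypothetical dedicated workqueue for console printing. */
static struct workqueue_struct *printing_wq;

static int printing_wq_setup(void)
{
	/*
	 * The only high-level choices, fixed at allocation time:
	 * WQ_UNBOUND (workers limited by a CPU mask) vs. the default
	 * per-CPU pool, and WQ_HIGHPRI vs. normal priority.
	 */
	printing_wq = alloc_workqueue("printing_wq",
				      WQ_UNBOUND | WQ_HIGHPRI, 0);
	if (!printing_wq)
		return -ENOMEM;
	return 0;
}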


A more realistic approach would be to use the kthread_worker API.
It was designed for users that need dedicated kthreads with a custom
scheduling policy and priority. It provides a generic loop. The work
items are queued the same way as in the workqueue API.
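
A console printer based on it could be set up roughly like this
(a minimal sketch with made-up names, not code from the patch set):

#include <linux/kthread.h>
#include <linux/sched.h>

/* Hypothetical dedicated worker for one console. */
static struct kthread_worker *printing_worker;
static struct kthread_work printing_work;

static void printing_work_func(struct kthread_work *work)
{
	/* Flush pending records to the console here. */
}

static int printing_worker_setup(void)
{
	printing_worker = kthread_create_worker(0, "pr/example");
	if (IS_ERR(printing_worker))
		return PTR_ERR(printing_worker);

	/* A custom scheduling policy/priority is possible here,
	 * e.g. make the printer nice -20. */
	sched_set_normal(printing_worker->task, -20);

	kthread_init_work(&printing_work, printing_work_func);
	kthread_queue_work(printing_worker, &printing_work);
	return 0;
}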

My main concern is that the kthread_worker API still uses an internal
lock. And it is yet another layer that might be hard to debug when
printk() does not work.

I would prefer using a custom kthread for now. The main reason
is that it is woken using wake_up_process(). We will have to deal
with problems very similar to those with console_lock (semaphore
locking), which uses wake_up_process() as well.
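
The printer loop and the wake-up path would then look roughly like
this (again only a sketch with made-up helper names, not the actual
patch):

#include <linux/kthread.h>
#include <linux/sched.h>

/* Hypothetical stand-ins for the real ringbuffer checks. */
static bool records_pending(void) { return false; }
static void print_pending_records(void) { }

static int printk_kthread_func(void *data)
{
	while (!kthread_should_stop()) {
		/* Re-check for pending records after changing the task
		 * state so that a concurrent wake-up is not missed. */
		set_current_state(TASK_INTERRUPTIBLE);
		if (!records_pending())
			schedule();
		__set_current_state(TASK_RUNNING);

		print_pending_records();
	}
	return 0;
}

/*
 * printk() side: a plain wake_up_process(), the same primitive that
 * the console_lock semaphore code relies on.
 */
static void wake_printer(struct task_struct *printer)
{
	wake_up_process(printer);
}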

Best Regards,
Petr
