Message-ID: <20180110163134.GG13338@ZenIV.linux.org.uk>
Date:   Wed, 10 Jan 2018 16:31:35 +0000
From:   Al Viro <viro@...IV.linux.org.uk>
To:     Christoph Hellwig <hch@....de>
Cc:     Avi Kivity <avi@...lladb.com>, linux-aio@...ck.org,
        linux-fsdevel@...r.kernel.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/31] fs: introduce new ->get_poll_head and ->poll_mask
 methods

On Mon, Jan 08, 2018 at 11:45:13AM +0100, Christoph Hellwig wrote:
> On Sat, Jan 06, 2018 at 07:12:42PM +0000, Al Viro wrote:
> > On Thu, Jan 04, 2018 at 09:00:15AM +0100, Christoph Hellwig wrote:
> > > ->get_poll_head returns the waitqueue that the poll operation is going
> > > to sleep on.  Note that this means we can only use a single waitqueue
> > > for the poll, unlike some current drivers that use two waitqueues for
> > > different events.  But now that we have keyed wakeups and heavily use
> > > those for poll there aren't that many good reasons left to keep the
> > > multiple waitqueues, and if there are any, ->poll is still around; the
> > > driver just won't support aio poll.
> > 
> > *UGH*
> > 
> > Gotta love the optimism, but have you actually done the conversion?
> > I'm particularly suspicious about the locking rules here...
> 
> I've done just about everything but random drivers - which means the ones
> where people care about performance, and thus about aio poll, are covered.
> I suspect that we will have various odd cruft drivers that will be left alone.

*snort*

Seeing that random drivers are, by far, the majority of instances...
What I wonder is how many of them conform to that pattern and how
many can be massaged to that form.

How painful would it be to convert drivers/char/random.c to that form,
to pick an instance with more than one wait queue involved?
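
For reference, here is a minimal sketch of what the proposed pair of
methods looks like for the easy, single-queue case.  The method names
come from the patch; everything else (foo_dev, its fields, its lock,
the exact argument/return types) is invented for illustration and may
not match what the series actually merges.  random.c would first have
to fold its separate read-side and write-side queues into one before
it fits this shape.

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/poll.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/* Hypothetical single-queue driver; all identifiers below are made up. */
struct foo_dev {
	struct wait_queue_head	wq;	/* the one queue poll sleeps on */
	spinlock_t		lock;
	bool			have_data;
	bool			have_space;
};

static struct wait_queue_head *foo_get_poll_head(struct file *file,
						 unsigned int events)
{
	struct foo_dev *foo = file->private_data;

	/* hand the core the single queue it should sleep on */
	return &foo->wq;
}

static unsigned int foo_poll_mask(struct file *file, unsigned int events)
{
	struct foo_dev *foo = file->private_data;
	unsigned int mask = 0;
	unsigned long flags;

	/* may be called after a wakeup, so no sleeping locks in here */
	spin_lock_irqsave(&foo->lock, flags);
	if (foo->have_data)
		mask |= POLLIN | POLLRDNORM;
	if (foo->have_space)
		mask |= POLLOUT | POLLWRNORM;
	spin_unlock_irqrestore(&foo->lock, flags);

	return mask;
}

static const struct file_operations foo_fops = {
	.owner		= THIS_MODULE,
	.get_poll_head	= foo_get_poll_head,
	.poll_mask	= foo_poll_mask,
};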

FWIW, I agree that it's very common.  Which makes the departures from
that pattern worth looking into - they might be buggy.  And "more than
one queue to wait on" is not all - there's also e.g.
static unsigned int vtpm_proxy_fops_poll(struct file *filp, poll_table *wait)
{
        struct proxy_dev *proxy_dev = filp->private_data;
        unsigned ret;

        poll_wait(filp, &proxy_dev->wq, wait);

        ret = POLLOUT;

        mutex_lock(&proxy_dev->buf_lock);

        if (proxy_dev->req_len)
                ret |= POLLIN | POLLRDNORM;

        if (!(proxy_dev->state & STATE_OPENED_FLAG))
                ret |= POLLHUP;

        mutex_unlock(&proxy_dev->buf_lock);

        return ret;
} 
(mainline drivers/char/tpm/tpm_vtpm_proxy.c)

Is that mutex_lock() in there a bug?  Another fun case is dma_buf_poll()...
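
For comparison, one way that readiness check could be written in the
new ->poll_mask style without a sleeping lock - purely illustrative,
not a proposed conversion of tpm_vtpm_proxy.c; whether dropping the
mutex like this is actually safe depends on what buf_lock protects,
and the argument/return types are assumed rather than taken from the
series:

static unsigned int vtpm_proxy_fops_poll_mask(struct file *filp,
					      unsigned int events)
{
	struct proxy_dev *proxy_dev = filp->private_data;
	unsigned int ret = POLLOUT;

	/* no poll_wait() and no mutex: this may run from a wakeup */
	if (READ_ONCE(proxy_dev->req_len))
		ret |= POLLIN | POLLRDNORM;

	if (!(READ_ONCE(proxy_dev->state) & STATE_OPENED_FLAG))
		ret |= POLLHUP;

	return ret;
}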

The reason I went looking at ->poll() in the first place was the
recurring bugs in the instances; e.g. "oh, I've got something odd,
let's return -Esomething".  Or "was it POLLIN or POLL_IN?"
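
To spell out those two recurring patterns (the snippet below is a
made-up driver, not something from the tree): POLL_IN and friends are
SIGPOLL si_code constants from the siginfo headers, not poll(2) event
bits, and a negative errno returned from ->poll() is never treated as
an error by the callers - it just becomes a nonsense event mask.

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/signal.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);	/* hypothetical device queue */
static bool demo_dead;				/* hypothetical error state */

static unsigned int buggy_poll(struct file *file, poll_table *wait)
{
	poll_wait(file, &demo_wq, wait);

	if (demo_dead)
		return -EIO;	/* bug: ->poll() returns an event mask, so
				 * the negative value just shows up as a mask
				 * with almost every bit set */

	/* bug: these are si_code values, not POLLIN | POLLOUT; it compiles
	 * because both are plain integers, but the reported mask is wrong
	 * (or right only by accident) */
	return POLL_IN | POLL_OUT;
}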

I'd really like to get the interfaces right and get rid of the bitrot
source; turning it into a moldering corpse in the corner is fine, as
long as we have a realistic chance of getting rid of that body in the
not too distant future...
