Message-ID: <79074EDC-A60E-45CA-968C-027E194F7927@intel.com>
Date: Sat, 17 Mar 2012 09:05:55 +0800
From: Alek Du <alek.du@...el.com>
To: Jiri Slaby <jslaby@...e.cz>
CC: Jiri Slaby <jirislaby@...il.com>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
"Tu, Xiaobing" <xiaobing.tu@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"Zhang, Yanmin" <yanmin.zhang@...el.com>,
"Zuo, Jiao" <jiao.zuo@...el.com>
Subject: Re: [PATCH] tty: hold lock across tty buffer finding and buffer filling
> On 03/16/2012 02:50 PM, Du, Alek wrote:
>> For #1, as I said, we already held the tty refcount. The tty refcount cannot help this crash.
>
> Where exactly? Do you have some local changes?
> $ grep tty_port_tty_ drivers/tty/serial/mfd.c
> $
Yes, we do have local changes that are not upstream yet. But as you can see now, the TTY refcount is not the point here; the real problem is the hole in the spin lock coverage.
>
>> For #2, this patch will avoid memcpy to a NULL pointer and avoid a kernel crash.
>
> Ok, could you explain how? Linearized code from moxa.c with your patch
> applied follows:
> ofs = baseAddr + DynPage_addr + bufhead + head;
> size = (tail >= head) ? (tail - head) : (rx_mask + 1 - head);
> size = min(size, count);
> // call to tty_prepare_flip_string
> spin_lock_irqsave(&tty->buf.lock, flags);
> space = __tty_buffer_request_room(tty, size);
>
> tb = tty->buf.tail;
> 	if (likely(space)) {
> 		*chars = tb->char_buf_ptr + tb->used;
> 		memset(tb->flag_buf_ptr + tb->used, TTY_NORMAL, space);
> 		tb->used += space;
> 	}
> spin_unlock_irqrestore(&tty->buf.lock, flags);
> // return to MoxaPortReadData
> memcpy_fromio(*chars, ofs, space); <- memcpy without buf.lock
OK, that is true. It seems we need another patch to remove this prepare interface and convert its callers to the protected interface instead.
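Roughly something like this (untested sketch; the helper name, the chunk size and the on-stack bounce buffer are only illustrative, assuming the current tty_insert_flip_string()/tty_flip_buffer_push() interfaces):

	#include <linux/io.h>
	#include <linux/tty.h>
	#include <linux/tty_flip.h>

	/*
	 * Copy "size" bytes of card memory at "ofs" into the tty flip buffer
	 * without ever exposing a flip-buffer pointer outside buf.lock: the
	 * MMIO copy goes into a private bounce buffer, and
	 * tty_insert_flip_string() then copies that into the flip buffer
	 * under its own locking.
	 */
	static void moxa_read_chunk(struct tty_struct *tty, void __iomem *ofs,
				    size_t size)
	{
		unsigned char bounce[64];	/* illustrative chunk size */

		while (size) {
			size_t len = min(size, sizeof(bounce));

			memcpy_fromio(bounce, ofs, len);
			tty_insert_flip_string(tty, bounce, len);

			ofs += len;
			size -= len;
		}
		tty_flip_buffer_push(tty);
	}

The size/ofs computation would stay exactly as in your walkthrough above; the only change is that the driver owns the intermediate buffer instead of writing directly into tty->buf.tail.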
Thanks,
Alek
> head = (head + len) & rx_mask;
> ...
>
> thanks,
> --
> js
> suse labs