Message-ID: <616c8221-fc6e-1b73-626c-3427c87dddbf@linux.intel.com>
Date: Sat, 25 Jun 2022 00:09:33 +0300 (EEST)
From: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
To: Jiri Slaby <jirislaby@...nel.org>
cc: linux-serial <linux-serial@...r.kernel.org>,
Greg KH <gregkh@...uxfoundation.org>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
William Hubbs <w.d.hubbs@...il.com>,
Chris Brannon <chris@...-brannons.com>,
Kirk Reiser <kirk@...sers.ca>,
Samuel Thibault <samuel.thibault@...-lyon.org>,
"David S. Miller" <davem@...emloft.net>,
linux-mips@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
speakup@...ux-speakup.org, sparclinux@...r.kernel.org
Subject: Re: [PATCH v2 6/6] serial: Consolidate BOTH_EMPTY use
On Thu, 23 Jun 2022, Jiri Slaby wrote:
> > --- a/arch/mips/ath79/early_printk.c
> > +++ b/arch/mips/ath79/early_printk.c
> > @@ -29,15 +30,15 @@ static inline void prom_putchar_wait(void __iomem *reg, u32 mask, u32 val)
> > } while (1);
> > }
> > -#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
> > -
> > static void prom_putchar_ar71xx(char ch)
> > {
> > void __iomem *base = (void __iomem *)(KSEG1ADDR(AR71XX_UART_BASE));
> > - prom_putchar_wait(base + UART_LSR * 4, BOTH_EMPTY, BOTH_EMPTY);
> > + prom_putchar_wait(base + UART_LSR * 4, UART_LSR_BOTH_EMPTY,
> > + UART_LSR_BOTH_EMPTY);
> > __raw_writel((unsigned char)ch, base + UART_TX * 4);
> > - prom_putchar_wait(base + UART_LSR * 4, BOTH_EMPTY, BOTH_EMPTY);
> > + prom_putchar_wait(base + UART_LSR * 4, UART_LSR_BOTH_EMPTY,
> > + UART_LSR_BOTH_EMPTY);
>
> Two observations apart from this patch:
> * prom_putchar_wait()'s last two parameters are always the same.
> One should be removed, i.e. all this simplified.
I noticed this myself, but I'm also looking into generalizing the wait
for tx empty somehow, if possible (it might not help much here, though,
as this seems to be on the "early" side of things).
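
Just to illustrate the first point, dropping the redundant parameter
could look roughly like this (untested sketch; in the BOTH_EMPTY case
the value compared against is always the mask itself):

static inline void prom_putchar_wait(void __iomem *reg, u32 mask)
{
	u32 t;

	do {
		t = __raw_readl(reg);
		if ((t & mask) == mask)
			break;
	} while (1);
}

The callers would then pass UART_LSR_BOTH_EMPTY only once.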
--
i.
> * prom_putchar_wait() should be implemented using
> read_poll_timeout_atomic(), incl. failure/timeout handling.
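
If read_poll_timeout_atomic() is usable this early in boot, a minimal
sketch (the delay/timeout values below are made up, and it would need
<linux/iopoll.h>) might be:

static inline int prom_putchar_wait(void __iomem *reg, u32 mask)
{
	u32 t;

	/* poll roughly every 1 us, give up after 10 ms */
	return read_poll_timeout_atomic(__raw_readl, t, (t & mask) == mask,
					1, 10000, 0, reg);
}

with the callers then deciding what to do when this returns -ETIMEDOUT.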
>
> thanks,
>