Message-ID: <20220910180917.plhtjt3lp7b6wlb5@mobilestation>
Date: Sat, 10 Sep 2022 21:09:17 +0300
From: Serge Semin <fancer.lancer@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: Serge Semin <Sergey.Semin@...kalelectronics.ru>,
Jonathan Derrick <jonathan.derrick@...el.com>,
Revanth Rajashekar <revanth.rajashekar@...el.com>,
Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Jens Axboe <axboe@...com>, Sagi Grimberg <sagi@...mberg.me>,
Guenter Roeck <linux@...ck-us.net>,
Alexey Malahov <Alexey.Malahov@...kalelectronics.ru>,
Pavel Parkhomenko <Pavel.Parkhomenko@...kalelectronics.ru>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] nvme-hwmon: Cache-line-align the NVME SMART
log-buffer
On Sat, Sep 10, 2022 at 03:35:45PM +0300, Serge Semin wrote:
> On Sat, Sep 10, 2022 at 07:30:45AM +0200, Christoph Hellwig wrote:
> > I think this will work, but unless we have to I'd generally prefer
> > to just split data that is DMAed into a separate allocation.
> > That is, do a separate kmalloc for the nvme_smart_log structure.
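Just to make sure I read the suggestion right, that would be roughly
the following. An untested sketch only, assuming I recall the current
drivers/nvme/host/hwmon.c layout right and with the error unwinding
of the rest of the init path trimmed:

struct nvme_hwmon_data {
        struct nvme_ctrl *ctrl;
        struct nvme_smart_log *log;     /* DMAed buffer, allocated separately */
        struct mutex read_lock;
};

and then in nvme_hwmon_init():

        data = kzalloc(sizeof(*data), GFP_KERNEL);
        if (!data)
                return -ENOMEM;

        /* second allocation just for the SMART log the device DMAs into */
        data->log = kzalloc(sizeof(*data->log), GFP_KERNEL);
        if (!data->log) {
                kfree(data);
                return -ENOMEM;
        }
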
>
> Well, both approaches will solve the denoted problem. I am just
> wondering why you think the kmalloc-ed buffer is preferable. IMO it
> is a bit less suitable since it increases the memory
> granularity - two kmalloc's instead of one. Moreover it makes the code
^
`-- I meant fragmentation of course...
> a bit more complex for the same reason: two allocations and two
> frees. Meanwhile, using the ____cacheline_aligned qualifier to
> prevent the noncoherent DMA problem is a standard approach.
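To spell out what I mean by the standard approach, here is a sketch
rather than a quote of my patch, with the DMAed member kept last so
nothing else in the structure shares its cache lines:

struct nvme_hwmon_data {
        struct nvme_ctrl *ctrl;
        struct mutex read_lock;
        struct nvme_smart_log log ____cacheline_aligned;
};
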
>
> The best solution would be if we had a qualifier like this:
> #ifdef CONFIG_DMA_NONCOHERENT
> #define ____dma_buffer ____cacheline_aligned
> #else
> #define ____dma_buffer
> #endif
> and used it instead of ____cacheline_aligned directly.
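With that the member declaration in nvme_hwmon_data would simply read
(hypothetical, since no such qualifier exists at the moment):

        struct nvme_smart_log log ____dma_buffer;

so the intent is documented right at the declaration site, while the
annotation costs nothing on cache-coherent configurations.
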
>
> -Sergey
>
> >
> > Guenter, is this ok with you?