Message-ID: <20190115130536.GA28364@lst.de>
Date: Tue, 15 Jan 2019 14:05:36 +0100
From: Christoph Hellwig <hch@....de>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Joerg Roedel <joro@...tes.org>,
Jason Wang <jasowang@...hat.com>, Jens Axboe <axboe@...nel.dk>,
virtualization@...ts.linux-foundation.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
iommu@...ts.linux-foundation.org, jfehlig@...e.com,
jon.grimm@....com, brijesh.singh@....com, hch@....de,
Joerg Roedel <jroedel@...e.de>
Subject: Re: [PATCH 1/3] swiotlb: Export maximum allocation size

On Mon, Jan 14, 2019 at 04:59:27PM -0500, Michael S. Tsirkin wrote:
> On Mon, Jan 14, 2019 at 03:49:07PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 11, 2019 at 10:12:31AM +0100, Joerg Roedel wrote:
> > > On Thu, Jan 10, 2019 at 12:02:05PM -0500, Konrad Rzeszutek Wilk wrote:
> > > > Why not use swiotlb_nr_tbl? That is how the drivers/gpu/drm code uses it to
> > > > figure out whether it needs to limit the size of pages.
> > >
> > > That function just exports the overall size of the swiotlb aperture, no?
> > > What I need here is the maximum size for a single mapping.
> >
> > Yes. The other drivers just assumed that if SWIOTLB is in use they would use
> > the smaller size by default (that is, they knew about the limitation).
> >
> > But I agree it would be better to have something smarter - and also to convert
> > the DRM drivers to piggyback on this.
> >
> > Or alternatively we could make SWIOTLB handle bigger sizes.
>
>
> Just a thought: is it a good idea to teach blk_queue_max_segment_size
> to query the DMA size limit? That would also help us find other devices
> that are possibly missing this check.

Yes, we should, both for the existing DMA size limit communicated through
dma_parms, which is set by the driver, and for this new dma-ops exposed
one, which still needs to be added. I'm working on some preliminary patches
for the first part, as I think I introduced a bug related to that in the
SCSI layer in 5.0.
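
As a rough sketch of how the clamp could be wired up (not a patch;
dma_max_mapping_size() is an assumed name standing in for the new dma-ops
exposed limit, the other interfaces already exist):

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/*
 * Sketch only: clamp a queue's maximum segment size to what the DMA
 * layer can map in one go.  dma_get_max_seg_size() returns the
 * driver-set limit from dev->dma_parms; dma_max_mapping_size() stands
 * in for the new per-mapping limit discussed above (e.g. the swiotlb
 * bounce buffer segment size) and is assumed here.
 */
static void blk_queue_clamp_segment_to_dma(struct request_queue *q,
					   struct device *dev)
{
	unsigned int max_seg = dma_get_max_seg_size(dev);
	size_t max_map = dma_max_mapping_size(dev);

	if (max_map < max_seg)
		max_seg = max_map;
	blk_queue_max_segment_size(q, max_seg);
}

Drivers like virtio-blk, and the DRM users Konrad mentions, could then rely
on that instead of carrying their own swiotlb checks.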