Message-ID: <2342925.1742250364@warthog.procyon.org.uk>
Date: Mon, 17 Mar 2025 22:26:04 +0000
From: David Howells <dhowells@...hat.com>
To: Viacheslav Dubeyko <Slava.Dubeyko@....com>
Cc: dhowells@...hat.com, Alex Markuze <amarkuze@...hat.com>,
    "slava@...eyko.com" <slava@...eyko.com>,
    "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
    "idryomov@...il.com" <idryomov@...il.com>,
    "jlayton@...nel.org" <jlayton@...nel.org>,
    "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
    "ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>,
    "dongsheng.yang@...ystack.cn" <dongsheng.yang@...ystack.cn>,
    "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 11/35] ceph: Use ceph_databuf in DIO

Viacheslav Dubeyko <Slava.Dubeyko@....com> wrote:

> > +					    ITER_GET_BVECS_PAGES, &start);
> > +		if (bytes < 0) {
> > +			if (size == 0)
> > +				return bytes;
> > +			break;
> 
> I am slightly confused by the 'break;' here. Is there a loop around this?

Yes.  You need to look at the original code, as the enclosing while loop
didn't make it into the patch context. ;-)
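
To illustrate, the gathering loop in the bvec-gathering helper is roughly of
this shape (a sketch only; details of the bvec filling elided):

	while (iov_iter_count(iter)) {
		bytes = iov_iter_get_pages2(iter, pages, maxsize - size,
					    ITER_GET_BVECS_PAGES, &start);
		if (bytes < 0) {
			if (size == 0)
				return bytes;	/* Nothing gathered yet */
			break;	/* Partial result; keep what we have */
		}
		/* ... turn the extracted pages into bio_vecs ... */
		size += bytes;
	}

so the break just terminates the gathering loop early on a partial result.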

> > -	return size;
> > +	return 0;
> 
> Do we really need to return zero here? It looks to me that we calculated the
> size in order to return it here. Am I wrong?

The only caller cares just about whether an error is returned; it doesn't
actually use the size.  The size is stored in the databuf anyway.
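
Put another way, the call site reduces to an error check, something like this
(caller shape approximated):

	ret = iter_get_bvecs_alloc(iter, size, &dbuf, &num_pages);
	if (ret < 0)
		return ret;
	/* The gathered length is available from the databuf itself. */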

> > +		dbuf = ceph_databuf_req_alloc(npages, 0, GFP_KERNEL);
> 
> I am still confused by allocating npages of zero size. :)

That's not what it's saying.  It's allocating npages' worth of bio_vec[] and
not creating any bufferage.  The bio_vecs will be loaded from a DIO request.
As mentioned in a previous reply, it might be worth creating a separate
databuf API call for this case.
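
For instance (purely hypothetical name and shape), a thin wrapper could make
the intent explicit:

	/* Allocate a databuf with room for nvec bio_vecs but no buffer
	 * pages; the bio_vecs get filled in from the DIO request later.
	 */
	static struct ceph_databuf *ceph_databuf_dio_alloc(unsigned int nvec,
							   gfp_t gfp)
	{
		return ceph_databuf_req_alloc(nvec, 0, gfp);
	}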

> > -static void put_bvecs(struct bio_vec *bvecs, int num_bvecs, bool should_dirty)
> > +static void ceph_dirty_pages(struct ceph_databuf *dbuf)
> 
> Does this mean that we never used the should_dirty argument with a false
> value? Or is the main goal of this method always to make the pages dirty?
> 
> >  {
> > +	struct bio_vec *bvec = dbuf->bvec;
> >  	int i;
> >  
> > -	for (i = 0; i < num_bvecs; i++) {
> > -		if (bvecs[i].bv_page) {
> > -			if (should_dirty)
> > -				set_page_dirty_lock(bvecs[i].bv_page);
> > -			put_page(bvecs[i].bv_page);
> 
> So, which code will call put_page() now?

The dirtying of pages is split from the putting of those pages.  The databuf
releaser puts the pages, but doesn't dirty them.  ceph_aio_complete_req()
needs to do that itself.  Netfslib does this on behalf of the filesystem and
switching to that will delegate the responsibility.
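
So, sketching it (field names approximate), the dirtying helper is just:

	static void ceph_dirty_pages(struct ceph_databuf *dbuf)
	{
		struct bio_vec *bvec = dbuf->bvec;
		int i;

		for (i = 0; i < dbuf->nr_bvec; i++)
			if (bvec[i].bv_page)
				set_page_dirty_lock(bvec[i].bv_page);
	}

and the corresponding put_page() calls happen when the databuf is released.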

Also in future, netfslib will handle putting the page refs or unpinning the
pages as appropriate - and ceph should not then take refs on those pages
(indeed, as struct page is disintegrated into different types such as folios,
there may not even *be* a ref counter on some of the pages).

David
