Message-ID: <CAOi1vP8EpxFdi_6Twad7wzCQnBLks0d6mbsrX=AVp64_AxR=OQ@mail.gmail.com>
Date: Thu, 4 Feb 2016 10:01:31 +0100
From: Ilya Dryomov <idryomov@...il.com>
To: Arnd Bergmann <arnd@...db.de>
Cc: "Yan, Zheng" <zyan@...hat.com>,
Deepa Dinamani <deepa.kernel@...il.com>,
Zheng Yan <ukernel@...il.com>, linux-fsdevel@...r.kernel.org,
y2038@...ts.linaro.org, Dave Chinner <david@...morbit.com>,
"Theodore Ts'o" <tytso@....edu>,
linux-kernel <linux-kernel@...r.kernel.org>,
Sage Weil <sage@...hat.com>,
ceph-devel <ceph-devel@...r.kernel.org>
Subject: Re: [PATCH 09/10] fs: ceph: Replace CURRENT_TIME by ktime_get_real_ts()
On Thu, Feb 4, 2016 at 9:30 AM, Arnd Bergmann <arnd@...db.de> wrote:
> On Thursday 04 February 2016 10:00:19 Yan, Zheng wrote:
>> > On Feb 4, 2016, at 05:27, Arnd Bergmann <arnd@...db.de> wrote:
>> {
>>         struct ceph_timespec ts;
>>
>>         ceph_encode_timespec(&ts, &req->r_stamp);
>>         ceph_encode_copy(&p, &ts, sizeof(ts));
>> }
>
> Ok, that does make the behavior consistent on all architectures, but
> leads to a different question:
>
> struct ceph_timespec {
>         __le32 tv_sec;
>         __le32 tv_nsec;
> } __attribute__ ((packed));
>
> How do you define ceph_timespec, is tv_sec supposed to be signed or unsigned?
>
> It seems that you treat it as signed, meaning you interpret times
> from the server as being in the [1902..2038] range, rather than the
> [1970..2106] range:
>
> static inline void ceph_decode_timespec(struct timespec *ts,
>                                         const struct ceph_timespec *tv)
> {
>         ts->tv_sec = (__kernel_time_t)le32_to_cpu(tv->tv_sec);
>         ts->tv_nsec = (long)le32_to_cpu(tv->tv_nsec);
> }
>
> Is that intentional and documented? If yes, what is your plan to deal
> with y2038 support?
tv_sec is used as a time_t, so signed. The problem is that ceph_timespec is
not only passed over the wire, but is also stored on disk as part of quite a
few other data structures. The plan is to eventually switch to a 64-bit tv_sec
and tv_nsec, bump the version on all the structures that contain it and add
a cluster-wide feature bit to deal with older clients. We've recently had
a discussion about this, so it may even happen in the not too distant future,
but no promises ;)
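To make the idea concrete, here is a rough sketch of what a widened,
feature-gated encoding could look like (the struct layout, the
CEPH_FEATURE_TIMESPEC64 bit and the helper name are made up for illustration;
this is not the actual wire format or the final design):

#include <linux/ceph/decode.h>  /* ceph_encode_copy(), struct ceph_timespec */
#include <linux/time64.h>       /* struct timespec64 */

/* Hypothetical cluster-wide feature bit; the real name and bit number
 * would be assigned when the format change actually lands. */
#define CEPH_FEATURE_TIMESPEC64         (1ULL << 58)

/* Hypothetical widened on-wire/on-disk timespec. */
struct ceph_timespec64 {
        __le64 tv_sec;
        __le32 tv_nsec;
} __attribute__ ((packed));

static void encode_timespec_compat(void **p, const struct timespec64 *ts,
                                   u64 peer_features)
{
        if (peer_features & CEPH_FEATURE_TIMESPEC64) {
                /* new clients/servers: full 64-bit seconds */
                struct ceph_timespec64 v2 = {
                        .tv_sec  = cpu_to_le64(ts->tv_sec),
                        .tv_nsec = cpu_to_le32(ts->tv_nsec),
                };

                ceph_encode_copy(p, &v2, sizeof(v2));
        } else {
                /* legacy peers: truncate to the old 2x32-bit layout */
                struct ceph_timespec v1;

                v1.tv_sec  = cpu_to_le32((u32)ts->tv_sec);
                v1.tv_nsec = cpu_to_le32(ts->tv_nsec);
                ceph_encode_copy(p, &v1, sizeof(v1));
        }
}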
Thanks,
Ilya