Message-ID:
 <PUZPR04MB63162C265F7C0CB87E6457988187A@PUZPR04MB6316.apcprd04.prod.outlook.com>
Date: Tue, 6 Jan 2026 07:32:08 +0000
From: "Yuezhang.Mo@...y.com" <Yuezhang.Mo@...y.com>
To: Chi Zhiling <chizhiling@....com>
CC: "brauner@...nel.org" <brauner@...nel.org>,
        "chizhiling@...inos.cn"
	<chizhiling@...inos.cn>,
        "jack@...e.cz" <jack@...e.cz>,
        "linkinjeon@...nel.org" <linkinjeon@...nel.org>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "sj1557.seo@...sung.com" <sj1557.seo@...sung.com>,
        "viro@...iv.linux.org.uk"
	<viro@...iv.linux.org.uk>,
        "willy@...radead.org" <willy@...radead.org>
Subject: Re: [PATCH v1 9/9] exfat: support multi-cluster for exfat_get_cluster

> On 1/4/26 15:56, Yuezhang.Mo@...y.com wrote:
>>> On 12/30/25 17:06, Yuezhang.Mo@...y.com wrote:
>>>>> +     /*
>>>>> +      * Return on cache hit to keep the code simple.
>>>>> +      */
>>>>> +     if (fclus == cluster) {
>>>>> +             *count = cid.fcluster + cid.nr_contig - fclus + 1;
>>>>>                 return 0;
>>>>
>>>> If 'cid.fcluster + cid.nr_contig - fclus + 1 < *count', how about continuing to collect clusters?
>>>> The following clusters may be contiguous.
>>>
>>> I'm glad you noticed this detail. It needs to be explained and reflected
>>> in the code comments.
>>>
>>> The main reason I didn't continue collecting is that the subsequent
>>> clusters might also exist in the cache. Confirming that would require
>>> searching the cache again, which might introduce additional performance
>>> overhead.
>>>
>>> I think we can continue to collect, but we need to check the cache
>>> before doing so.
>>>
>>
>> So we also need to check the cache in the following code, right?
>
> Uh, I don't think it's necessary here, because these clusters won't
> exist in the cache.
> 
> In cache_lru, every exfat_cache entry starts at a non-contiguous cluster.
> This is because exfat_get_cluster adds contiguous clusters to the cache
> from left to right, so the left edge of every cache entry is always a
> non-contiguous boundary.
> 
> For instance, if a file contains two extents, [0,30] and [31,60], then
> an exfat_cache entry must start at either 0 or 31, right?
> 
> When we have found a cache entry [31, 45], there won't be a [41, 60]
> entry in the cache_lru.
> 
> So once we have already got the head clusters of a contiguous extent
> from the cache, its tail clusters will definitely not be present in the cache.
> 

Your understanding is correct, so if the cache only includes a part of the
requested clusters, we can continue to collect subsequent clusters.
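For example, something roughly like this (untested sketch; I'm assuming
cid.dcluster holds the start dcluster of the cache entry, as in struct
exfat_cache_id) instead of returning on every cache hit:

```
	if (fclus == cluster) {
		unsigned int tail = cid.fcluster + cid.nr_contig;

		*count = tail - fclus + 1;
		if (tail >= end)
			return 0;

		/*
		 * Partial hit: the clusters behind 'tail' cannot be in the
		 * cache, so fall through and keep walking the FAT from
		 * 'tail' instead of returning here.
		 */
		fclus = tail;
		*dclus = cid.dcluster + cid.nr_contig;
	}
```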

> 
> ---
> 
> Here are some modifications regarding this patch (which may be reflected
> in the V2 version). Do you have any thoughts or suggestions on this?
> 

Although these modifications make exfat_cache_lookup scan the entire
cache every time, I think they are reasonable since the cache is small
(EXFAT_MAX_CACHE is 16).

> 
> diff --git a/fs/exfat/cache.c b/fs/exfat/cache.c
> index 1ec531859944..8ff416beea3c 100644
> --- a/fs/exfat/cache.c
> +++ b/fs/exfat/cache.c
> @@ -80,6 +80,10 @@ static inline void exfat_cache_update_lru(struct inode *inode,
>                  list_move(&cache->cache_list, &ei->cache_lru);
>   }
> 
> -static bool exfat_cache_lookup(struct inode *inode,
> +/*
> + * Return the fcluster of the nearest cache entry behind fclus, or
> + * EXFAT_EOF_CLUSTER if there is no such entry.
> + */
> +static unsigned int exfat_cache_lookup(struct inode *inode,
>                  unsigned int fclus, struct exfat_cache_id *cid,
>                  unsigned int *cached_fclus, unsigned int *cached_dclus)
> @@ -87,6 +91,7 @@ static bool exfat_cache_lookup(struct inode *inode,
>          struct exfat_inode_info *ei = EXFAT_I(inode);
>          static struct exfat_cache nohit = { .fcluster = 0, };
>          struct exfat_cache *hit = &nohit, *p;
> +       unsigned int next = EXFAT_EOF_CLUSTER;
>          unsigned int offset;
> 
>          spin_lock(&ei->cache_lru_lock);
> @@ -98,8 +103,9 @@ static bool exfat_cache_lookup(struct inode *inode,
>                                  offset = hit->nr_contig;
>                          } else {
>                                  offset = fclus - hit->fcluster;
> -                               break;
>                          }
> +               } else if (p->fcluster > fclus && p->fcluster < next) {
> +                       next = p->fcluster;
>                  }
>          }
>          if (hit != &nohit) {
> @@ -114,7 +120,7 @@ static bool exfat_cache_lookup(struct inode *inode,
>          }
>          spin_unlock(&ei->cache_lru_lock);
> 
> -       return hit != &nohit;
> +       return next;
>   }
> 
>   static struct exfat_cache *exfat_cache_merge(struct inode *inode,
> @@ -243,7 +249,7 @@ int exfat_get_cluster(struct inode *inode, unsigned int cluster,
>          struct exfat_inode_info *ei = EXFAT_I(inode);
>          struct buffer_head *bh = NULL;
>          struct exfat_cache_id cid;
> -       unsigned int content, fclus;
> +       unsigned int content, fclus, next;
>          unsigned int end = cluster + *count - 1;
> 
>          if (ei->start_clu == EXFAT_FREE_CLUSTER) {
> @@ -272,14 +278,15 @@ int exfat_get_cluster(struct inode *inode, unsigned int cluster,
>                  return 0;
> 
>          cache_init(&cid, fclus, *dclus);
> -       exfat_cache_lookup(inode, cluster, &cid, &fclus, dclus);
> +       next = exfat_cache_lookup(inode, cluster, &cid, &fclus, dclus);
> 
> -       /*
> -        * Return on cache hit to keep the code simple.
> -        */
>          if (fclus == cluster) {
> -               *count = cid.fcluster + cid.nr_contig - fclus + 1;

If the cache only includes a part of the requested clusters and the
subsequent clusters are not contiguous, '*count' still needs to be set as
above before returning.
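For example (untested, just to illustrate what I mean):

```
	if (fclus == cluster) {
		/* The cache includes all clusters requested */
		if (cid.fcluster + cid.nr_contig >= end)
			return 0;
		/*
		 * The next cache entry starts right behind this one, so the
		 * following cluster is not contiguous; return the partial count.
		 */
		if (next == cid.fcluster + cid.nr_contig + 1) {
			*count = cid.fcluster + cid.nr_contig - fclus + 1;
			return 0;
		}
	}
```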

> -               return 0;
> +               /* The cache includes all cluster requested */
> +               if (cid.fcluster + cid.nr_contig >= end)
> +                       return 0;
> +               /* No cache hole behind this cache */
> +               if (next == cid.fcluster + cid.nr_contig + 1)
> +                       return 0;
>          }
> 
>          /*
> 
> 
> Thanks,
> 
>>
>> ```
>>          /*
>>           * Collect the remaining clusters of this contiguous extent.
>>           */
>>          if (*dclus != EXFAT_EOF_CLUSTER) {
>>                  unsigned int clu = *dclus;
>>
>>                  /*
>>                   * Now the cid cache contains the first cluster requested.
>>                   * Advance fclus to the last cluster of the contiguous
>>                   * extent, then update the count and the cid cache accordingly.
>>                   */
>>                  while (fclus < end) {
>>                          if (exfat_ent_get(sb, clu, &content, &bh))
>>                                  goto err;
>>                          if (++clu != content) {
>>                                  /* TODO: read ahead if content valid */
>>                                  break;
>>                          }
>>                          fclus++;
>>                  }
>>                  cid.nr_contig = fclus - cid.fcluster;
>>                  *count = fclus - cluster + 1;
>> ```
>>
>>>>>
>>>>> +             while (fclus < end) {
>>>>> +                     if (exfat_ent_get(sb, clu, &content, &bh))
>>>>> +                             goto err;
>>>>> +                     if (++clu != content) {
>>>>> +                             /* TODO: read ahead if content valid */
>>>>> +                             break;
>>>>
>>>> The next cluster index has been read and will definitely be used.
>>>> How about adding it to the cache?
>>>
>>> Good idea!
>>> Will add it in v2.
