Is there a reasonable way to increase the file name limitation of 255 bytes?

It seems that the file name length limit is 255 "characters" on Windows (NTFS), but 255 "bytes" on Linux (ext4, Btrfs). I am not sure what text encoding those file systems use for file names, but if it is UTF-8, one Asian character, such as Japanese, can take 3 or more bytes. So for English, 255 bytes means 255 characters, but for Japanese, 255 bytes can mean far fewer characters, and this limit could be problematic in some cases.

Apart from methods that are practically impossible for a general user, such as modifying the Linux file system or kernel, is there any practical way to raise the limit so that I get a guaranteed 255-character file name capacity for Asian characters on Linux?

Asked By: Damn Vegetables


TL;DR: there is a way, but unless you’re a kernel hacker who knows C very well, there is no practical way.

Detailed answer:

While glibc defines #define FILENAME_MAX 4096 on Linux, which limits path length to 4096 bytes, there’s a hard 255-byte limit on file names in the Linux VFS which all filesystems must conform to. The said limit is defined in /usr/include/linux/limits.h:

/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */

#define NR_OPEN         1024

#define NGROUPS_MAX    65536    /* supplemental group IDs are available */
#define ARG_MAX       131072    /* # bytes of args + environ for exec() */
#define LINK_MAX         127    /* # links a file may have */
#define MAX_CANON        255    /* size of the canonical input queue */
#define MAX_INPUT        255    /* size of the type-ahead buffer */
#define NAME_MAX         255    /* # chars in a file name */
#define PATH_MAX        4096    /* # chars in a path name including nul */
#define PIPE_BUF        4096    /* # bytes in atomic write to a pipe */
#define XATTR_NAME_MAX   255    /* # chars in an extended attribute name */
#define XATTR_SIZE_MAX 65536    /* size of an extended attribute value (64k) */
#define XATTR_LIST_MAX 65536    /* size of extended attribute namelist (64k) */

#define RTSIG_MAX     32


And here’s a piece of code from linux/fs/libfs.c which will return an error in case you dare use a file name longer than 255 bytes:

/*
 * Lookup the data. This is trivial - if the dentry didn't already
 * exist, we know it is negative.  Set d_op to delete negative dentries.
 */
struct dentry *simple_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
{
    if (dentry->d_name.len > NAME_MAX)
        return ERR_PTR(-ENAMETOOLONG);
    if (!dentry->d_sb->s_d_op)
        d_set_d_op(dentry, &simple_dentry_operations);
    d_add(dentry, NULL);
    return NULL;
}

So, not only will you have to redefine this limit, you’ll also have to rewrite the filesystems’ source code (and their on-disk structures) to be able to use it. And then, outside of your device, you won’t be able to mount such a filesystem unless the other system also supports its extensions for storing very long file names (like FAT32 does).

Answered By: Artem S. Tashkinov

In many cases, the 255-byte limit is baked into the on-disk format; see for example Ext4 which only provides 8 bits to encode the name length. Thus, even if you could work around the kernel APIs’ limits, you wouldn’t be able to store anything longer than 255 bytes anyway.

You would therefore have to come up with a name storage extension (for example, VFAT-style using multiple directory entries to store names which are too long, or 4DOS-style using a separate file to store the long names), and then you’re effectively creating a new file system…

Answered By: Stephen Kitt