I’m running a fully updated Ubuntu 9.04 «Jaunty» i686 server. I have a single XFS filesystem on an LVM logical volume, /dev/mapper/vg0-bigthree.
If I boot to single user mode and ensure that the volume is unmounted, I still get the following every time I try to run xfs_check:
$ sudo xfs_check /dev/mapper/vg0-bigthree
xfs_check: /dev/mapper/vg0-bigthree contains a mounted and writable filesystem
fatal error -- couldn't initialize XFS library
Just to be thorough, before turning to xfs_check I first tried:
$ sudo fsck.xfs /dev/mapper/vg0-bigthree
If you wish to check the consistency of an XFS filesystem or
repair a damaged filesystem, see xfs_check(8) and xfs_repair(8).
Also, I can confirm that neither the volume’s device nor its mount point appears in the output of mount or in /etc/mtab.
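For the record, this is how I checked (neither command produces any output):
$ mount | grep bigthree
$ grep bigthree /etc/mtab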
asked Aug 12, 2010 at 18:28
This is how I got around it on my system. I saw the same issue as you when trying to run xfs_check, even though the filesystem was clearly unmounted. It appears that either autofs or NFS was still holding onto the filesystem; once those services were stopped, the check ran.
[root@openfiler ~]# xfs_check /dev/backup2/backup2
xfs_check: /dev/backup2/backup2 contains a mounted and writable filesystem
fatal error -- couldn't initialize XFS library
[root@openfiler ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc2 35775912 804200 33125044 3% /
/dev/sdc1 101086 14410 81457 16% /boot
tmpfs 512440 0 512440 0% /dev/shm
[root@openfiler ~]# cat /etc/mtab
/dev/sdc2 / ext3 rw 0 0
/proc /proc proc rw 0 0
/sys /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sdc1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/rpc_pipefs rpc_pipefs rw 0 0
automount(pid2644) /misc autofs rw,fd=4,pgrp=2644,minproto=2,maxproto=4 0 0
automount(pid2681) /net autofs rw,fd=4,pgrp=2681,minproto=2,maxproto=4 0 0
nfsd /proc/fs/nfsd nfsd rw 0 0
[root@openfiler ~]# service autofs stop
Stopping automount: [ OK ]
[root@openfiler ~]# cat /etc/mtab
/dev/sdc2 / ext3 rw 0 0
/proc /proc proc rw 0 0
/sys /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sdc1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/rpc_pipefs rpc_pipefs rw 0 0
nfsd /proc/fs/nfsd nfsd rw 0 0
[root@openfiler ~]# service nfs stop
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS services: [ OK ]
[root@openfiler ~]# cat /etc/mtab
/dev/sdc2 / ext3 rw 0 0
/proc /proc proc rw 0 0
/sys /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sdc1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/rpc_pipefs rpc_pipefs rw 0 0
nfsd /proc/fs/nfsd nfsd rw 0 0
[root@openfiler ~]# xfs_check /dev/backup2/backup2
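If stopping services wholesale is not an option, fuser or lsof can usually identify what is holding the device first (a sketch, not part of the original session):
fuser -vm /dev/backup2/backup2      # processes using the filesystem on this device
lsof +f -- /dev/backup2/backup2     # the same view via lsof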
answered Nov 1, 2011 at 2:16
Kevin
Try sudo strace -f -o /tmp/debugfile xfs_check /dev/mapper/vg0-bigthree and then grep open /tmp/debugfile to see what actually happens behind the scenes before xfs_check decides to throw out that error.
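If the full trace is too noisy, strace can also filter to the relevant syscalls (a sketch using standard strace flags; device path as in the question):
sudo strace -f -e trace=open,openat -o /tmp/debugfile xfs_check /dev/mapper/vg0-bigthree
grep open /tmp/debugfile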
answered Aug 12, 2010 at 18:50
XFS is a high-performance 64-bit journaling filesystem created by Silicon Graphics; support for it has been included in the Linux kernel since version 2.4.25. XFS is actively promoted in the Linux world, and some distributions, such as RedHat 7, CentOS 7 and Oracle Enterprise Linux 7, use it by default.
At one customer’s site, a server ran out of space on the root filesystem. An analysis of the space consumed by temporary files and logs showed that the problem had to be elsewhere. df -h / reported 18G of 18G used with about 20K free, while du -h / showed only about 5.5G in use. On traditional filesystems on unix-like OSes this behaviour can have two causes:
- A damaged filesystem. Fixing it requires a reboot so the filesystem’s consistency can be checked;
- Files were deleted while still open by some process. Without knowing the specifics of the software on the host and of the actions performed earlier, the simplest fix is again a reboot: the OS frees the space once the processes terminate (see the lsof sketch after this list).
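The second cause can actually be checked without rebooting, assuming lsof is available on the host:
# list open files whose on-disk link count is 0, i.e. deleted but still held open by a process
lsof +L1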
As is easy to see, the simple fix for either cause is a reboot, so that is what was decided for the host. To force a filesystem check at reboot on RedHat-like (and other) distributions, you can create a file named /forcefsck in the root directory or use the parameters of the systemd-fsck service.
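For example (both mechanisms are standard; /forcefsck is the traditional flag file and fsck.mode=force is documented in systemd-fsck(8)):
touch /forcefsck && reboot
# or append to the kernel command line in GRUB:
#   fsck.mode=force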
After creating /forcefsck and rebooting the system, the following entries were found in /var/log/messages:
Dec XX 11:26:44 hostname systemd: Starting File System Check on /dev/mapper/centos-root...
Dec XX 11:26:44 hostname systemd-fsck: /sbin/fsck.xfs: XFS file system.
Dec XX 11:26:44 hostname systemd: Started File System Check on /dev/mapper/centos-root.
From this it was assumed that the filesystem had been checked. But the free space still did not appear. It suddenly turned out that XFS is so much of a Good File System that it considers itself fully journaled and in no need of a consistency check at boot, and /sbin/fsck.xfs is nothing but this brilliant script:
#!/bin/sh -f
#
# Copyright (c) 2006 Silicon Graphics, Inc. All Rights Reserved.
#
AUTO=false
while getopts ":aApy" c
do
    case $c in
    a|A|p|y) AUTO=true;;
    esac
done
eval DEV=\${$#}
if [ ! -e $DEV ]; then
    echo "$0: $DEV does not exist"
    exit 8
fi
if $AUTO; then
    echo "$0: XFS file system."
else
    echo "If you wish to check the consistency of an XFS filesystem or"
    echo "repair a damaged filesystem, see xfs_repair(8)."
fi
exit 0
So the message XFS file system actually meant that no check had been performed at all, and xfs_repair must be run by hand. As expected, running the check on a live system is impossible:
xfs_repair: /dev/mapper/centos-root contains a mounted filesystem
xfs_repair: /dev/mapper/centos-root contains a mounted and writable filesystem
fatal error -- couldn't initialize XFS library
But even after rebooting into single user mode and remounting the root filesystem read-only, we get:
xfs_repair: /dev/mapper/centos-root contains a mounted filesystem
Unmount or use the dangerous (-d) option to repair a read-only mounted filesystem
fatal error -- couldn't initialize XFS library
Only after running xfs_repair -d /dev/mapper/centos-root does XFS finally check the consistency of the filesystem, find some number of lost files, move them to lost+found, and demand an immediate reboot. After rebooting and cleaning out lost+found, the free space finally appears.
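Put together, the working sequence looks roughly like this (a sketch; per xfs_repair(8), after repairing a read-only mounted filesystem with -d you should reboot immediately):
# from single user mode:
mount -o remount,ro /
xfs_repair -d /dev/mapper/centos-root
reboot -f    # reboot at once, before anything writes to the repaired filesystem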
It is worth noting that losing files on XFS after an unclean OS shutdown is not exactly a rare problem, and why no full consistency check of the filesystem is performed after such a shutdown is not entirely clear.
On my desktop I installed RedHat 7, and after a reboot it went into maintenance mode. After entering the password it does not allow a filesystem scan of root; every time it boots it enters maintenance mode and does not allow running the xfs_repair command, which shows an error because the filesystem is mounted.
# xfs_repair /dev/mapper/rhel-root
xfs_repair: /dev/mapper/rhel-root contains a mounted and writable filesystem.
fatal error -- couldn't initialize XFS library.
I tried executing the same command on other filesystems that are not mounted, and it works fine.
Please help me.
asked Nov 20, 2014 at 12:19
You can at least find out what’s wrong by running:
xfs_repair -n /dev/mapper/rhel-root
-n runs xfs_repair in no-modify mode.
If it’s complaining about it being mounted and writeable, you might want to try re-mounting it read-only (mount -r -o remount <device>), but this will probably just come back with ‘/dev/mapper/rhel-root is busy’.
I’d go with booting from a different medium and running xfs_repair from there.
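A sketch of that approach from a live/rescue environment (assuming the root LV sits on LVM as in the question, and the live image ships lvm2 and xfsprogs):
vgchange -ay                       # activate the LVM volume groups first
xfs_repair /dev/mapper/rhel-root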
answered Nov 27, 2014 at 6:48
Boot from some live medium and perform the xfs_repair from there.
answered Nov 20, 2014 at 12:23
Boot into emergency mode, in which the root filesystem is mounted read-only. For this, add
systemd.unit=emergency.target
to the kernel parameters in GRUB when booting. After this you would be able to run:
xfs_repair -d /dev/mapper/your-root-fs
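In practice that means interrupting GRUB, pressing e on the boot entry, and appending the unit to the line that starts with linux (or linux16); the kernel version string below is illustrative:
linux16 /vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/rhel-root ro systemd.unit=emergency.target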
answered Oct 17, 2018 at 10:12
Learn XFS filesystem commands to create, grow, and repair an XFS filesystem, along with command examples.

In our other article, we walked you through what XFS is, its features, and so on. In this article, we will look at some frequently used XFS administrative commands: how to create an XFS filesystem, how to grow it, how to repair it, and how to check it, along with command examples.
Create XFS filesystem
The mkfs.xfs command is used to create an XFS filesystem. Without any special switches, its output looks like the one below:
root@kerneltalks # mkfs.xfs /dev/xvdf
meta-data=/dev/xvdf isize=512 agcount=4, agsize=1310720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=5242880, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Note: once an XFS filesystem is created, it cannot be reduced. It can only be grown to a bigger size.
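mkfs.xfs also accepts tuning switches; for example (a sketch, the label is illustrative):
root@kerneltalks # mkfs.xfs -L data01 -b size=4096 /dev/xvdf
# -L sets a filesystem label, -b size= sets an explicit block size in bytes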
Resize XFS file system
In XFS, you can only extend the filesystem; you cannot reduce it. To grow an XFS filesystem, use xfs_growfs. Specify the new size with the -D switch, which takes the new total size in filesystem blocks. If you omit -D, xfs_growfs grows the filesystem to the maximum size available on that device.
root@kerneltalks # xfs_growfs /dev/xvdf -D 256
meta-data=/dev/xvdf isize=512 agcount=4, agsize=720896 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=2883584, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data size 256 too small, old size is 2883584
In the above output, observe the last line. Since I supplied a new size smaller than the existing one, xfs_growfs didn’t change the filesystem. This shows you cannot reduce an XFS filesystem; you can only extend it.
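If you do need a smaller filesystem, the usual route is dump, re-create, and restore (a sketch using xfsdump/xfsrestore from the xfsdump package; paths and labels are illustrative):
root@kerneltalks # xfsdump -L backup -M media1 -f /backup/xvdf.dump /shrikant
root@kerneltalks # umount /shrikant
root@kerneltalks # mkfs.xfs -f /dev/xvdf
root@kerneltalks # mount /dev/xvdf /shrikant
root@kerneltalks # xfsrestore -f /backup/xvdf.dump /shrikant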
root@kerneltalks # xfs_growfs /dev/xvdf -D 2883840
meta-data=/dev/xvdf isize=512 agcount=4, agsize=720896 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=2883584, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2883584 to 2883840
Now, I supplied a new size 256 blocks (1 MB) larger, and it successfully grew the filesystem.
1 MB block calculation:
The current filesystem has bsize=4096, i.e. a block size of 4 KB. To add 1 MB we need 256 more blocks (1 MB / 4 KB = 256). Adding 256 to the current block count of 2883584 gives 2883840, which is what I passed to the -D switch.
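The same calculation in shell arithmetic (numbers from the output above):
root@kerneltalks # echo $((2883584 + (1024 * 1024) / 4096))
2883840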
Repair XFS file system
Filesystem consistency checking and repair of XFS is performed with the xfs_repair command. Run it with the -n switch and it will not modify anything on the filesystem; it only scans and reports the modifications that would be made. Without -n, it modifies the filesystem wherever necessary to make it clean.
Please note that you need to unmount the XFS filesystem before you can run checks on it. Otherwise, you will see the error below.
root@kerneltalks # xfs_repair -n /dev/xvdf
xfs_repair: /dev/xvdf contains a mounted filesystem
xfs_repair: /dev/xvdf contains a mounted and writable filesystem
fatal error -- couldn't initialize XFS library
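Unmount it first (the mount point /shrikant is the one used later in this article; substitute your own):
root@kerneltalks # umount /shrikant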
Once the filesystem is successfully unmounted, you can run the command on it.
root@kerneltalks # xfs_repair -n /dev/xvdf
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
In the above output, you can see that in each phase the command reports the possible modifications that could be made to make the filesystem healthy. If you want the command to actually make those modifications during the scan, run it without any switch.
root@kerneltalks # xfs_repair /dev/xvdf
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
In the above output, you can observe that xfs_repair actually performs the filesystem modifications needed to make it healthy.
Check XFS version and details
Checking the XFS version and details requires the filesystem to be unmounted. Run the xfs_db command on the device path and, once at the xfs_db prompt, run the version command.
xfs_db is normally used for examining an XFS filesystem; its version command is used to enable features on the filesystem, and without any argument it prints the current version and feature bits.
root@kerneltalks # xfs_db /dev/xvdf1
xfs_db: /dev/xvdf1 contains a mounted filesystem
fatal error -- couldn't initialize XFS library
root@kerneltalks # umount /shrikant
root@kerneltalks # xfs_db /dev/xvdf1
xfs_db> version
versionnum [0xb4a5+0x18a] = V5,NLINK,DIRV2,ALIGN,LOGV2,EXTFLG,MOREBITS,ATTR2,LAZYSBCOUNT,PROJID32BIT,CRC,FTYPE
xfs_db> quit
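If you only need to inspect a mounted filesystem, xfs_db can open the device read-only with the -r switch instead of unmounting (same device as above):
root@kerneltalks # xfs_db -r /dev/xvdf1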
To view details of an XFS filesystem, such as the block size and number of blocks (which help when calculating the new block count for growing the filesystem), use xfs_info without any switch.
root@kerneltalks # xfs_info /shrikant
meta-data=/dev/xvdf isize=512 agcount=5, agsize=720896 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=2883840, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
It displays the same details that are shown when the XFS filesystem is created.
There are other XFS filesystem management commands that alter and manage its metadata; we will cover those in another article.
