
NFS Server Setup Notes

System environment

CentOS 6.5
iptables: off
SELinux: disabled

MBR partition tables only support disks up to 2 TB. /dev/sdb is a 10 TB disk, so it needs a GPT partition table, which means partitioning with the parted tool rather than fdisk.

Inspect the disks with fdisk

PS: another 10 TB disk was added after this point; the output captured here predates it, so that disk doesn't appear.
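A quick way to list the disks and their sizes from the shell (a minimal check; device names as in this environment):

# fdisk -l still reports the size of >2 TB disks, it just cannot partition them
fdisk -l /dev/sdb 2>/dev/null | grep '^Disk /dev'
# parted -s runs non-interactively and shows the partition table type
parted -s /dev/sdb print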

Partition with parted

[root@sg020 ~]# parted /dev/sdb 

GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 9896GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

(parted) mkpart primary 0 10T
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 9896GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 9896GB 9896GB primary

(parted) quit
Information: You may need to update /etc/fstab.
(parted) set 1 lvm on
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 9896GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 9896GB 9896GB ext4 primary lvm
(parted) quit
Information: You may need to update /etc/fstab.

[root@sg020 ~]# parted /dev/sdc

GNU Parted 2.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: /dev/sdc: unrecognised disk label
(parted) mklabel gpt
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdc: 11.0TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

(parted) mkpart primary 0 10995G
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdc: 11.0TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 11.0TB 11.0TB primary

(parted) set 1 lvm on
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdc: 11.0TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 17.4kB 11.0TB 11.0TB primary lvm

(parted) quit
Information: You may need to update /etc/fstab.
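The same partitioning can be scripted with parted's non-interactive mode. Using percentage bounds also lets parted pick properly aligned offsets, which avoids the alignment warning seen above (a sketch for an empty disk; mklabel destroys any existing partition table):

parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 0% 100%
parted -s /dev/sdc set 1 lvm on
parted -s /dev/sdc print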

LVM setup

Create the PVs

[root@sg020 ~]# pvcreate /dev/sdb1 /dev/sdc1 

dev_is_mpath: failed to get device for 8:17
Physical volume "/dev/sdb1" successfully created
dev_is_mpath: failed to get device for 8:33
Physical volume "/dev/sdc1" successfully created

[root@sg020 ~]# pvdisplay 

--- Physical volume ---
PV Name /dev/sda2
VG Name vg_sg020
PV Size 193.51 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 49538
Free PE 0
Allocated PE 49538
PV UUID BF6Ikw-eYtl-HfNM-Nnmt-WLK4-xb8O-KAnZtB

"/dev/sdb1" is a new physical volume of "9.00 TiB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 9.00 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID aPPVxz-oeWE-lP9l-AeZU-p699-pffd-O7T3dW

"/dev/sdc1" is a new physical volume of "10.00 TiB"
--- NEW Physical volume ---
PV Name /dev/sdc1
VG Name
PV Size 10.00 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID MWbqIl-WI66-JCsc-x0Re-UqjL-HgXB-Vw1jge
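pvs prints the same information as a compact one-line-per-PV table, which is handier for quick checks:

pvs -o pv_name,vg_name,pv_size,pv_free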

Create the VG

[root@sg020 ~]# vgcreate vg_fuz /dev/sdb1 /dev/sdc1 

Volume group "vg_fuz" successfully created

[root@sg020 ~]# vgdisplay 

--- Volume group ---
VG Name vg_fuz
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 19.00 TiB
PE Size 4.00 MiB
Total PE 4980734
Alloc PE / Size 0 / 0
Free PE / Size 4980734 / 19.00 TiB
VG UUID ruDcio-mmt5-z6YD-7HS0-fbst-GIlT-SwmvcF

--- Volume group ---
VG Name vg_sg020
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 193.51 GiB
PE Size 4.00 MiB
Total PE 49538
Alloc PE / Size 49538 / 193.51 GiB
Free PE / Size 0 / 0
VG UUID 2E6Hdt-xWhi-DyLX-pDva-3WTw-fDNx-Qytoqg

Create the LV

[root@sg020 ~]# lvcreate -n lv_fuz -l 4980734 vg_fuz

Logical volume "lv_fuz" created

[root@sg020 ~]# lvdisplay 
--- Logical volume ---
LV Path /dev/vg_fuz/lv_fuz
LV Name lv_fuz
VG Name vg_fuz
LV UUID JTUEVa-9JPb-0VaD-5JH8-fwov-MuUS-gp1Gdr
LV Write Access read/write
LV Creation host, time sg020, 2017-06-14 09:27:14 +0800
LV Status available
# open 0
LV Size 19.00 TiB
Current LE 4980734
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Path /dev/vg_sg020/lv_root
LV Name lv_root
VG Name vg_sg020
LV UUID 5yHVBf-Qt3u-fzb2-gwNL-opy3-0ewQ-0ypYsc
LV Write Access read/write
LV Creation host, time livedvd.centos, 2017-06-09 15:49:11 +0800
LV Status available
# open 1
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/vg_sg020/lv_home
LV Name lv_home
VG Name vg_sg020
LV UUID d1bsIq-ZeVr-mxk1-Mu2U-7H1R-wXMX-rWZ9hj
LV Write Access read/write
LV Creation host, time livedvd.centos, 2017-06-09 15:49:19 +0800
LV Status available
# open 1
LV Size 124.11 GiB
Current LE 31772
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Path /dev/vg_sg020/lv_swap
LV Name lv_swap
VG Name vg_sg020
LV UUID wGW4BB-KB5g-UUZ8-FwLQ-pTYy-Eoc2-ji102r
LV Write Access read/write
LV Creation host, time livedvd.centos, 2017-06-09 15:49:37 +0800
LV Status available
# open 1
LV Size 19.40 GiB
Current LE 4966
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

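Instead of looking up the exact PE count (4980734) in vgdisplay, lvcreate can be told to take all remaining space, which is equivalent here:

# allocate every free extent in the VG without counting PEs by hand
lvcreate -n lv_fuz -l 100%FREE vg_fuz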

Format the volume

[root@sg020 ~]# mkfs.ext4 /dev/vg_fuz/lv_fuz
mke2fs 1.41.12 (17-May-2010)
mkfs.ext4: Size of device /dev/vg_fuz/lv_fuz too big to be expressed in 32 bits
using a blocksize of 4096.

With a 4 KiB block size, ext4 uses 32-bit block numbers, which caps the filesystem at 2^32 × 4 KiB = 16 TiB; our 19 TiB volume exceeds that, so we use XFS instead.

[root@sg020 ~]# yum install xfsprogs
[root@sg020 ~]# mkfs.xfs /dev/vg_fuz/lv_fuz


Find the filesystem UUID

[root@sg020 ~]# blkid /dev/vg_fuz/lv_fuz 
/dev/vg_fuz/lv_fuz: UUID="3c2731e9-eac4-4e05-b103-be061e4c009c" TYPE="xfs"

Add it to fstab

[root@sg020 ~]# mkdir /data
[root@sg020 ~]# echo 'UUID=3c2731e9-eac4-4e05-b103-be061e4c009c /data xfs defaults 0 0' >> /etc/fstab

Mount the volume

[root@sg020 ~]# mount -a
[root@sg020 ~]# mount | column -t

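df confirms the size and filesystem type of the new mount:

df -hT /data    # -T adds the filesystem type column; should show xfs, ~19T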

Install and verify the NFS packages

[root@sg020 ~]# yum -y install nfs-utils rpcbind
[root@sg020 ~]# rpm -qa |grep nfs

nfs-utils-1.2.3-39.el6.x86_64
nfs-utils-lib-1.1.5-6.el6.x86_64
nfs4-acl-tools-0.3.3-6.el6.x86_64

[root@sg020 ~]# rpm -qa |grep rpcbind

rpcbind-0.2.0-11.el6.x86_64

Format of entries in /etc/exports:
<exported directory> [client1 options(access rights,user mapping,other)] [client2 options(access rights,user mapping,other)]
[root@sg020 ~]# cat /etc/exports 
/data 218.193.126.128/25(insecure,rw,async,no_root_squash) 59.77.252.0/26(insecure,rw,async,no_root_squash)
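What the options used here mean, per exports(5):

insecure        accept requests from client ports above 1023
rw              allow reads and writes
async           reply to writes before they reach stable storage (faster, but a server crash can lose in-flight data)
no_root_squash  do not map client root to the anonymous user, so root on the client is root on the export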
Commonly used NFS files and commands
/etc/exports          main configuration file of the NFS service
/usr/sbin/exportfs    management command for NFS exports
/usr/sbin/showmount   client-side query command (see the example below)
/var/lib/nfs/etab     records the full, effective option set of every exported directory
/var/lib/nfs/xtab     records clients that have mounted the exports
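For example, showmount from the list above lets a client verify what a server exports before mounting:

showmount -e sg020    # prints the export list of host sg020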

Start the services

[root@sg020 ~]# /etc/init.d/rpcbind start
Starting rpcbind: [ OK ]
[root@sg020 ~]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]
[root@sg020 ~]# chkconfig --level 35 rpcbind on
[root@sg020 ~]# chkconfig --level 35 nfs on
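rpcinfo can confirm that the NFS services registered themselves with rpcbind:

rpcinfo -p localhost | grep -E 'portmapper|mountd|nfs'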

Re-export all directories with verbose output

[root@sg020 ~]# exportfs -rv 
exporting 218.193.126.128/25:/data
exporting 59.77.252.0/26:/data

Unexport all shared directories

[root@sg020 ~]# exportfs -au

Mount on the client

[root@sg011 ~]# mount -t nfs sg020:/data/ /data
[root@sg011 ~]# echo "sg020:/data /data nfs nolock 0 0" >> /etc/fstab
[root@sg011 ~]# mount -a
[root@sg011 ~]# mount | column -t

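On the client, nfsstat -m shows the options the mount actually negotiated (NFS version, rsize/wsize, etc.), which helps when debugging:

nfsstat -m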

Users and groups showing as nobody

Since CentOS 6 defaults to NFSv4, ownership information on the server and the client can disagree after mounting: on the client, files show user and group as nobody or nfsnobody. NFSv4 transmits owners as user@domain strings, and idmapd can only map them back when both ends agree on the domain, so the fix is to set the same Domain on server and client:

Server side

Edit the configuration file

# vi /etc/idmapd.conf
Domain = hpcnfs

Restart the service

/etc/init.d/rpcidmapd restart 
chkconfig --level 35 rpcidmapd on

Client side

Edit the configuration file

# vi /etc/idmapd.conf
Domain = hpcnfs

Clear the ID-mapping cache

nfsidmap -c

Restart the service

/etc/init.d/rpcidmapd restart 
chkconfig --level 35 rpcidmapd on
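To verify the fix, create a file from the client (as a user that exists on both machines) and check that both ends now show real ownership:

touch /data/idmap_test
ls -l /data/idmap_test    # should show the real user/group, not nobody/nfsnobody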

Troubleshooting

After a server restart, if the client reports "Stale file handle", there are three ways to recover:

1. umount /data;mount -a
2. fuser -km /data; mount -a
3. umount -lv /data; mount -a
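Before resorting to options 2 and 3, fuser can show which processes are keeping the mount busy:

fuser -vm /data    # -v verbose, -m list processes using the mount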

Growing the volume with LVM hit "resize2fs: Device or resource busy while trying to open"; the fix is as follows.

Actually, LVM doesn't require partitions at all; the whole disk can be turned into a PV directly. When extending a VG you can be even lazier and skip pvcreate entirely, since vgextend initializes the PV for you:

vgextend vg_fuz /dev/sdc 	

When extending the LV, it is more convenient to grow by PEs than by size (-L takes a size, -l takes a PE count; the number of free PEs is shown by vgdisplay):

lvextend -l +3000 /dev/vg_fuz/lv_fuz 	

fdisk now shows that the capacity has changed, but df doesn't yet; for an ext filesystem you would resize it with resize2fs:

resize2fs /dev/vg_fuz/lv_fuz  

For an XFS filesystem resize2fs no longer applies; the newer command xfs_growfs is needed instead:

xfs_growfs /dev/vg_fuz/lv_fuz 
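Putting the pieces together, growing the XFS volume after adding a hypothetical new disk /dev/sdd looks like this (a sketch; xfs_growfs operates on the mounted filesystem, so the mount point is the conventional argument):

vgextend vg_fuz /dev/sdd                    # whole disk as PV, no pvcreate needed
lvextend -l +100%FREE /dev/vg_fuz/lv_fuz    # take all the newly added extents
xfs_growfs /data                            # grow the mounted XFS to fill the LV
df -h /data                                 # df now reflects the new size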

Since the whole environment is an internal network that can only be reached over VPN, I didn't configure iptables; firewall rules for NFS are fairly involved anyway, so I'll look into them later.
Also, I ran into an interesting situation today (really just a gap in my own fundamentals). Since NFS serves two subnets, cross-subnet access is involved. Going by the clients' networks (218.193.126.x and 59.77.252.x) and prefix lengths (/25 and /26) as configured on the servers, I wrote /data 218.193.126.0/25(insecure,rw,async,no_all_squash) 59.77.252.0/26(insecure,rw,async,no_all_squash). What followed was strange: the 59.x network mounted fine, but the 218.x network was refused ("mount.nfs: access denied by server while mounting sg020:/data/"), which baffled me, since exporting to the exact client IP worked. Someone in a chat group then pointed out the network/mask arithmetic: 218.193.126.0/25 only covers 218.193.126.0 through 218.193.126.127, and the machine's address fell outside that range; it belongs to 218.193.126.128/25, the export that now appears in /etc/exports above. Time to brush up on networking basics.
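A quick way to double-check which subnet an address belongs to is the ipcalc utility that ships with CentOS (the client address .200 below is a made-up placeholder):

# a /25 leaves 7 host bits, i.e. 128 addresses per subnet
ipcalc -n 218.193.126.200/25    # prints NETWORK=218.193.126.128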