If you have a Linux VM running in a KVM paravirtualized environment and it does not respond to a newly added virtual disk (no new disk shows up in dmesg and nothing comes up from fdisk -l), you will need to make sure the virtio_blk driver is in place.
[root@vm ~]# lsmod | grep -i virtio
virtio_net 15665 0
virtio_balloon 4281 0
virtio_blk 5087 3
virtio_pci 6733 0
virtio_ring 7169 4 virtio_net,virtio_balloon,virtio_blk,virtio_pci
virtio 4824 4 virtio_net,virtio_balloon,virtio_blk,virtio_pci
Indeed, you should be seeing /dev/vd* instead of /dev/sd* if you picked the correct OS type during VM initialization. If you are seeing /dev/sd*, you probably did not pick the correct OS type, which prevented the virtio drivers from being loaded.
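If virtio_blk does not show up in lsmod at all, it may be worth trying to load the module by hand and then checking dmesg for newly detected vd devices; a minimal check would be:
[root@vm ~]# modprobe virtio_blk
[root@vm ~]# dmesg | grep -i vd
With the driver loaded, fdisk should then report the disk as a /dev/vd* device: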
[root@vm ~]# fdisk -l /dev/vda
Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006fe0a
Device Boot Start End Blocks Id System
/dev/vda1 * 3 409 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2 409 41611 20765696 8e Linux LVM
Partition 2 does not end on cylinder boundary.
Thursday, December 8, 2011
Friday, December 2, 2011
Garbage / Missing fonts in RHEL Virt-manager
Virt-Manager is an X application capable of managing RHEL's Xen, QEMU/KVM and LXC services. Like other X applications, it can be launched remotely via X11 forwarding. However, garbage fonts may be shown if the host does not have the appropriate font package installed. To fix this, the package dejavu-lgc-sans-fonts has to be installed.
[root@server ~]# yum -y install dejavu-lgc-sans-fonts
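With the fonts in place, virt-manager can then be launched over an SSH session with X11 forwarding, for example (host names here are just placeholders):
[user@workstation ~]$ ssh -X root@server
[root@server ~]# virt-manager &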
Monday, November 21, 2011
Disable X windows on Ubuntu 10.04
In CentOS / RHEL, people would change the initdefault option in /etc/inittab to control whether X11 should be started. Ubuntu is totally different: there is no such inittab file. To stop Ubuntu from starting X11 (i.e. to behave more or less like runlevel 3 in CentOS), one has to modify /etc/init/gdm.conf and edit the line so that the gdm service is disabled on runlevel 2.
Here is the line:
# stop on runlevel [016] ## Original line: GDM is stopped only on runlevels 0, 1 and 6, so it still runs on Ubuntu's default runlevel 2.
stop on runlevel [0126] ## Edited line: GDM is also stopped on runlevel 2, Ubuntu's default runlevel, so it will not start on boot.
After that, a system reboot will bring your Ubuntu installation up in text mode.
Sunday, November 20, 2011
Recover corrupted LVM root partition by fsck
So you have a Linux machine installed with LVM, and the root partition sits on that LVM pool. One day the machine crashes and the root partition is corrupted. It fails to boot properly and asks for the root password to run fsck. Sadly, you have lost the root password, so you need a rescue CD (or installation CD) to boot and recover the disk.
Now your machine is booted from the rescue CD, and you find that you just can't run fsck against an LVM partition.
root@test:/# fdisk -l
Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00014ef8
Device Boot Start End Blocks Id System
/dev/vda1 * 3 389 194560 83 Linux
/dev/vda2 391 41609 20773888 8e Linux LVM
The rescue CD by default won't automatically activate LVM (and its underlying volumes). What you need to do is activate the LVM volumes and then run fsck on them.
### Run lvm from rescue CD
bash-4.1# lvm
### This will scan and list the PV
lvm> pvscan
PV /dev/vda2 VG volgroup01 lvm2 [19.78GiB/ 0 free]
Total: 1 [19.78 GiB] / in use: 1 [19.78 GiB] / in no VG: 0 [0 ]
### This will scan and list the VG
lvm> vgscan
Reading all physical volumes. This may take a while...
Found volume group "volgroup01" using metadata type lvm2
### This will list and scan the LV (the meat is here)
lvm> lvscan
inactive '/dev/volgroup01/root' [11.78 GiB] inherit
inactive '/dev/volgroup01/swap' [8.00 GiB] inherit
### And then activate the root LV with lvchange
lvm> lvchange -ay /dev/volgroup01/root
### Quit the lvm shell
lvm> exit
### Now it is time to run fsck
bash-4.1# fsck -y /dev/volgroup01/root
Adding dirhash hint to filesystem.
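Alternatively, instead of activating one logical volume at a time, every LV in the volume group can be activated in a single step before running fsck:
bash-4.1# vgchange -ay volgroup01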
Thursday, November 17, 2011
Install QEMU-KVM in RHEL / CentOS
Here is the command to install the QEMU-KVM packages in RHEL / CentOS:
yum -y install qemu-kvm qemu-kvm-tools libvirt
Alternatively, these commands will install the package groups for the associated virtualization components:
yum -y groupinstall "Virtualization"
yum -y groupinstall "Virtualization Client"
yum -y groupinstall "Virtualization Platform"
yum -y groupinstall "Virtualization Tools"
Monday, November 14, 2011
Bandwidth testing with iperf
To test bandwidth between two Linux servers, we can use a tool called iperf. Installing iperf from the repository is easy; what we need to run is
[root@server ~]# yum -y install iperf
Or, on Debian or Ubuntu, you may want to run apt-get install iperf.
Once we have installed the package on both machines, we need to pick one node to run as the server:
[root@server ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
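If the client cannot connect, the server's firewall may need to allow TCP port 5001, which iperf listens on by default; on RHEL / CentOS a quick, non-persistent rule would be something like:
[root@server ~]# iptables -I INPUT -p tcp --dport 5001 -j ACCEPT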
Once the service is running, we can run iperf from the client machine to connect to the server.
[root@client ~]# iperf -c 192.168.0.1
------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.2 port 37476 connected with 192.168.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.38 GBytes 1.19 Gbits/sec
The measured bandwidth is 1.19 Gbit/sec, which is pretty much what I would expect on my VM.
There are some other common options that can be used. For example, -d performs a bidirectional test, measuring source-to-target and target-to-source at the same time, and -n <bytes> defines the amount of data to transfer instead of running for a fixed time.
The example below will send approximately 1 GB (1,000,000,000 bytes) of data:
[root@client ~]# iperf -n 1000000000 -c 192.168.0.1
------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.2 port 39375 connected with 192.168.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 7.8 sec 954 MBytes 1.03 Gbits/sec
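As another example, a bidirectional test of the same link could be started like this:
[root@client ~]# iperf -d -c 192.168.0.1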
Monday, October 3, 2011
Allow X11-Forwarding to work in Linux
In RHEL / CentOS
- Install package xorg-x11-xauth
- Have the options below set in /etc/ssh/sshd_config, followed by an sshd restart
[root@dumphost ~]# grep X11Forwarding /etc/ssh/sshd_config
X11Forwarding yes
[root@dumphost ~]# grep AllowTcpForwarding /etc/ssh/sshd_config
AllowTcpForwarding yes
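With these in place, an X application can be launched over SSH from a remote workstation, for example (assuming xclock is installed on the target host):
[user@workstation ~]$ ssh -X user@dumphost
[user@dumphost ~]$ xclock &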
Thursday, September 22, 2011
Setting up NFS v4 in RHEL6 / CentOS6 with user authentication
Prior to NFS v4, NFS authentication was done on a per-host basis. Starting from NFS v4, authentication can also be handled on a per-user basis. To allow user ID mapping to work in NFS v4, one has to edit /etc/idmapd.conf so that the domain matches the DNS domain name of the associated hosts.
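A minimal sketch of the relevant setting, assuming the hosts sit in the example.com DNS domain (set the same value on client and server, then restart the id-mapping service):
[root@dumphost ~]# grep -i ^Domain /etc/idmapd.conf
Domain = example.com
[root@dumphost ~]# service rpcidmapd restart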
Tuesday, September 20, 2011
RHEL / CentOS NIC mapping via udev
To re-arrange the ethX numbering of NIC devices, one has to edit /etc/udev/rules.d/70-persistent-net.rules, followed by a system reboot.
[root@dumphost ~]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.
# PCI device 0x14e4:0x1639 (bnx2) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="98:4b:e1:66:09:ee", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# PCI device 0x14e4:0x1639 (bnx2) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="98:4b:e1:66:29:40", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
# PCI device 0x14e4:0x1639 (bnx2) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="98:4b:e1:66:29:42", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"
# PCI device 0x14e4:0x1639 (bnx2) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="98:4b:e1:66:09:ec", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
Sunday, July 31, 2011
RHEL / CentOS change default settings of nfs client
In CentOS 6 / RHEL 6, the NFS client defaults to mounting as version 4. If for any reason one wants to mount NFS as version 3, here are the options to do so.
[root@dumphost ~]# showmount -e nfshost
Export list for nfshost:
/nfs *
[root@dumphost ~]# mount -o vers=3 nfshost:/nfs /mnt
[root@dumphost ~]# mount
nfshost:/nfs on /mnt type nfs (rw,vers=3,addr=10.1.0.2)
In case you want version 3 to be set as the default, uncomment the Defaultvers option in /etc/nfsmount.conf and change it from 4 to 3.
[root@dumphost ~]# grep Defaultvers /etc/nfsmount.conf
# Defaultvers=4
Defaultvers=3
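For a permanent mount, the same version option can also go into /etc/fstab; a sketch with a hypothetical mount point:
nfshost:/nfs  /mnt/nfs  nfs  vers=3  0 0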