LXC (Linux Container) Operating system-level virtualization 2011/03/08 13:41

LXC (Linux Container) setup on Debian GNU/Linux


Network – Bridging
Assuming you have a freshly installed box with the network up and running, the first thing to do is install bridge-utils:
------------------------------------
#> aptitude install bridge-utils
------------------------------------
The next thing is to set up bridging. That's quite easy. Assume your /etc/network/interfaces looks like this:
------------------------------------
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet dhcp
------------------------------------
All you have to do is change it to this:
------------------------------------
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp

# Setup bridge
auto br0
iface br0 inet dhcp
   bridge_ports eth0
   bridge_fd 0
------------------------------------
Then restart your network:
------------------------------------
#> /etc/init.d/networking restart
------------------------------------
As you can see, the eth0 entries are commented out and a new interface (br0) is introduced. The line bridge_ports eth0 adds the physical eth0 interface to the bridge; bridge_fd 0 sets the "forward delay" to zero, which reduces the time the bridge spends in the listening and learning states.
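To verify that the bridge came up as expected, bridge-utils provides brctl (the bridge id below is just an example):
------------------------------------
#> brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001122334455       no              eth0
------------------------------------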

If you use a static configuration, it should look something like this:
------------------------------------
auto br0
iface br0 inet static
  bridge_ports eth0
  bridge_fd 0
  address 10.0.0.100
  netmask 255.255.255.0
  gateway 10.0.0.1
  dns-nameservers 10.20.0.2
------------------------------------
More info on that matter can be googled easily.

Install LXC, setup cgroups
Now it’s time to install lxc. Just install via aptitude:

#> aptitude install lxc debootstrap febootstrap
Ok, before we can set up the first instance, cgroups have to be mounted. Therefore create a directory for mounting them (it can be anywhere; I prefer the file system root):

#> mkdir /cgroup
And add the following to your /etc/fstab:

cgroup        /cgroup        cgroup        defaults    0    0
Then mount the cgroups:

#> mount cgroup
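A quick check that the mount succeeded (the output should look something like this):

#> mount | grep cgroup
cgroup on /cgroup type cgroup (rw)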
One final thing before we can go on is checking the environment via lxc-checkconfig:

#> lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-3-686
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: disabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled
Everything should be enabled except the memory controller, which is sad but expected. Read more about this in the F.A.Q.

Creating the first container

The Debian template uses debootstrap, so make sure it is installed:

#> aptitude install debootstrap
Ok, let's create the first virtual machine. The default (and hard-coded) directory for deploying your containers is /var/lib/lxc. If you use a SAN or some other disk, mount it there; don't try to make lxc use another location.

#> mkdir -p /var/lib/lxc/vm0
#> lxc-debian -p /var/lib/lxc/vm0

This might take some time. Some config menus (e.g. locales) might pop up. After the installation is finished, you could start the virtual machine (alias VM or container) right away, but take some time to customize the configuration a little bit first.

Go to /var/lib/lxc/vm0 and edit the file config like so:

lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /var/lib/lxc/vm0/rootfs
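# deny access to all devices by default; the needed ones are whitelisted below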
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm

# <<<< ADD THOSE LINES
lxc.utsname = vm0
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
# lxc.network.name = eth0
lxc.network.hwaddr = 00:FF:12:34:56:78
lxc.network.ipv4 = 10.0.0.110/24
Let's pause for a minute and look at what we did here:

lxc.utsname = vm0
The hostname of the container.
lxc.network.type = veth
There are a couple of network types we can work with (man lxc.conf); you should use veth when working with bridges. There is also vlan, which you can use with Linux VLAN interfaces, and phys, which allows you to hand a complete physical interface over to the container.
lxc.network.flags = up
Brings the interface up when the container starts.
lxc.network.link = br0
Specifies the network bridge to which the virtual interface will be added.
lxc.network.name = eth0
This is the name within the container! Don't mistake it for the name showing up on the host machine! If you don't set it, it will be eth0 anyway.
lxc.network.hwaddr = 00:FF:12:34:56:78
The hardware address (MAC) the device will use.
lxc.network.ipv4 = 10.0.0.110/24
Network address assigned to the virtual interface. You can provide multiple addresses, one per line (man lxc.conf).
You have to edit /etc/network/interfaces within the container (/var/lib/lxc/vm0/rootfs/etc/network/interfaces) as well!
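For example, a static configuration inside the container matching the lxc.network.ipv4 address above could look like this (the gateway address is an assumption for this subnet):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address 10.0.0.110
  netmask 255.255.255.0
  gateway 10.0.0.1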
This being done, let’s start the machine:

#> lxc-start -n vm0 -d
That should have done the trick. You can check whether it is running with this:

#> lxc-info -n vm0
'vm0' is RUNNING
Now login to the container, use the lxc console:

#> lxc-console -n vm0
You should get a login prompt. Just type in "root", no password (a good time to set one).
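To set one right away, run this inside the container (you can detach from the console again with Ctrl+a q):

# passwd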

That's all. Your first container is up and running. To stop it, just do this:

#> lxc-stop -n vm0
And check again:

#> lxc-info -n vm0
'vm0' is STOPPED
Setting up the next container
Well, you could simply do the same as above, but that would be no fun at all. One of the good (but by far not the major) reasons for virtualization is of course the fast setup of new virtual machines. If you peeked into the rootfs directory of the created vm0 in /var/lib/lxc/vm0/rootfs, you might have noticed it's "only" a simple Linux root directory – with no proc, sys or any of those nodes.

The idea is simple: use tar to compress the vm0 directory and then create new machines from the archive. Of course this has at least one implication: you can now work with "golden images" – or container templates, if you like that better.

#> cd /var/lib/lxc/vm0
#> tar czf ../template.tar.gz *
Now create another container out of this:

#> cd /var/lib/lxc
#> mkdir vm1
#> cd vm1
#> tar xzf ../template.tar.gz
And done. Ok, of course you have to adjust the config file in vm1/config (network.*, utsname, etc.), but that's it.
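One possible way to do those adjustments is a quick sed run (the new IP and MAC below are just example values; remember to also update rootfs/etc/network/interfaces inside the new container):

#> cd /var/lib/lxc/vm1
#> sed -i -e 's/vm0/vm1/g' \
       -e 's/10\.0\.0\.110/10.0.0.111/' \
       -e 's/00:FF:12:34:56:78/00:FF:12:34:56:79/' config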

The rest is up to your imagination. E.g. you can have one master template (which you boot on a regular basis and bring up to date), templates for each kind of your regular servers (an HTTP server with apache installed and configured, a mail server with postfix set up, and so on) or whatever you want.

What about limits?
You might have some experience with other virtualization solutions like KVM or Xen. When setting up a machine in Xen, you have to configure the amount of memory, and you can set the number of virtual CPUs and the scheduler ratio. That's (among other things) what cgroups do for LXC.

Before going into details, keep in mind: all cgroup settings are completely dynamic. You can change them all at runtime – but be careful (especially when withdrawing memory from a running instance!).

How to set a cgroup value

You can set any cgroup value in one of three ways:

#> lxc-cgroup -n vm0 <cgroup-name> <value>
#> echo <value> > /cgroup/vm0/<cgroup-name>
in the config file: lxc.cgroup.<cgroup-name> = <value>
In the examples I will use the config-file notation, because it's container independent.

Byte values

For byte values, such as memory limits, you can use the suffixes K, M or G:

#> echo "400M" > /cgroup/vm0/..
#> echo "1G" > /cgroup/vm0/..
#> echo "500K" > /cgroup/vm0/..
Available parameters

First off, have a look at /cgroup/vm0 (if vm0 is not running, start it now). You should see something like this:

#> ls -1 /cgroup/vm0/
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpuset.cpu_exclusive
cpuset.cpus
cpuset.mem_exclusive
cpuset.mem_hardwall
cpuset.memory_migrate
cpuset.memory_pressure
cpuset.memory_spread_page
cpuset.memory_spread_slab
cpuset.mems
cpuset.sched_load_balance
cpuset.sched_relax_domain_level
cpu.shares
devices.allow
devices.deny
devices.list
freezer.state
memory.failcnt
memory.force_empty
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
net_cls.classid
notify_on_release
tasks
The memory.* entries are only there if your kernel supports the memory controller (see below in the F.A.Q.).

Limit memory and swap

First the bad news: you cannot limit memory. At least not with the current Debian kernel. You have to build your own (or get it from somewhere – I could not find any pre-built one so far). How to build your own kernel from the current Debian kernel with the memory controller enabled is described in the F.A.Q. section.

Assuming you have built the kernel (or got it from somewhere), here is how you limit the memory of a container. Again: don't reduce the memory of a running container as long as you are not perfectly sure what you are doing.

Set max memory:

lxc.cgroup.memory.limit_in_bytes = 256M
Set max memory + swap:

lxc.cgroup.memory.memsw.limit_in_bytes = 1G
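Note that memory.memsw.limit_in_bytes covers memory plus swap combined, so it must be at least as large as memory.limit_in_bytes. At runtime the same limits can be set through the cgroup file system, e.g.:

#> echo 256M > /cgroup/vm0/memory.limit_in_bytes
#> echo 1G > /cgroup/vm0/memory.memsw.limit_in_bytes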
More on the memory controller can be found here: http://www.mjmwired.net/kernel/Documentation/cgroups/memory.txt

Limit CPU

There are two approaches to limiting CPU: first there is the scheduler, and then you can assign CPUs directly to a container's cgroup.

Scheduler

The scheduler works like this: you assign vm0 a value of 10 and vm1 a value of 20. This means that in each CPU second vm1 will get twice the amount of CPU cycles of vm0. By default all values are set to 1024.

lxc.cgroup.cpu.shares = 512
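For example, with the 512 above and vm1 left at the default of 1024, vm0 gets about a third of the CPU time under full load (512 / (512 + 1024)); the shares only matter while the CPUs are actually contended.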
More on the CPU scheduler: http://www.mjmwired.net/kernel/Documentation/scheduler/sched-design-CFS.txt

CPUs

Set the actual CPUs for a container. Assume you have 4 CPUs; then the default for all containers is 0-3 (all CPUs).

# assign first CPU to this container:
lxc.cgroup.cpuset.cpus = 0

# assign the first, the second and the last CPU
lxc.cgroup.cpuset.cpus = 0-1,3

# assign the first and the last CPU
lxc.cgroup.cpuset.cpus = 0,3
Another interesting value might be lxc.cgroup.cpuset.sched_relax_domain_level, look it up.
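As with all cgroup values, the CPU set can also be changed on a running container, e.g.:

#> lxc-cgroup -n vm0 cpuset.cpus 0,3
#> cat /cgroup/vm0/cpuset.cpus
0,3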

More on CPU sets: http://www.mjmwired.net/kernel/Documentation/cgroups/cpusets.txt

Limit hard disk space

Well, so far this is not possible with cgroups, but it is easily achieved with LVM or image files. Simply create a logical volume with limited space in LVM, or create a limited image file, and mount it at the container path (before creating the container, or move the config and rootfs afterwards).
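A minimal sketch using a sparse image file (sizes and paths are just examples; with LVM you would create a logical volume and mount it the same way):

#> dd if=/dev/zero of=/var/lib/lxc/vm1.img bs=1 count=0 seek=5G
#> mkfs.ext3 -F /var/lib/lxc/vm1.img
#> mkdir -p /var/lib/lxc/vm1
#> mount -o loop /var/lib/lxc/vm1.img /var/lib/lxc/vm1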


References and links:

http://blog.foaa.de/2010/05/lxc-on-debian-squeeze/#install-lxc-setup-cgroups
http://en.wikipedia.org/wiki/LXC
http://en.wikipedia.org/wiki/Operating_system-level_virtualization
http://wiki.debian.org/LXC