Wednesday, 19 August 2009

How To Setup OpenVZ under RHEL / CentOS Linux

This entry is part 1 of 5 in the series RHEL / CentOS OpenVZ Virtualization [1]

How To Setup OpenVZ under RHEL / CentOS Linux

CentOS Linux Install OpenVZ Virtualization Software [2]

How To Create OpenVZ Virtual Machines (VPS) [3]

OpenVZ Iptables: Allow Traffic To Pass Via venet0 To All VPS [4]

OpenVZ Virtual Machine (VPS) Management [5]

I need to run more than one instance of the Linux operating system, and different Linux distributions, under CentOS. How do I use OpenVZ virtualization to optimize the usage of my Dell servers and create test Linux VPSs running Debian, Ubuntu, and CentOS Linux? How do I deploy OpenVZ under CentOS / RHEL Linux?

OpenVZ virtualization uses the concept of containers to run Linux-only instances on the same hardware. OpenVZ is an operating system-level virtualization technology. It allows a physical server to run multiple isolated operating system instances of different Linux distributions, known as containers, Virtual Private Servers (VPSs), or Virtual Environments (VEs). It is similar to FreeBSD Jails [7] and Solaris Zones.

OpenVZ doesn't have the overhead of a true hypervisor (e.g. Xen or VMware), so it is a very fast and efficient way to run Linux-only VPSs. All virtual servers use the same Linux kernel version.
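A quick way to see the shared kernel in action (the CTID 101 used here and throughout this article is a hypothetical example, not something OpenVZ assigns for you):

# both commands print the same kernel release, because every
# container runs on the host node's kernel
uname -r                   # on the host node (CT0)
vzctl exec 101 uname -r    # inside container 101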

OpenVZ Virtualization and Isolation

OpenVZ offers strong isolation, which is perfect for running named, mysqld, apache, and other services in each container. Each VPS is a separate entity and behaves just like a physical server (see the sketch after the following list). Each VPS has:

1. Its own system files (such as /bin, /sbin, /lib, etc.);

2. Its own root user, as well as other users and groups;

3. Its own process tree;

4. Its own network settings (private or public IP);

5. Its own shared memory, semaphores, and messages.
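A minimal sketch of that isolation, assuming a running container with the hypothetical CTID 101:

# each container has its own process tree; the processes listed
# here belong to container 101 only, not to the host node
vzctl exec 101 ps ax

# each container has its own users; this root is container
# 101's root user, not the host node's
vzctl enter 101
whoami    # prints: root
exit      # back to the host node (CT0)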

Our Sample Setup (HostNode)

Server: Dual Core CPU with Software RAID1 and 2GB RAM

eth0: Public IP 123.1.2.3

venet0: Virtual network interface used by OpenVZ so containers can talk with the rest of the LAN or the Internet.

Hostname: hostnode01.nixcraft.in.

vps.nixcraft.net: 123.1.2.5 - can run any supported Linux distribution.
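As a sketch of how the sample addresses above are wired up (123.1.2.5 and vps.nixcraft.net come from the sample setup; the CTID 101 is again hypothetical):

# assign the container its public IP and hostname;
# --save persists the change to the VPS configuration file
vzctl set 101 --ipadd 123.1.2.5 --save
vzctl set 101 --hostname vps.nixcraft.net --save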

Host node

The controlling system of the container (VPS) environment. The host system has access to all the hardware resources available, and can control processes both outside of and inside a VPS environment. One important difference between the host system and a VPS is that the limitations which apply to superuser processes inside a VPS are not enforced for processes of the host system. The server described above is the host node.

CT0 or VE0

Another name for the host node. In other words, CT0 or VE0 means the server itself. From CT0 / VE0, you can use vzctl and other tools to manage containers, for example:
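(A short sketch; vzlist and vzctl are the standard OpenVZ management tools, and the CTID is hypothetical.)

# list all containers on this host node, including stopped ones
vzlist -a

# run one command inside container 101 without logging in to it
vzctl exec 101 df -h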

VPS or VE (Virtual Environment) or Virtual Machine

An isolated program execution environment, whose access to resources is restricted by the OpenVZ software, and which looks and feels like a separate physical server. Each VPS has its own file system, root user and other users, firewall settings, routing tables, and much more. You can set up multiple VPSs within a single physical server. Different VPSs can run different Linux distributions such as Gentoo, Debian, CentOS, Fedora Linux, etc., but all VPSs operate under the same Linux kernel.

CTID

Each VPS has a unique number called a CTID (ConTainer IDentifier). The CTID is chosen by the server admin, and it is used to create, start, stop, restart, and delete a VPS, and for other administrative jobs related to your VEs, as sketched below.
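A sketch of the basic CTID-driven lifecycle (the template name centos-5-x86 is an assumption here; templates are explained later in this article):

# create a container from a template cached in /vz/template/cache
vzctl create 101 --ostemplate centos-5-x86

# start, restart, and stop it by CTID
vzctl start 101
vzctl restart 101
vzctl stop 101

# permanently delete the (stopped) container and its private area
vzctl destroy 101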

VPS Disk Quota

You can restrict VPS disk usage using standard Linux quota tools. For example, set vps.nixcraft.net disk usage to 10GB only. You can also set up quotas using the number of inodes [9].
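A sketch of both quota styles, assuming CTID 101 (values are soft-barrier:hard-limit pairs, and the numbers are illustrative; older vzctl versions expect the disk space values in kilobytes rather than with a G suffix):

# cap disk usage at roughly 10GB soft / 11GB hard
vzctl set 101 --diskspace 10G:11G --save

# cap the number of inodes the container may allocate
vzctl set 101 --diskinodes 200000:220000 --save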

Fair CPU Scheduler

Each VPS gets its time slice from the kernel, which takes into account the VPS's CPU priority and limit settings; these can be set by the server administrator on the host node and cannot be modified by VPS users, including the VPS root user. The standard Linux scheduler then decides which process inside the VPS to give the time slice to, using standard process priorities.
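A sketch of those host-node-only CPU controls (the unit weight and percentage are illustrative values):

# relative CPU weight of this container compared to others
vzctl set 101 --cpuunits 1000 --save

# hard cap: container 101 may use at most 25% of CPU time
vzctl set 101 --cpulimit 25 --save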

Beancounters - UBC Parameter Units

Each VPS is governed by a set of user beancounters, which is nothing but a set of limits and guarantees for that VPS. Beancounters make sure that no single VPS can abuse any resource which is limited for the whole host node, and thus cause harm to other VPSs. The resources accounted and controlled are mainly memory and various in-kernel objects such as IPC shared memory segments, network buffers, etc.

Beancounter - Usage

lockedpages - The memory not allowed to be swapped out (locked with the mlock() system call), in pages.

shmpages - The total size of shared memory (including IPC, shared anonymous mappings and tmpfs objects) allocated by the processes of a particular VPS, in pages.

privvmpages - The size of private (or potentially private) memory allocated by an application. The memory that is always shared among different applications is not included in this resource parameter.

numfile - The number of files opened by all VPS processes.

numflock - The number of file locks created by all VPS processes.

numpty - The number of pseudo-terminals, such as an ssh session, the screen or xterm applications, etc.

numsiginfo - The number of siginfo structures (essentially, this parameter limits the size of the signal delivery queue).

dcachesize - The total size of dentry and inode structures locked in memory.

physpages - The total size of RAM used by the VPS processes. This is currently an accounting-only parameter; it shows the usage of RAM by the VPS. For the memory pages used by several different VPSs (mappings of shared libraries, for example), only the corresponding fraction of a page is charged to each VPS. The sum of the physpages usage for all VPSs corresponds to the total number of pages used in the system by all the accounted users.

numiptent - The number of IP packet filtering entries.

See this article, which explains all UBC parameter units [10].
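Live beancounter values can be inspected via the standard OpenVZ proc interface; a quick sketch:

# on the host node: current usage, limits, and failure counters
# (failcnt) for every container's beancounters
cat /proc/user_beancounters

# the same file read from inside a container shows only that
# container's own beancounters
vzctl exec 101 cat /proc/user_beancounters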

VPS Templates

VPS templates are nothing but images which are used to create a new VPS. A template is a set of packages, and a template cache is an archive (tarball) of a chrooted environment with those packages installed. Each supported Linux distribution comes as a template.
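A sketch of fetching and using a precreated template cache (the exact tarball name is an assumption; precreated templates were published under download.openvz.org/template/precreated/):

# download a precreated CentOS template cache into the
# directory where vzctl looks for templates
cd /vz/template/cache
wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz

# the --ostemplate value is the tarball name without .tar.gz
vzctl create 101 --ostemplate centos-5-x86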

Default Locations

1. /vz - Main directory for OpenVZ.

2. /vz/private - Each VPS is stored here, i.e. the containers' private directories.

3. /vz/template/cache - You must download and store each Linux distribution template here.

4. /etc/vz/ - OpenVZ configuration directory.

5. /etc/vz/vz.conf - Main OpenVZ configuration file.

6. /etc/vz/conf - Symlinked directory holding each VPS's configuration file (a sample is sketched below).

7. Network ports - No network ports are opened by the OpenVZ kernel.
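A sketch of what a per-VPS configuration file might contain (the file name 101.conf matches the hypothetical CTID used above, and the values shown are illustrative; vzctl set ... --save writes such entries for you):

# /etc/vz/conf/101.conf - per-container settings
ONBOOT="yes"
OSTEMPLATE="centos-5-x86"
IP_ADDRESS="123.1.2.5"
HOSTNAME="vps.nixcraft.net"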
