ABHIONLINUX
A site useful for Linux administration and web hosting

2010/03/18

Load on the server

Load: a measure of the amount of work done by a computer system.
Load average: the average system load over a period of time.


An idle computer has a load number of 0, and each process using or waiting for the CPU adds 1 to the load number. Most UNIX systems count only processes in the running (on CPU) or runnable (waiting for CPU) states. However, Linux also includes processes in uninterruptible sleep states (usually waiting for disk activity), which can lead to markedly different results if many processes remain blocked in I/O due to a busy or stalled I/O system. This, for example, includes processes blocking due to an NFS server failure or to slow media (e.g., USB 1.x storage devices). Such circumstances can result in an elevated load average that does not reflect an actual increase in CPU use (but still gives an idea of how long users have to wait).
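
The current values can be read with the uptime command or directly from /proc/loadavg, whose first three fields are the 1-, 5- and 15-minute averages:

# cat /proc/loadavg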

Systems calculate the load average as the exponentially damped/weighted moving average of the load number. The three values of load average refer to the past one, five, and fifteen minutes of system operation.
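
Concretely, with the five-second sampling interval used by the kernel (see the calc_load() code below), each of the three values is updated as

   load = load * e^(-5/(60m)) + n * (1 - e^(-5/(60m)))

where m is the averaging window in minutes (1, 5 or 15) and n is the number of active tasks at the moment of the sample. The EXP_* constants in the kernel source are exactly these decay factors in fixed-point form.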

For single-CPU systems that are CPU-bound, one can think of the load average as the percentage of system utilization during the respective time period. For systems with multiple CPUs, divide the number by the number of processors to get a comparable percentage.

For example, one can interpret a load average of "1.73 0.50 7.98" on a single-CPU system as:

    * during the last minute, the CPU was overloaded by 73% (1 CPU with 1.73 runnable processes, so that 0.73 processes had to wait for a turn)
    * during the last 5 minutes, the CPU was underloaded by 50% (no processes had to wait for a turn)
    * during the last 15 minutes, the CPU was overloaded by 698% (1 CPU with 7.98 runnable processes, so that 6.98 processes had to wait for a turn)

This means the CPU could have handled all of the work scheduled for the last minute if it were 1.73 times as fast, or if there were two CPUs (1.73 rounded up), but that over the last five minutes it had twice the capacity needed to keep runnable processes from waiting their turn.

In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73 processes ready to run, and each one could be scheduled onto a CPU.
 
On Linux systems, the load average is not recalculated on every clock tick; instead, a counter derived from the HZ setting is tested on each tick. (HZ is the kernel's timer interrupt frequency; one tick lasts 1/HZ seconds, 10 ms by default.) Although the HZ value can be configured in some versions of the kernel, it is normally set to 100. The calculation code uses the HZ value to determine the CPU load calculation frequency: the timer.c:calc_load() function runs its algorithm every 5 * HZ ticks, i.e., roughly every five seconds. Following is that function in its entirety:
unsigned long avenrun[3];
 
static inline void calc_load(unsigned long ticks)
{
   unsigned long active_tasks; /* fixed-point */
   static int count = LOAD_FREQ;
 
   count -= ticks;
   if (count < 0) {
      count += LOAD_FREQ;
      active_tasks = count_active_tasks();
      CALC_LOAD(avenrun[0], EXP_1, active_tasks);
      CALC_LOAD(avenrun[1], EXP_5, active_tasks);
      CALC_LOAD(avenrun[2], EXP_15, active_tasks);
   }
}
The avenrun array contains the 1-minute, 5-minute and 15-minute averages. The CALC_LOAD macro and its associated values are defined in sched.h:
#define FSHIFT   11              /* nr of bits of precision */
#define FIXED_1  (1<<FSHIFT)     /* 1.0 as fixed-point */
#define LOAD_FREQ (5*HZ)         /* 5 sec intervals */
#define EXP_1  1884              /* 1/exp(5sec/1min) as fixed-point */
#define EXP_5  2014              /* 1/exp(5sec/5min) */
#define EXP_15 2037              /* 1/exp(5sec/15min) */

#define CALC_LOAD(load,exp,n) \
   load *= exp; \
   load += n*(FIXED_1-exp); \
   load >>= FSHIFT;
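
To see how the macro behaves, here is a stand-alone user-space sketch (an illustration for this post, not kernel code) that replays twelve five-second updates of the 1-minute average with two active tasks and compares the fixed-point result against the exact exponential average. Note that the kernel's count_active_tasks() likewise returns the task count already scaled by FIXED_1:

#include <stdio.h>
#include <math.h>

#define FSHIFT  11
#define FIXED_1 (1 << FSHIFT)    /* 1.0 as fixed-point (2048) */
#define EXP_1   1884             /* 2048/exp(5sec/1min), rounded */

#define CALC_LOAD(load,exp,n) \
   load *= exp; \
   load += n*(FIXED_1-exp); \
   load >>= FSHIFT;

int main(void)
{
   unsigned long load = 0;               /* fixed-point load average */
   double ref = 0.0;                     /* floating-point reference */
   const double decay = exp(-5.0/60.0);  /* per-5s decay, 1-min window */
   const unsigned long n = 2;            /* pretend 2 tasks are active */

   /* Simulate twelve 5-second samples (one minute of uptime). */
   for (int i = 0; i < 12; i++) {
      CALC_LOAD(load, EXP_1, n * FIXED_1);
      ref = ref * decay + n * (1.0 - decay);
      printf("sample %2d: fixed %.3f, float %.3f\n",
             i + 1, (double)load / FIXED_1, ref);
   }
   return 0;
}

Compile it with, e.g., gcc -std=c99 -O2 loadavg.c -lm. Both columns converge toward 2.00, the number of active tasks.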

2010/02/26

How to enable the TUN/TAP device with NAT

OpenVZ supports VPN inside a container via the kernel TUN/TAP module and device. To allow container #101 to use the TUN/TAP device, the following should be done:

Make sure the tun module has already been loaded on the hardware node:

# lsmod | grep tun

If it is not there, use the following command to load the tun module:
# modprobe tun

To make sure the tun module is loaded automatically on every reboot, you can add it either to /etc/modules.conf (on RHEL, see the /etc/sysconfig/modules/ directory) or to the container's mount script /etc/sysconfig/vz-scripts/CTID.mount:

# echo 'modprobe tun' >> /etc/sysconfig/vz-scripts/CTID.mount

Granting the container access to TUN/TAP

Allow your container to use the TUN/TAP device (character device major 10, minor 200) by running the following commands on the host node:

vzctl set 101 --devices c:10:200:rw --save
vzctl set 101 --capability net_admin:on --save

Then create the character device file inside the container (execute the following on the host node):

vzctl exec 101 mkdir -p /dev/net
vzctl exec 101 mknod /dev/net/tun c 10 200
vzctl exec 101 chmod 600 /dev/net/tun
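
A quick check that the device file inside the container has the expected type and major/minor numbers:

vzctl exec 101 ls -l /dev/net/tun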

Configuring VPN inside the container

After the configuration steps above are done, VPN software that works with TUN/TAP can be used inside the container just as on an ordinary standalone Linux box.

The following software can be used for VPN with TUN/TAP:

* Virtual TUNnel (http://vtun.sourceforge.net)
* OpenVPN (http://openvpn.net)
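
For example, a minimal OpenVPN server configuration using the tun device might look like the following (the port, file names and the 10.8.0.0/24 subnet are illustrative, not prescribed):

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
server 10.8.0.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun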

If NAT is needed within the VE, attempts to use it, such as the following, will fail with an error:

# iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
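
On a typical OpenVZ setup this fails because the nat table is not available inside the container until the relevant netfilter modules are loaded on the hardware node and granted to the container. A sketch of the usual fix, run on the host node (the exact module names accepted by vzctl vary with the kernel and vzctl version):

# modprobe iptable_nat
# modprobe ipt_MASQUERADE
# vzctl set 101 --iptables "iptable_nat ipt_MASQUERADE" --save
# vzctl restart 101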