Friday, April 3, 2015

Output drops due to QoS on 2960/3560/3750 switches

Source, big thanks to the author

Do you see incrementing output drops on some interfaces after configuring QoS on your 2960/3560/3750 switch?
Common Scenarios
• Some of the interfaces start experiencing output drops once QoS is configured on the switch.
• Specific applications may experience degraded performance after configuring QoS on the switch. Say IP phones start experiencing choppy calls.
Possible Reason
Once you enable QoS on the switch, some traffic may start getting fewer resources than before (bandwidth or buffer) and may therefore be dropped on the switch.


Troubleshooting steps

Step1> Identify the interfaces that carry outgoing traffic for the affected application or are showing incrementing output drops. Compare the interface output rate with the interface speed to make sure the drops are not simply due to overutilization of the link.

Switch#sh int gi1/0/1
<some output omitted>
GigabitEthernet0/1 is up, line protocol is up (connected)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX

!interface speed is 1000 Mbps

  input flow-control is off, output flow-control is unsupported
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1089  <<---

!ensure these drops are incrementing

  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 4000 bits/sec, 6 packets/sec
  5 minute output rate 3009880 bits/sec, 963 packets/sec

!output rate is about 3 Mbps while the interface speed is 1000 Mbps, so the drops are not due to link overutilization.
Step2> Ensure that QoS is enabled on the switch. If it is not enabled, the output drops are unrelated to QoS and the remaining steps do not apply.

Switch#sh mls qos
QoS is enabled  <<----
QoS ip packet dscp rewrite is enabled

Step3> Identify the marking of the outgoing traffic that is getting dropped on the interface.

Switch#sh mls qos int gi1/0/1 statistics

GigabitEthernet1/0/1 (All statistics are in packets)

  dscp: incoming
-------------------------------

0 -  4 :           0            0            0            0            0
5 -  9 :           0            0            0            0            0
10 - 14 :           0            0            0            0            0
15 - 19 :           0            0            0            0            0
20 - 24 :           0            0            0            0            0
25 - 29 :           0            0            0            0            0
30 - 34 :           0            0            0            0            0
35 - 39 :           0            0            0            0            0
40 - 44 :           0            0            0            0            0
45 - 49 :           0       198910            0            0            0
50 - 54 :           0            0            0            0            0
55 - 59 :           0            0            0            0            0
60 - 64 :           0            0            0            0
  dscp: outgoing
-------------------------------

0 -  4 :           0            0            0            0            0
5 -  9 :           0            0            0            0            0
10 - 14 :           0            0            0            0            0
15 - 19 :           0            0            0            0            0
20 - 24 :           0            0            0            0            0
25 - 29 :           0            0            0            0            0
30 - 34 :           0            0            0            0            0
35 - 39 :           0            0            0            0            0
40 - 44 :           0            0            0            0            0
45 - 49 :           0      248484            0            0            0
50 - 54 :           0            0            0            0            0
55 - 59 :           0            0            0            0            0
60 - 64 :           0            0            0            0
  cos: incoming
-------------------------------

  0 -  4 :           2            0            0            0            0
  5 -  7 :           0            0            0
  cos: outgoing
-------------------------------

  0 -  4 :           0            0            0            0            0
  5 -  7 :           0            0            0
  output queues enqueued:
queue:    threshold1   threshold2   threshold3
-----------------------------------------------
queue 0:           248484      0           0
queue 1:           0           0           0
queue 2:           0           0           0
queue 3:           0           0           0

  output queues dropped:
queue:    threshold1   threshold2   threshold3
-----------------------------------------------
queue 0:       1089           0           0
queue 1:           0           0           0
queue 2:           0           0           0
queue 3:           0           0           0

Policer: Inprofile:            0 OutofProfile:            0

Note: Although the statistics show queue 0 / threshold 1 dropping packets, this corresponds to queue 1 in the rest of the troubleshooting: the statistics output numbers the queues 0-3, while the other commands number them 1-4.


Step4> Check the marking to output-q map on the switch to determine which queue-threshold pair
maps to the marking getting dropped.

In this scenario, queue1-threshold1 is mapped to dscp 46, which is getting dropped on the interface. This means that dscp 46 traffic is being sent to queue 1 and is being dropped because that queue has insufficient buffer space or is not serviced often enough.

Switch#sh mls qos maps dscp-output-q

   Dscp-outputq-threshold map:
     d1 :d2    0     1     2     3     4     5     6     7     8     9
     ------------------------------------------------------------
      0 :    02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01
      1 :    02-01 02-01 02-01 02-01 02-01 02-01 03-01 03-01 03-01 03-01
      2 :    03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01
      3 :    03-01 03-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
      4 :    01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 04-01 04-01
      5 :    04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
      6 :    04-01 04-01 04-01 04-01

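The map is read with d1 as the tens digit of the DSCP value and d2 as the units digit. A small sketch (Python; only the relevant row of the table above is hard-coded) of how DSCP 46 resolves to queue 1, threshold 1:

```python
# Decode the Dscp-outputq-threshold map: row = tens digit (d1), column =
# units digit (d2). Only the d1 = 4 row is reproduced from the output above.
DSCP_OUTPUTQ_MAP = {
    # DSCP 40..47 -> queue 1/threshold 1, DSCP 48..49 -> queue 4/threshold 1
    4: ["01-01", "01-01", "01-01", "01-01", "01-01",
        "01-01", "01-01", "01-01", "04-01", "04-01"],
}

def queue_for_dscp(dscp):
    d1, d2 = divmod(dscp, 10)            # tens and units digits
    queue, threshold = DSCP_OUTPUTQ_MAP[d1][d2].split("-")
    return int(queue), int(threshold)

print(queue_for_dscp(46))  # → (1, 1): EF traffic lands in queue 1, threshold 1
```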

Step5> There are two ways to tackle these drops. The first is to change the buffer and threshold values for the queue that is dropping packets. The second is to configure the scheduler so that this queue is serviced more often than the rest.
First, let us change the buffer and threshold values for the affected queue. Check the values associated with the queue identified in Step 4.
Note: Each queue set has the option to configure the buffer size and threshold value for the four egress queues. Then, you can apply any one of the queue sets to any of the ports. By default, all interfaces use queue-set 1 for output queues unless explicitly configured to use queue-set 2.

In this scenario, queue 1 in queue-set 1 has 25% of the total buffer space, and threshold 1 is set to 100%:

Switch#sh mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400


Step6> If you wish to change the buffer and threshold values only for the affected interface, modify queue-set 2 and configure that interface to use queue-set 2.

Note: You can also change queue-set 1, but since all interfaces use queue-set 1 by default, the change would affect every interface.

First, change queue-set 2 so that queue 1 gets 70% of the total buffer:
Switch(config)#mls qos queue-set output 2 buffers 70 10 10 10

Next, change the queue 1 thresholds in queue-set 2. Both threshold 1 and threshold 2 are set to 3100 so that the queue can pull buffer from the common pool if required:
Switch(config)#mls qos queue-set output 2 threshold 1 3100 3100 100 3200
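
As a sanity check before applying the commands, the planned queue-set 2 values can be verified against the constraints implied by the outputs above (a sketch; the values are the ones from the two commands):

```python
# Planned queue-set 2 values from the two configuration commands above.
buffers = [70, 10, 10, 10]   # percent of the port buffer pool per egress queue
queue1 = {"threshold1": 3100, "threshold2": 3100,
          "reserved": 100, "maximum": 3200}

# Buffer percentages across the four egress queues must total 100.
assert sum(buffers) == 100

# Thresholds are percentages of the queue's allocated buffer; values above
# 100 let the queue borrow from the common pool, up to the configured maximum.
assert queue1["threshold1"] <= queue1["maximum"]
assert queue1["threshold2"] <= queue1["maximum"]
print("queue-set 2 plan is consistent")
```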
Step7> Check if the changes reflect under correct queue and queue-set.

Switch#sh mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      70      10      10      10
threshold1:    3100     100     100     100
threshold2:    3100     100     100     100
reserved  :     100      50      50      50
maximum   :    3200     400     400     400


Step8> Make the affected interface use queue-set 2 so that the changes take effect on this interface.

Switch(config)#int gi1/0/1
Switch(config-if)#queue-set 2
Switch(config-if)#end

Confirm that the interface is mapped to queue-set 2:
Switch#sh run int gi1/0/1
interface GigabitEthernet1/0/1
switchport mode access
mls qos trust dscp
queue-set 2
end
Check if the interface is still dropping packets.
Step9> We can also configure the scheduler, using the shape and share options, to increase the rate at which queue 1 is serviced. In this example queue 1 alone gets 50% of the port bandwidth (the shape weight of 2 means 1/2), and the remaining three queues collectively share the other 50%.
Switch(config-if)#srr-queue bandwidth share 1 75 25 5

Switch(config-if)#srr-queue bandwidth shape  2  0  0  0
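
Under full congestion, the arithmetic behind these two commands can be sketched as follows (a simplified model, not IOS code: shape weights are inverse, so a weight of 2 shapes queue 1 to 1/2 of the port bandwidth and 0 disables shaping, while the shared queues split what remains in the ratio of their share weights; in shared mode unused bandwidth is redistributed, which this sketch ignores):

```python
def srr_bandwidth(shape, share):
    """Approximate bandwidth fraction per egress queue under full congestion.

    shape: SRR shape weights (inverse: queue gets 1/weight; 0 = no shaping).
    share: SRR share weights, used only for the unshaped queues.
    """
    fractions = [0.0] * len(shape)
    for i, w in enumerate(shape):
        if w:
            fractions[i] = 1.0 / w          # shaped queue: guaranteed 1/w
    remaining = 1.0 - sum(fractions)        # left over for the shared queues
    shared_total = sum(share[i] for i in range(len(shape)) if not shape[i])
    for i in range(len(shape)):
        if not shape[i]:
            fractions[i] = remaining * share[i] / shared_total
    return fractions

# shape 2 0 0 0 / share 1 75 25 5: queue 1 gets 50%, queues 2-4 split 75:25:5
print([round(f, 3) for f in srr_bandwidth([2, 0, 0, 0], [1, 75, 25, 5])])
# → [0.5, 0.357, 0.119, 0.024]
```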
Check if the interface is still dropping packets.
Step10> If packets are still being dropped, as a last resort we can enable the priority queue on this interface. This ensures that all traffic in the priority queue is processed before any other queue.
Note: The priority queue is serviced until empty before the other queues are serviced. By default on 2960/3560/3750 switches, queue 1 is the priority queue.

Switch(config)#int gi1/0/1
Switch(config-if)#priority-queue out
Switch(config-if)#end
The marking being dropped on the interface can be mapped so that it goes to queue 1, which is now the priority queue. This way, traffic with this marking is always processed before anything else.
Switch(config)#mls qos srr-queue output dscp-map queue 1 threshold 1 ?

Update 1 (From official documentation).

  • Queue Map Configuration:
    Rack1SW1(config)#mls qos srr-queue output cos-map queue 1
     threshold 3 5
    Rack1SW1(config)#mls qos srr-queue output cos-map queue 1
     threshold 1 2 4
    Rack1SW1(config)#mls qos srr-queue output cos-map queue 2 
     threshold 2 3
    Rack1SW1(config)#mls qos srr-queue output cos-map queue 2
     threshold 3 6 7
    Rack1SW1(config)#mls qos srr-queue output cos-map queue 3
     threshold 3 0
    Rack1SW1(config)#mls qos srr-queue output cos-map queue 4
     threshold 3 1
    
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 1
     threshold 3  46
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 1  16
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 1  18 20 22
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 1  25
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 1  32
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 1  34 36 38
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 2  24 26
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 2
     threshold 3  48 56
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 3
     threshold 3  0
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 4
     threshold 1  8
    Rack1SW1(config)#mls qos srr-queue output dscp-map queue 4
     threshold 3  10 12 14
    
    Wednesday, February 4, 2015

    Exporting an electronic digital signature (EDS) private key from a Rutoken.

    Source, thanks to the author.

    1. Suppose you already have a certificate (an electronic signature) on a USB device; this thing is called a Rutoken. If not, buy one from any accredited certification authority; for the list, see zapret-info.gov.ru

    2. Install the certificate from the Rutoken electronic key onto a local machine running Windows. This is described in detail here

    3. Once the certificate is installed, export the key in PKCS#12 format from the Windows crypto container using the P12FromGostCSP utility; the result is a file p12.pfx. Copy it to the FreeBSD server; Windows is no longer needed

    4. Build openssl from source with support for our GOST algorithms. In openssl.cnf:

    At the very beginning, before all sections:
    openssl_conf = openssl_def
    At the end, the new sections:
    [openssl_def]
    engines=engine_section
    [engine_section]
    gost=gost_section
    [gost_section]
    engine_id=gost
    default_algorithms=ALL
     5. Convert the p12.pfx file from PKCS#12 format to PEM with this command:
     openssl pkcs12 -in p12.pfx -out p12.pem -nodes -clcerts

    Tuesday, November 11, 2014

    Replacing a hard disk in a software RAID1 array on Linux

    Source, thanks to the author of the post.

    Initial setup
     We have two hard disks, /dev/sda and /dev/sdb. Four software RAID arrays are built from them:
    • /dev/md0 - swap
    • /dev/md1 - /boot
    • /dev/md2 - /
    • /dev/md3 - /data

     To get information about the state of the arrays, run:
    # cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md3 : active raid1 sda4[0] sdb4[1]
         1822442815 blocks super 1.2 [2/2] [UU]
    
    md2 : active raid1 sda3[0] sdb3[1]
         1073740664 blocks super 1.2 [2/2] [UU]
    
    md1 : active raid1 sda2[0] sdb2[1]
         524276 blocks super 1.2 [2/2] [UU]
    
    md0 : active raid1 sda1[0] sdb1[1]
         33553336 blocks super 1.2 [2/2] [UU]
    
    unused devices: 

    Two letters U in the square brackets of each array's status, [UU], indicate that the array is healthy. If an array is degraded, the U for the failed member changes to _. For this example:
    • [_U] - /dev/sda has failed
    • [U_] - /dev/sdb has failed
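
The [UU] flags can also be checked mechanically; a small Python sketch that reports which mirror member has failed:

```python
def failed_members(flags):
    """Parse an mdstat status field like '[UU]' or '[U_]'.

    'U' means the member is up; '_' marks a failed member. Returns the
    0-based positions of failed members (here 0 = /dev/sda, 1 = /dev/sdb).
    """
    return [i for i, c in enumerate(flags.strip("[]")) if c == "_"]

print(failed_members("[UU]"))   # → []  -- array healthy
print(failed_members("[U_]"))   # → [1] -- second disk (/dev/sdb) failed
```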

    # cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md3 : active raid1 sda4[0] sdb4[1](F)
          1822442815 blocks super 1.2 [2/1] [U_]
    
    md2 : active raid1 sda3[0] sdb3[1](F)
          1073740664 blocks super 1.2 [2/1] [U_]
    
    md1 : active raid1 sda2[0] sdb2[1](F)
          524276 blocks super 1.2 [2/1] [U_]
    
    md0 : active raid1 sda1[0] sdb1[1](F)
          33553336 blocks super 1.2 [2/1] [U_]
    
    unused devices: 

    The arrays are out of sync, and the failing disk /dev/sdb is to blame; let's replace it.

    Removing the failed hard disk
     Before installing the new hard disk, the failed disk must be removed from the arrays. To do this, run the following sequence of commands:
    # mdadm /dev/md0 -r /dev/sdb1
    # mdadm /dev/md1 -r /dev/sdb2
    # mdadm /dev/md2 -r /dev/sdb3
    # mdadm /dev/md3 -r /dev/sdb4

    There are situations where not all of the software RAID arrays are marked as failed:
    # cat /proc/mdstat
    Personalities : [raid1]
    md3 : active raid1 sda4[0] sdb4[1](F)
          1822442815 blocks super 1.2 [2/1] [U_]
    
    md2 : active raid1 sda3[0] sdb3[1](F)
          1073740664 blocks super 1.2 [2/1] [U_]
    
    md1 : active raid1 sda2[0] sdb2[1](F)
          524276 blocks super 1.2 [2/1] [U_]
    
    md0 : active raid1 sda1[0] sdb1[1]
          33553336 blocks super 1.2 [2/1] [UU]
    
    unused devices: 

    In that case the still-active partition cannot be removed from the array. It must first be marked as failed, and only then removed:
    # mdadm /dev/md0 -f /dev/sdb1
    # cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md3 : active raid1 sda4[0] sdb4[1](F)
          1822442815 blocks super 1.2 [2/1] [U_]
    
    md2 : active raid1 sda3[0] sdb3[1](F)
          1073740664 blocks super 1.2 [2/1] [U_]
    
    md1 : active raid1 sda2[0] sdb2[1](F)
          524276 blocks super 1.2 [2/1] [U_]
    
    md0 : active raid1 sda1[0] sdb1[1](F)
          33553336 blocks super 1.2 [2/1] [U_]
    
    unused devices:
    # mdadm /dev/md0 -r /dev/sdb1
    # mdadm /dev/md1 -r /dev/sdb2
    # mdadm /dev/md2 -r /dev/sdb3
    # mdadm /dev/md3 -r /dev/sdb4

    Preparing the new hard disk
    Both disks in the array must have exactly the same partition layout. Depending on the partition table type in use (MBR or GPT), use the corresponding utility to copy the partition table.
    For a hard disk with an MBR table, use the sfdisk utility:
    #sfdisk -d /dev/sda | sfdisk --force /dev/sdb

    where /dev/sda is the source disk and /dev/sdb is the destination disk.
    For a hard disk with a GPT table, use the sgdisk utility from GPT fdisk:
    #sgdisk -R /dev/sdb /dev/sda
    #sgdisk -G /dev/sdb

    where /dev/sda is the source disk and /dev/sdb is the destination disk. The second command assigns a random UUID to the new hard disk.

    Adding the new hard disk
    It remains to add the new, partitioned hard disk to the arrays and install the bootloader on it:
    # mdadm /dev/md0 -a /dev/sdb1
    # mdadm /dev/md1 -a /dev/sdb2
    # mdadm /dev/md2 -a /dev/sdb3
    # mdadm /dev/md3 -a /dev/sdb4

    After this, synchronization begins. The synchronization time depends on the size of the hard disk:
    # cat /proc/mdstat
    Personalities : [raid1]
    md3 : active raid1 sdb4[1] sda4[0]
         1028096 blocks [2/2] [UU]
         [==========>..........]  resync =  50.0% (514048/1028096) finish=97.3min speed=65787K/sec
    
    md2 : active raid1 sdb3[1] sda3[0]
         208768 blocks [2/2] [UU]
    
    md1 : active raid1 sdb2[1] sda2[0]
         2104448 blocks [2/2] [UU]
    
    md0 : active raid1 sdb1[1] sda1[0]
         208768 blocks [2/2] [UU]
    
    unused devices: 

    If the system uses the GRUB2 bootloader, it is enough to run the following commands (there is no need to wait for synchronization to finish):
    #grub-install /dev/sdb
    #update-grub 

    Once synchronization is complete you can breathe easy: your data is safe again.

    Monday, October 27, 2014

    MySQL: updating the time zone

    Source; as always, thanks to the authors.

    To check the current time zone, run:

    SHOW VARIABLES LIKE '%zone%';
    SELECT @@global.time_zone, @@session.time_zone;

    To see the MySQL server's current time:

    select current_timestamp();

    The time zone can be set in the configuration file as follows (a restart is required in that case):

    /etc/my.cnf
    default-time-zone = "Europe/Moscow"

    The time zone can also be changed without a restart. First, load the system time zones into MySQL:

    mysql_tzinfo_to_sql /usr/share/zoneinfo |mysql -u root mysql -p

    After that, we can update the time zone without getting errors like:

    ERROR 1298 (HY000): Unknown or incorrect time zone:

    Update time_zone:

    SET GLOBAL time_zone = 'Europe/Moscow';
    SET time_zone = 'Europe/Moscow';
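
A quick way to double-check what UTC offset the named zone implies, outside MySQL (a Python sketch, assuming the system tzdata is available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Europe/Moscow has been UTC+3 year-round since late October 2014.
offset = datetime(2015, 6, 1, tzinfo=ZoneInfo("Europe/Moscow")).utcoffset()
print(offset)  # → 3:00:00
```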

    MySQL can also use the system time, which is probably even better. To change the current system time zone on the server:

    cp /usr/share/zoneinfo/Europe/Moscow /etc/localtime
    To make MySQL use the system time, run:

    SET GLOBAL time_zone = 'SYSTEM';
    SET time_zone = 'SYSTEM';

    Friday, October 24, 2014

    Overriding the default Linux kernel 20-second TCP socket connect timeout

    Source, thanks to author.

    Whatever language or client library you're using, you should be able to set the timeout on network socket operations, typically split into a connect timeout, read timeout, and write timeout.
    However, although you should be able to make these timeouts as small as you want, the connect timeout in particular has an effective maximum value for any given kernel. Beyond this point, higher timeout values you might request will have no effect - connecting will still time out after a shorter time.
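
In Python, for instance, the application-level connect timeout is set per call (a sketch; the host and port are placeholders):

```python
import socket

def connect_with_timeout(host, port, timeout_s=5.0):
    """Open a TCP connection, raising socket.timeout after timeout_s seconds.

    This is only the application's request: on Linux the kernel's SYN-retry
    limit still caps how long the connect can effectively wait.
    """
    return socket.create_connection((host, port), timeout=timeout_s)
```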
    The reason TCP connects are special is that the establishment of a TCP connection has a special sequence of packets starting with a SYN packet. If no response is received to this initial SYN packet, the kernel needs to retry, which it may have to do a couple of times. All kernels I know of wait an increasing amount of time between sending SYN retries, to avoid flooding slow hosts.
    All kernels put an upper limit on the number of times they will retry SYNs. On BSD-derived kernels, including Mac OS X, the standard pattern is that the second SYN is sent 6 seconds after the first, then a third SYN 18 seconds after that, and the connect times out after a total of around 75 seconds.
    On Linux however, the default retry cycle ends after just 20 seconds. Linux does send SYN retries somewhat faster than BSD-derived kernels - Linux supposedly sends 5 SYNs in this 20 seconds, but this includes the original packet (the retries are after 3s, 6s, 12s, 24s).
    The end result though is that if your application wants a connect timeout shorter than 20s, no problem, but if your application wants a connect timeout longer than 20s, you'll find that the default kernel configuration will effectively chop it back to 20s.
    Changing this upper timeout limit is easy, though it requires you to change a system configuration parameter and so you will need to have root access to the box (or get the system administrators to agree to change it for you).
    The relevant sysctl is tcp_syn_retries, which for IP v4 is net.ipv4.tcp_syn_retries.
    Be conservative in choosing the value you change it to. Like BSD, the SYN retry delays increase in time (albeit doubling rather than tripling), so a relatively small increase in the number of retries leads to a large increase in the maximum connect timeout. In a perfect world, there would be no problem with having a very high timeout because applications' connect timeouts will come into play.
    However, many applications do not set an explicit connect timeout, and so if you set the kernel to 10 minutes, you're probably going to find something hanging for ages sooner or later when a remote host goes down!
    I recommend that you set it to a value of 6, 7, or at most 8. 6 gives an effective connect timeout ceiling of around 45 seconds, 7 gives around 90 seconds, and 8 gives around 190 seconds.
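
The ceiling is essentially the sum of the doubling retry delays. A back-of-the-envelope sketch (assuming the 3-second initial retry that doubles each time, as described above; the exact mapping from the sysctl value to the number of delays varies between kernels):

```python
def connect_ceiling(n_delays, first=3):
    """Sum of n doubling SYN-retry delays, starting at `first` seconds."""
    return sum(first * 2 ** i for i in range(n_delays))

# 3 + 6 + 12 + 24 = 45, matching the ~45 s figure quoted above;
# each additional delay roughly doubles the ceiling.
print([connect_ceiling(n) for n in (4, 5, 6)])  # → [45, 93, 189]
```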
    To change this in a running kernel, you can use the /proc interface:
    # cat /proc/sys/net/ipv4/tcp_syn_retries 
    5
    # echo 6 > /proc/sys/net/ipv4/tcp_syn_retries 
    
    Or use the sysctl command:
    # sysctl net.ipv4.tcp_syn_retries
    net.ipv4.tcp_syn_retries = 5
    # sysctl -w net.ipv4.tcp_syn_retries=6
    net.ipv4.tcp_syn_retries = 6
    
    To make this value stick across reboots however you need to add it to /etc/sysctl.conf:
    net.ipv4.tcp_syn_retries = 6
    Most Linux installations support reading sysctls from files in /etc/sysctl.d, which is usually better practice as it makes it easier to administer upgrades, so I suggest you put it in a file there instead.
    (I see no reason you'd want to reduce this sysctl, but note that values of 4 or less all seem to be treated as 4 - total timeout 9s.)

    Thursday, October 23, 2014

    How to set time zone in VPS on node (OpenVZ).

    Source, thanks to author
    Solution:

    Follow the steps below to set the time zone for a particular container on the VPS node.

    1) Log in to the main server node via ssh.

    2) Stop the container whose time you want to set.
    ------------------
    # vzctl stop 1000 >>>>> 1000 = Container ID
    ------------------

    3) Give the container the capability to change the system time.
    ------------------
    # vzctl set 1000 --capability sys_time:on --save
    ------------------

    4) Start the container and login to it.
    ------------------
    # vzctl start 1000
    # vzctl enter 1000
    ------------------

    5) Change the local time zone as follows.
    ------------------
    # mv /etc/localtime /etc/localtime_bk
    # ln -s /usr/share/zoneinfo/America/Chicago /etc/localtime
    ------------------

    6) Set the date and time.
    ------------------
    # date 051717302013
    time has been set to 17:30 on 17 May 2013
    (05 = month, 17 = day, 17 = hours, 30 = minutes, 2013 = year)
    ------------------
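
The argument to that date command is laid out as MMDDhhmmYYYY; decoding it with the equivalent strptime format (a Python sketch):

```python
from datetime import datetime

# `date 051717302013` packs the timestamp as month, day, hour, minute, year.
stamp = datetime.strptime("051717302013", "%m%d%H%M%Y")
print(stamp)  # → 2013-05-17 17:30:00
```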

    Wednesday, August 13, 2014

    ZoneMinder 1.27, CentOS 6.5: the battle to get it working

    1) Install CentOS 6.5, update it, and disable SELinux, iptables, and everything else that is not needed.
    2) Add the sourceforge, epel, and rpmfusion repositories.
    3) Increase kernel.shmmax to 256 MB (for the testing period; even more may be needed).
    4) Install the software packages:
    yum install gcc gcc-c++ wget mysql-devel mysql-server php php-mysql php-pear php-pear-DB php-mbstring bison bison-devel httpd make ncurses ncurses-devel libtermcap-devel sox newt-devel libxml2-devel libtiff-devel php-gd audiofile-devel gtk2-devel libv4l-devel ffmpeg ffmpeg-devel zlib zlib-devel openssl openssl-devel gnutls-devel php-process perl-Time-HiRes perl-CPAN pcre-devel libjpeg-devel perl-Date-Manip perl-libwww-perl perl-Module-Load perl-Net-SFTP-Foreign perl-Archive-Tar perl-Archive-Zip perl-Expect perl-MIME-Lite perl-Device-SerialPort perl-Sys-Mmap perl-MIME-tools bzip2-devel phpMyAdmin zip
    4.1) Workarounds needed because ffmpeg is installed from rpmfusion:
     ln -s /usr/include/ffmpeg/libavcodec /usr/include/libavcodec
     ln -s /usr/include/ffmpeg/libavdevice /usr/include/libavdevice
     ln -s /usr/include/ffmpeg/libavfilter /usr/include/libavfilter
     ln -s /usr/include/ffmpeg/libavformat /usr/include/libavformat
     ln -s /usr/include/ffmpeg/libavutil /usr/include/libavutil
     ln -s /usr/include/ffmpeg/libpostproc /usr/include/libpostproc
     ln -s /usr/include/ffmpeg/libswresample /usr/include/libswresample
     ln -s /usr/include/ffmpeg/libswscale /usr/include/libswscale

    5) Download the archive with the sources, unpack it wherever convenient, and change into the directory with the unpacked sources:
    bootstrap.sh

    CXXFLAGS=-D__STDC_CONSTANT_MACROS ./configure --with-webdir=/var/www/html/zm --with-cgidir=/var/www/cgi-bin --with-webuser=apache --with-webgroup=apache ZM_DB_HOST=localhost ZM_DB_NAME=zm ZM_DB_USER=YOURZMUSER ZM_DB_PASS=YOURZMPASSWORD ZM_SSL_LIB=openssl --with-extralibs="-L/usr/lib64 -L/usr/lib64/mysql -L/usr/local/lib" --with-libarch=lib64 --with-ffmpeg --enable-mmap=yes

    make
    service mysqld start
    mysql_secure_installation
    mysql -u root -p

    create database zm;
    CREATE USER 'YOURZMUSER'@'localhost' IDENTIFIED BY 'YOURZMPASSWORD';
    grant CREATE, INSERT, SELECT, DELETE, UPDATE on zm.* to YOURZMUSER@localhost;
    FLUSH PRIVILEGES;
    exit

    make install

    chkconfig mysqld on
    chkconfig httpd on

    mysql -u root -p zm < ./db/zm_create.sql

    cp ./scripts/zm /etc/init.d/
    chmod +x /etc/init.d/zm
    chkconfig zm on

    cd /var/www/html/zm
    wget http://www.zoneminder.com/sites/zoneminder.com/downloads/cambozola.jar
    chown apache:apache /var/www/html/zm/cambozola.jar

    nano /etc/php.ini
    short_open_tag = On

    service httpd restart
    service zm start
     6) Check that the zm web interface is reachable.