Friday, May 12, 2017

kafka: decrease topic replication factor

Suddenly I had to shrink a Kafka cluster from 5 nodes down to 3. All the topics had replication factor 3, which is wasteful on just 3 nodes. You can't simply go and lower it, but there is a neat, if slightly fiddly, way to do it via manual partition assignment.
So, the procedure goes roughly like this:
1. make a JSON file with the list of topics.
2. from it, generate a JSON with the partition-to-broker assignment (you can, for example, list only the 3 brokers that will remain, if the other 2 haven't been killed off yet and their partitions haven't been moved)
3. delete one broker from every partition's replica list (this can be scripted, see the sketch right after this list)
4. feed the JSON to Kafka
5. check with describe that the factor is now 2.
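Step 3 doesn't have to be done by hand in vi; a minimal sketch, assuming jq is on the box (file names match the example below, broker 3 being the one evicted from every replica list):

jq '.partitions[].replicas -= [3]' part_cur.json > part_new.json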

An example on a test topic; along the way we also take one of the 3 brokers (broker 3) out of rotation. Note that the --verify run below happens before --execute, so it just prints ERRORs: the current assignment doesn't match the target yet, which is expected:

root@kafka1:~# /opt/kafka/bin/kafka-topics.sh --zookeeper zoo1:2181/kafka --create --topic repl-test --replication-factor 3 --partitions 8
Created topic "repl-test".
root@kafka1:~# /opt/kafka/bin/kafka-topics.sh --zookeeper zoo1:2181/kafka --describe --topic repl-test
Topic:repl-test PartitionCount:8 ReplicationFactor:3 Configs:
Topic: repl-test Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: repl-test Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: repl-test Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: repl-test Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: repl-test Partition: 4 Leader: 1 Replicas: 1,3,2 Isr: 1,3,2
Topic: repl-test Partition: 5 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Topic: repl-test Partition: 6 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: repl-test Partition: 7 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
root@kafka1:~# vi topic.json
root@kafka1:~# cat topic.json

{"topics":
[{"topic": "repl-test"}],
"version":1
}
root@kafka1:~# /opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper zoo1:2181/kafka --topics-to-move-json-file topic.json --broker-list "1,2,3" --generate
Current partition replica assignment

{"version":1,"partitions":[{"topic":"repl-test","partition":2,"replicas":[2,3,1]},{"topic":"repl-test","partition":7,"replicas":[1,2,3]},{"topic":"repl-test","partition":5,"replicas":[2,1,3]},{"topic":"repl-test","partition":3,"replicas":[3,2,1]},{"topic":"repl-test","partition":0,"replicas":[3,1,2]},{"topic":"repl-test","partition":4,"replicas":[1,3,2]},{"topic":"repl-test","partition":1,"replicas":[1,2,3]},{"topic":"repl-test","partition":6,"replicas":[3,1,2]}]}
Proposed partition reassignment configuration

{"version":1,"partitions":[{"topic":"repl-test","partition":2,"replicas":[2,3,1]},{"topic":"repl-test","partition":7,"replicas":[1,2,3]},{"topic":"repl-test","partition":5,"replicas":[2,1,3]},{"topic":"repl-test","partition":3,"replicas":[3,2,1]},{"topic":"repl-test","partition":0,"replicas":[3,1,2]},{"topic":"repl-test","partition":4,"replicas":[1,3,2]},{"topic":"repl-test","partition":1,"replicas":[1,2,3]},{"topic":"repl-test","partition":6,"replicas":[3,1,2]}]}
root@kafka1:~# vi tmppart.json
root@kafka1:~# cat tmppart.json

{"version":1,"partitions":[{"topic":"repl-test","partition":2,"replicas":[2,3,1]},{"topic":"repl-test","partition":7,"replicas":[1,2,3]},{"topic":"repl-test","partition":5,"replicas":[2,1,3]},{"topic":"repl-test","partition":3,"replicas":[3,2,1]},{"topic":"repl-test","partition":0,"replicas":[3,1,2]},{"topic":"repl-test","partition":4,"replicas":[1,3,2]},{"topic":"repl-test","partition":1,"replicas":[1,2,3]},{"topic":"repl-test","partition":6,"replicas":[3,1,2]}]}                                                                                                  
root@kafka1:~# cat tmppart.json | python -m json.tool > part_cur.json
root@kafka1:~# cat part_cur.json
{                                                                                                                                                                                           
"partitions": [                                                                                                                                                                             
{                                                                                                                                                                                           
"partition": 2,                                                                                                                                                                             
"replicas": [                                                                                                                                                                               
2,                                                                                                                                                                                          
3,                                                                                                                                                                                          
1                                                                                                                                                                                           
],                                                                                                                                                                                          
"topic": "repl-test"                                                                                                                                                                        
},                                                                                                                                                                                          
{                                                                                                                                                                                           
"partition": 7,                                                                                                                                                                             
"replicas": [                                                                                                                                                                               
1,                                                                                                                                                                                          
2,
3
],
"topic": "repl-test"
},
{
"partition": 5,
"replicas": [
2,
1,
3
],
"topic": "repl-test"
},
{
"partition": 3,
"replicas": [
3,
2,
1
],
"topic": "repl-test"
},
{
"partition": 0,
"replicas": [
3,
1,
2
],
"topic": "repl-test"
},
{
"partition": 4,
"replicas": [
1,
3,
2
],
"topic": "repl-test"
},
{
"partition": 1,
"replicas": [
1,
2,
3
],
"topic": "repl-test"
},
{
"partition": 6,
"replicas": [
3,
1,
2
],
"topic": "repl-test"
}
],
"version": 1
}
root@kafka1:~# cp part_cur.json part_new.json
root@kafka1:~# vi part_new.json
root@kafka1:~# cat part_new.json

{
"partitions": [
{
"partition": 2,
"replicas": [
2,
1
],
"topic": "repl-test"
},
{
"partition": 7,
"replicas": [
1,
2
],
"topic": "repl-test"
},
{
"partition": 5,
"replicas": [
2,
1
],
"topic": "repl-test"
},
{
"partition": 3,
"replicas": [
2,
1
],
"topic": "repl-test"
},
{
"partition": 0,
"replicas": [
1,
2
],
"topic": "repl-test"
},
{
"partition": 4,
"replicas": [
1,
2
],
"topic": "repl-test"
},
{
"partition": 1,
"replicas": [
1,
2
],
"topic": "repl-test"
},
{
"partition": 6,
"replicas": [
1,
2
],
"topic": "repl-test"
}
],
"version": 1
}
root@kafka1:~# /opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper zoo1:2181/kafka --reassignment-json-file part_new.json --verify
Status of partition reassignment:
ERROR: Assigned replicas (3,1,2) don't match the list of replicas for reassignment (1,2) for partition [repl-test,0]
ERROR: Assigned replicas (1,3,2) don't match the list of replicas for reassignment (1,2) for partition [repl-test,4]
ERROR: Assigned replicas (2,3,1) don't match the list of replicas for reassignment (2,1) for partition [repl-test,2]
ERROR: Assigned replicas (1,2,3) don't match the list of replicas for reassignment (1,2) for partition [repl-test,7]
ERROR: Assigned replicas (3,1,2) don't match the list of replicas for reassignment (1,2) for partition [repl-test,6]
ERROR: Assigned replicas (2,1,3) don't match the list of replicas for reassignment (2,1) for partition [repl-test,5]
ERROR: Assigned replicas (3,2,1) don't match the list of replicas for reassignment (2,1) for partition [repl-test,3]
ERROR: Assigned replicas (1,2,3) don't match the list of replicas for reassignment (1,2) for partition [repl-test,1]
Reassignment of partition [repl-test,0] failed
Reassignment of partition [repl-test,4] failed
Reassignment of partition [repl-test,2] failed
Reassignment of partition [repl-test,7] failed
Reassignment of partition [repl-test,6] failed
Reassignment of partition [repl-test,5] failed
Reassignment of partition [repl-test,3] failed
Reassignment of partition [repl-test,1] failed
root@kafka1:~# /opt/kafka/bin/kafka-topics.sh --zookeeper zoo1:2181/kafka --describe --topic repl-test
Topic:repl-test PartitionCount:8 ReplicationFactor:3 Configs:
Topic: repl-test Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: repl-test Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: repl-test Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: repl-test Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: repl-test Partition: 4 Leader: 1 Replicas: 1,3,2 Isr: 1,3,2
Topic: repl-test Partition: 5 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Topic: repl-test Partition: 6 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: repl-test Partition: 7 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
root@kafka1:~# /opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper zoo1:2181/kafka --reassignment-json-file part_new.json --execute
Current partition replica assignment


{"version":1,"partitions":[{"topic":"repl-test","partition":2,"replicas":[2,3,1]},{"topic":"repl-test","partition":7,"replicas":[1,2,3]},{"topic":"repl-test","partition":5,"replicas":[2,1,3]},{"topic":"repl-test","partition":3,"replicas":[3,2,1]},{"topic":"repl-test","partition":0,"replicas":[3,1,2]},{"topic":"repl-test","partition":4,"replicas":[1,3,2]},{"topic":"repl-test","partition":1,"replicas":[1,2,3]},{"topic":"repl-test","partition":6,"replicas":[3,1,2]}]}


Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions {"version":1,"partitions":[{"topic":"repl-test","partition":0,"replicas":[1,2]},{"topic":"repl-test","partition":4,"replicas":[1,2]},{"topic":"repl-test","partition":2,"replicas":[2,1]},{"topic":"repl-test","partition":7,"replicas":[1,2]},{"topic":"repl-test","partition":6,"replicas":[1,2]},{"topic":"repl-test","partition":5,"replicas":[2,1]},{"topic":"repl-test","partition":3,"replicas":[2,1]},{"topic":"repl-test","partition":1,"replicas":[1,2]}]}
root@kafka1:~# /opt/kafka/bin/kafka-topics.sh --zookeeper zoo1:2181/kafka --describe --topic repl-test
Topic:repl-test PartitionCount:8 ReplicationFactor:2 Configs:
Topic: repl-test Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: repl-test Partition: 1 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: repl-test Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: repl-test Partition: 3 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: repl-test Partition: 4 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: repl-test Partition: 5 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: repl-test Partition: 6 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: repl-test Partition: 7 Leader: 1 Replicas: 1,2 Isr: 1,2

Thursday, November 20, 2014

and the clocks again

Set the clock on the node to UTC:
# cat /etc/sysconfig/clock
UTC=yes
ZONE="Europe/Moscow"

Check that the system time is correct, with the correct timezone, and push it into the hardware clock:
# hwclock --systohc --utc

Check that the VM's config has:
<clock offset='utc' adjustment='reset'/>

If the clock in the VM has still drifted anyway, set the system time (via NTP, for example); after that it won't go anywhere and will stay correct, with the correct zone.
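A minimal sketch of that last step (the NTP server is just an example, any reachable one will do):

# ntpdate pool.ntp.org
# hwclock --systohc --utc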

Friday, September 5, 2014

conntrack doesn't work inside a VPS

A customer adds rules with -m state and they won't apply.

vzctl stop 99111
vzctl set 99111 --save --iptables ipt_conntrack --iptables ipt_state --iptables iptable_filter --iptables ip_tables
vzctl start 99111

ta-dam ))
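A quick way to make sure state matching works inside the container now (the rule itself is a throwaway example):

vzctl enter 99111
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT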

The RAID controller doesn't come up after the FC controller loads

The board is an X8DTU-LN4F+, with an Emulex FC controller and an Adaptec 6405 RAID controller.
The problem: after the FC controller's ROM loads, the RAID controller doesn't even try to come up; as a result, the server doesn't see its disks and won't boot.
However, if you hit the boot device selection key during startup, everything loads fine, and the Adaptec is in fact the only option there.

SAN boot is disabled in the Emulex.
Tried swapping the cards between slots: nothing.

Dug through everything boot-related in the BIOS: nothing there.
In the end I started poking at the PCI modes, and suddenly, with the slots forced to x8/x8, it worked as it should:

(screenshot: the BIOS PCI slot settings forced to x8/x8 mode)

Thursday, September 4, 2014

FC id

To pull the WWID off an FC card:
systool -c fc_host -v
Among other things, the output will contain something like:
port_name           = "0x21000024ffxxxxxx"
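To pull out just the port names, a grep is enough:

systool -c fc_host -v | grep port_name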

Tuesday, September 2, 2014

Removing dependencies

Nuked two online games I'd been playing for more than a year. Just like that )

Wednesday, August 20, 2014

Recursive rewrite with CGI

Set up PHP as CGI for a customer (no suPHP), and he started getting 500 errors because of a rewrite loop:
Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace
r->uri = /cgi-bin/php/index.php
redirected from r->uri = /index.php
redirected from r->uri = /cgi-bin/php/index.php
redirected from r->uri = /index.php
redirected from r->uri = /cgi-bin/php/index.php
redirected from r->uri = /index.php
redirected from r->uri = /cgi-bin/php/index.php
redirected from r->uri = /index.php
redirected from r->uri = /cgi-bin/php/index.php
redirected from r->uri = /index.php
redirected from r->uri = /

Added a separate rewrite for cgi-bin/php, which fixed it:
RewriteCond %{REQUEST_URI} ^/cgi-bin/php(.*)
RewriteRule . - [L]
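The PHP handler rule itself isn't shown above; assuming .php requests get rewritten into the CGI wrapper, the whole block would look roughly like this (the last rule is a reconstruction, not the customer's actual config):

RewriteEngine On
# requests already pointing at the wrapper pass through untouched,
# otherwise the next rule would rewrite them again and loop
RewriteCond %{REQUEST_URI} ^/cgi-bin/php(.*)
RewriteRule . - [L]
# hand all .php requests to the CGI wrapper
RewriteRule ^/?(.*\.php)$ /cgi-bin/php/$1 [L]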

Wednesday, August 13, 2014

Replacing a disk in a software md RAID

So as not to dig through the man pages:

# sfdisk -d /dev/sdb > sdb.out
# sfdisk /dev/sdc < sdb.out
# mdadm --manage /dev/md0 --remove /dev/sda2
# mdadm --manage /dev/md0 --add /dev/sdc2
# cat /proc/mdstat

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2](F) sdb3[1]
      273659136 blocks [2/1] [_U]
      bitmap: 39/131 pages [156KB], 1024KB chunk

md0 : active raid1 sdc2[2] sdb2[1]
      8193024 blocks [2/1] [_U]
      [>....................]  recovery =  1.5% (128000/8193024) finish=2.0min speed=64000K/sec
      bitmap: 0/126 pages [0KB], 32KB chunk

md1 : active raid1 sdb1[1] sda1[2](F)
      30716160 blocks [2/1] [_U]
      bitmap: 48/235 pages [192KB], 64KB chunk

# mdadm --manage /dev/md1 --remove /dev/sda1
# mdadm --manage /dev/md1 --add /dev/sdc1
# mdadm --manage /dev/md2 --remove /dev/sda3
# mdadm --manage /dev/md2 --add /dev/sdc3

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdc3[2] sdb3[1]
      273659136 blocks [2/1] [_U]
        resync=DELAYED
      bitmap: 39/131 pages [156KB], 1024KB chunk

md0 : active raid1 sdc2[0] sdb2[1]
      8193024 blocks [2/2] [UU]
      bitmap: 0/126 pages [0KB], 32KB chunk

md1 : active raid1 sdc1[2] sdb1[1]
      30716160 blocks [2/1] [_U]
      [=====>...............]  recovery = 28.7% (8835968/30716160) finish=6.7min speed=54126K/sec
      bitmap: 48/235 pages [192KB], 64KB chunk
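Two things not shown above, as a reminder (hypothetical commands, not from the original session): if the dying disk isn't already marked (F), fail it before removing, and don't forget the bootloader on the replacement disk:

# mdadm --manage /dev/md0 --fail /dev/sda2
# grub-install /dev/sdc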

Sunday, August 3, 2014

Centos 5: from baremetal to Xen domU

Needed to move a bare-metal CentOS 5 server into Xen. As usual, I made an LVM volume, synced the data over, fixed fstab and grub: it doesn't boot, the kernel won't do.
Then I remembered that CentOS 5 never had pvops, so I decided to install the Xen kernel in a chroot on the LVM volume. No such luck; apparently the initrd came out broken. So I did it this way:
1. installed the Xen kernel on the source server.
2. re-synced the server
3. fixed fstab, fixed grub.conf, bringing the Xen section to this form:
title CentOS (2.6.18-371.11.1.el5xen)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-371.11.1.el5xen ro root=/dev/xvda1
        initrd /boot/initrd-2.6.18-371.11.1.el5xen.img
4. rebuilt the initrd, just in case (see the sketch below)
5. fixed /etc/modprobe.conf:
alias eth0 xennet
alias eth1 xennet
alias scsi_hostadapter xenblk
#alias eth0 e1000
#alias eth1 e1000
#alias eth2 e1000
#alias scsi_hostadapter aacraid
#alias scsi_hostadapter1 ata_piix
After that the VM came up.
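For step 4, on CentOS 5 the initrd is rebuilt with mkinitrd, roughly like this (a sketch; forcing the Xen modules in explicitly is my assumption, a plain -f rebuild may be enough):

mkinitrd -f --with=xennet --preload=xenblk /boot/initrd-2.6.18-371.11.1.el5xen.img 2.6.18-371.11.1.el5xen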

UPDATE

Did the same thing with another machine and it didn't come up. Panic:

XENBUS: Device with no driver: device/console/0
XENBUS: Device with no driver: device/vbd/51713
XENBUS: Device with no driver: device/vbd/51714
XENBUS: Device with no driver: device/vif/0
XENBUS: Device with no driver: device/vif/1
Initalizing network drop monitor service
Write protecting the kernel read-only data: 506k
USB Universal Host Controller Interface driver v3.0
SCSI subsystem initialized
Adaptec aacraid driver 1.1-5[24702]
device-mapper: uevent: version 1.0.3
device-mapper: ioctl: 4.11.6-ioctl (2011-02-18) initialised: dm-devel@redhat.com
device-mapper: dm-raid45: initialized v0.2594l
Kernel panic - not syncing: Attempted to kill init!
Fixed it like this:
mount /dev/vg_vm/s9 /1
mount --bind /proc /1/proc/
mount --bind /sys /1/sys
chroot /1/
yum erase kernel-xen
yum install kernel-xen
vi /boot/grub/grub.conf
Apparently I'd botched the manual initrd rebuild.
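One thing that can bite in this kind of chroot (an assumption, it wasn't needed above): the kernel post-install scripts rebuild the initrd and may want /dev, so bind-mount it along with /proc and /sys:

mount --bind /dev /1/dev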

Xen: FreeBSD 10 install

How to put FreeBSD onto Xen:
virt-install -n testbsd -r 512 --vcpus=1 -v --hvm --cdrom /root/FreeBSD-10.0-RELEASE-amd64-disc1.iso --disk path=/dev/vg_vm/testbsd --vnc --network=bridge:br0

Then connect over VNC and just click through the installer )
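To find out which VNC display the guest ended up on (the dom0 hostname is just an example):

virsh vncdisplay testbsd    # prints the display, e.g. :0, i.e. TCP port 5900
vncviewer xenhost:0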