Wednesday, June 27, 2012

Restart puppet using puppet

If you want to manage a Puppet upgrade via a Puppet manifest, you may run into trouble when it comes to restarting the Puppet service.

When Puppet stops the service, it kills the very Puppet process that is applying the catalog.
Afterwards there is no process left to issue the start command.




You can solve this by using the following service resource:


service { 'puppet':
  enable   => true,
  ensure   => running,
  restart  => '/usr/bin/nohup /etc/init.d/puppet restart &',
}


This forks the restart command into the background via nohup, so it survives the death of the Puppet agent process.
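
The upgrade itself can then be modelled with a package resource that notifies this service. A minimal sketch (the ensure value is an assumption, adjust it to your upgrade policy):

# Hypothetical sketch: keep the agent package current and trigger the
# backgrounded restart from the service resource above when it changes.
package { 'puppet':
  ensure => latest,
  notify => Service['puppet'],
}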

Monday, May 14, 2012

Puppet 2.7.x and Debian ruby 1.9.1

According to Puppet Labs, one should stay with Ruby 1.8.7 when running Puppet - see the Puppet FAQ on supported Ruby versions.

Most things work, but the CA and SSL handling have an issue when running Ruby 1.9.x: mixing and matching Ruby versions between puppetmasterd and puppetd causes a "certificate verify failed" error.

The solution is to create a symlink for the Puppet CA certificate in the SSL certs directory, as described here.
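
A minimal sketch of that workaround, assuming the default Debian locations for the Puppet ssldir and the OpenSSL certs directory (adjust the paths to your setup):

# Assumption: the Puppet CA certificate lives in /var/lib/puppet/ssl/certs/ca.pem
# and OpenSSL looks up CA certificates by hash in /etc/ssl/certs.
CA=/var/lib/puppet/ssl/certs/ca.pem
HASH=$(openssl x509 -noout -hash -in "$CA")
ln -s "$CA" "/etc/ssl/certs/${HASH}.0"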

Puppetlabs removes dynamic variable scoping

When you run Puppet 2.7.12 or higher, you will see a warning message telling you that dynamic variable scope look-up will be removed in Puppet 2.8.x.

What to do if you have variables defined in a node definition?

  1. switch to hiera
  2. write a mockup





A mockup can be built in the following way:

  1. write a new module with a parameterized class
  2. declare the class and pass the proper parameter

Why do you have to do it this ugly way? Is there no way to use the scope of a node definition directly?

No. Nodes do not have any scope. Scope is limited to modules and classes.


class base_variable ( $stage = 'first', $dummyvariable ) {
  case $dummyvariable {
    'live':     { $truevariable = 'live' }
    'testing':  { $truevariable = 'testing' }
    'testing2': { $truevariable = 'testing2' }
  }
}

In the node definition you can now declare this class:

node 'default.server.domain.tld' {
  include stages
  class { 'base_variable':
    dummyvariable => 'testing2',
  }
  include base::dev
  ....
}
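
For comparison, option 1 from the list above would replace the dummy class with a hiera look-up. A rough sketch, assuming hiera-puppet is installed and a node-level hierarchy is configured (the key name is just a placeholder):

# Sketch of option 1: look the value up with hiera instead of a dummy class.
# Assumes a node-level hierarchy provides 'truevariable'; the second
# argument is a default fallback.
$truevariable = hiera('truevariable', 'testing')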


Monday, August 16, 2010

Debian/GNU Squeeze with Xen 4.0 - Part 1 - base system

This is the first post regarding Debian/GNU Linux running Xen 4.0 on a new piece of hardware.

This posting will cover base OS installation:
- Preparation/Partitioning
- Base Debian/GNU Linux installation on Dom0
- Xen packages installation
- Make Xen-Kernel default for Grub2

Upcoming posts will cover the following items:

- Installation of Debian/GNU Linux as DomU
- Installation of Windows Vista as DomU (using HVM)
- Installation of OpenSolaris 11 as DomU (using HVM)
- Installation of BSD systems as DomU



Hardware description:

Motherboard: Intel DQ57TM
CPU: Intel Core i7
RAM: 16 GB
HDD: 2x500 GB SATA (internal)

Cost (Aug 2010): 1400 Euro

Partitioning:

- RAID 1 (mirroring) across both disks
- RAID 1 for the / file system and swap
- LVM on top of RAID 1 for all virtual guest systems

Trouble:

The normal Debian GNU/Linux Lenny (stable) network installation CD does not recognize the network interface (e1000e).
The base installation therefore required the Debian GNU/Linux Squeeze (testing) network installation CD.

During Partitioning:

- Create empty DOS partition label on both disks
- Add RAID Partition for /
- Add RAID Partition for swap
- Add RAID Partition for LVM

/dev/sd[a|b]1 - 15 GB - Linux raid autodetect
/dev/sd[a|b]2 - 1 GB - Linux raid autodetect
/dev/sd[a|b]3 - 471 GB - Linux raid autodetect

Within the Debian installer I chose not to create any volumes in the empty LVM volume group.
The logical volumes will be created after the Xen setup.
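
A minimal sketch of that later step, assuming the large RAID 1 array shows up as /dev/md2 and using placeholder names for the volume group and the guest volume:

# /dev/md2, vg_xen and domu-debian are assumptions - adjust to your layout
pvcreate /dev/md2
vgcreate vg_xen /dev/md2
lvcreate -L 20G -n domu-debian vg_xen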

Finish the base installation. Do not select any additional tasks in tasksel.

Installation of Xen Kernel


After the first boot into the newly installed system I installed xen-linux-system-2.6.32-5-xen-amd64:


apt-get install xen-linux-system-2.6.32-5-xen-amd64


Now I rebooted into the Xen kernel. Since Debian GNU/Linux Squeeze uses GRUB 2, I reviewed the list of available kernels in the GRUB menu and wrote down the entry number of the Xen kernel.

Then I set "GRUB_DEFAULT=6" in /etc/default/grub and ran update-grub afterwards.
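
For reference, that step boils down to the following; the entry number 6 is specific to this machine, so count your own GRUB menu entries starting at 0:

# the Xen kernel is menu entry 6 on this system - an assumption, check yours
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=6/' /etc/default/grub
update-grub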

Network Configuration

Since I want my virtual systems to be reachable from the network, I decided to use bridging on the physical network interface.

In /etc/network/interfaces I set up eth0 as a bridge port and added a new bridge interface named xenbr0.
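
A minimal sketch of such an interfaces file, assuming a static address (the addresses are placeholders and the bridge-utils package is installed):

# /etc/network/interfaces - xenbr0 with eth0 as its only bridge port
auto xenbr0
iface xenbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off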

In /etc/xen/xend-config.sxp I disabled Xen's own network-bridge script by setting the network-script to /bin/true, since the bridge is already managed by the system.


Sunday, March 1, 2009

KDE 4.2 on Debian/GNU Lenny

I was eagerly waiting for KDE 4.2.

Compared to version 4.1 the new version seems far more stable.

The only "semi-official" way to install KDE 4.2 on Lenny is by putting experimental in your sources list and make use of apt-pinning.

Personally I do not like apt-pinning.

Therefore I made a backport.


Put the following into your sources.list:

   deb http://www.debian-desktop.org/pub/linux/kde42 lenny kde42


If you would like to review the sources:

   deb-src http://www.debian-desktop.org/pub/linux/kde42 lenny kde42

The packages are as yet unsigned.
Hopefully I will find some time within the next weeks to sign them as well.



Friday, February 27, 2009

Mac OS X repair disk without boot CD

It always happens when you have left your installation CD at home: diskutil reports errors that have to be fixed.

But you cannot run the repair while you are booted from that very disk.




Restart the system and hold Cmd+S during boot.
This will bring you into single-user mode.

Then run /sbin/fsck -fy and enter exit afterwards.
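
In short, the session at the single-user prompt looks like this:

# force-check and repair the boot volume, then continue booting
/sbin/fsck -fy
exit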

Tuesday, November 4, 2008

PostgreSQL Replication and Load-Balancing

The task was to set up a number of PostgreSQL systems that should always have identical data so one can do load-balancing.

I found the tool pgpool-II (http://pgfoundry.org/projects/pgpool/) quite useful for this.
Unfortunately Debian GNU/Linux only has version 1.3 in its repositories (lenny).
I took the sources of Version 2.1 and created a new package.

Simple load-balancing and replication are quite easy and straightforward. But...
what do you do if one node breaks or its database needs to be reinitialized?



Even here pgpool-II offers lots of help by making use of the PITR (point in time recovery) functionality of PostgreSQL.

I came up with the following solution:

1. Install first node with postgres and other necessary packages:

apt-get install postgresql-8.3 postgresql-client-8.3 \
  postgresql-client-common postgresql-common postgresql-contrib-8.3 \
  postgresql-server-dev-8.3 make

2. create a path for the PITR archive logs (we need them later for recovery)

mkdir /var/lib/postgresql-archive
chown postgres. /var/lib/postgresql-archive

3. ssh-keys for postgresql user

Create SSH keys without a passphrase for the postgres user and distribute the public key as authorized_keys to all nodes.
Hint: we need to copy data between the nodes later without any interaction or login prompt.
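
A minimal sketch of that step, run as the postgres user (the hostnames are placeholders):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in postgres-1 postgres-2; do
  ssh-copy-id postgres@$host
done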

4. make changes to the postgresql configuration file

/etc/postgresql/8.3/main/postgresql.conf

listen_addresses = '*'
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql-archive/%f && cp %p /var/lib/postgresql-archive/%f'
archive_timeout = 60

5. make changes to the pgpool configuration file

listen_addresses = '*'
replication_mode = true
load_balance_mode = true
pgpool2_hostname = 'localhost'
recovery_timeout = 90
replication_timeout = 5000

backend_hostname0 = 'postgres-1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/8.3/main/'

backend_hostname1 = 'postgres-2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/8.3/main/'

[...]

recovery_user = 'postgres'
recovery_password =
recovery_1st_stage_command = 'copy-base-backup'
recovery_2nd_stage_command = 'pgpool_recovery_pitr'

6. make changes to the configuration file for the pgpool control processor

/etc/pcp.conf

Use the command pg_md5 to generate MD5 hashes of passwords.

pg_md5 <password>

add entries to the configuration file:

<username>:<md5 hash of password>
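
A hypothetical example (the user name and password are placeholders):

pg_md5 secretpassword
echo "admin:$(pg_md5 secretpassword)" >> /etc/pcp.conf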

7. create 1st stage backup script in /var/lib/postgresql/8.3/main/copy-base-backup

#!/bin/sh

datadir=$1
DEST=$2
DESTDIR=$3

# switch master to prepare for backup
psql -c "select pg_start_backup('pgpool-recovery')" postgres

# prepare the restore command for archive fetching from the master
# warning! the scp hostname should be different for every system!
echo "restore_command = 'scp postgres-1:/var/lib/postgresql-archive/%f %p'" > /var/lib/postgresql/8.3/main/recovery.conf

# create complete tarball on master
tar -C /var/lib/postgresql/8.3 -czf main.tar.gz \
  main/global main/base main/pg_multixact main/pg_subtrans \
  main/pg_clog main/pg_xlog main/pg_twophase main/pg_tblspc \
  main/recovery.conf main/backup_label.old

# switch master back to normal operation
psql -c 'select pg_stop_backup()' postgres

# copy tarball to destination
scp main.tar.gz $DEST:/var/lib/postgresql/8.3/

# last line

8. create 2nd stage recovery script /var/lib/postgresql/8.3/main/pgpool_recovery_pitr

#! /bin/sh

psql -c 'select pg_switch_xlog()' postgres

# last line

9. create post restore initialization script /var/lib/postgresql/8.3/main/pgpool_remote_start

#!/bin/sh

if [ $# -ne 2 ]
then
  echo "pgpool_remote_start remote_host remote_datadir"
  exit 1
fi

DEST=$1
DESTDIR=$2
PGCTL=/usr/lib/postgresql/8.3/bin/pg_ctl

ssh -T $DEST $PGCTL -w -D /var/lib/postgresql/8.3/main/ stop 2>/dev/null 1>/dev/null < /dev/null

# delete old content
ssh -T $DEST 'cd /var/lib/postgresql/8.3/; rm -r main/global main/base main/pg_multixact main/pg_subtrans main/pg_clog main/pg_xlog main/pg_twophase main/pg_tblspc main/recovery.conf main/backup_label.old'

# expand the archive on the remote system
ssh -T $DEST 'cd /var/lib/postgresql/8.3/; tar zxf main.tar.gz' 2>/dev/null 1>/dev/null < /dev/null

# restart postgresql on the remote system
ssh -T $DEST $PGCTL -w -D /etc/postgresql/8.3/main/ start 2>/dev/null 1>/dev/null < /dev/null &

# last line

10. prepare databases

log in to first database node.

stop postgresql

cd /var/lib/postgresql/8.3/; tar czf main.tar.gz main

stop postgresql on all other nodes

copy main.tar.gz to all other nodes

extract main.tar.gz on all other nodes

start postgresql on all nodes
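
A minimal sketch of these steps, assuming two nodes named postgres-1 and postgres-2 and the Debian init script (adjust names and paths to your setup):

# on postgres-1: stop the server, pack the data directory, push it out
/etc/init.d/postgresql-8.3 stop
cd /var/lib/postgresql/8.3/ && tar czf main.tar.gz main
scp main.tar.gz postgres-2:/var/lib/postgresql/8.3/

# on postgres-2 (and every further node): stop, unpack, start
/etc/init.d/postgresql-8.3 stop
cd /var/lib/postgresql/8.3/ && tar zxf main.tar.gz
/etc/init.d/postgresql-8.3 start

# finally start postgresql on postgres-1 again
/etc/init.d/postgresql-8.3 start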

11. install pgpool2 Admin interface (web based)

fetch pgpoolAdmin sources from http://pgfoundry.org/projects/pgpool/

apt-get install apache2 libapache2-mod-php5 php5-pgsql

copy content to /var/www/pgpooladmin

open a browser: http://<nodename>/pgpooladmin/install/

follow the setup description (name proper paths to files, set permissions)

12. prepare system for pgpool in combination with PITR on line recovery

The PostgreSQL on-line recovery makes use of PITR (point-in-time recovery).

PostgreSQL archives all transaction logs to /var/lib/postgresql-archive and keeps track of the last completed transaction.

To make use of pgpool on-line recovery, the template1 database needs to have the pgpool C-language recovery function installed.

cd /root/pgpool2-2.1/sql/pgpool-recovery

make install

su - postgres

cd /root/pgpool2-2.1/sql/pgpool-recovery

psql -f pgpool-recovery.sql template1


13. recovery

Never run the recovery from the host that you want to recover!

log in to a functional node and run the following commands:

pcp_detach_node 10 localhost 9898 <username> <password> <number of node to recover>

pcp_recovery_node 10 localhost 9898 <username> <password> <number of node to recover>

afterwards log in to the web-admin interfaces on all nodes and enable the recovered system

or log in to all other nodes and run

pcp_attach_node 10 localhost 9898 <username> <password> <number of node to attach>