Tuesday, November 4, 2008

PostgreSQL Replication and Load-Balancing

The task was to set up a number of PostgreSQL systems that always hold identical data, so that queries can be load-balanced across them.

I found the tool pgpool-II (http://pgfoundry.org/projects/pgpool/) quite useful for this.
Unfortunately Debian GNU/Linux (lenny) only ships version 1.3 in its repositories.
I took the sources of version 2.1 and created a new package.

Simple load-balancing and replication is quite easy and straightforward. But...
what to do if one node breaks or its database needs to be reinitialized?



Even here pgpool-II offers lots of help by making use of the PITR (point in time recovery) functionality of PostgreSQL.

I came up with the following solution:

1. Install the first node with postgres and other necessary packages:

apt-get install postgresql-8.3 postgresql-client-8.3 \
    postgresql-client-common postgresql-common postgresql-contrib-8.3 \
    postgresql-server-dev-8.3 make

2. create a path for the PITR archive logs (we need them later for recovery)

mkdir /var/lib/postgresql-archive
chown postgres: /var/lib/postgresql-archive

3. ssh keys for the postgres user

Create ssh keys without a passphrase for the postgres user and distribute the public key to the authorized_keys file on all nodes.
Hint: we need to copy data between the nodes without interactive logins (see the sketch below).
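A minimal sketch of the key setup; the hostname postgres-2 is taken from the examples below, and ssh-copy-id is just one way to distribute the key:

su - postgres
ssh-keygen -t rsa -N ''            # key without passphrase
ssh-copy-id postgres@postgres-2    # repeat for every other node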

4. make changes to the postgresql configuration file /etc/postgresql/8.3/main/postgresql.conf

listen_addresses = '*'
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql-archive/%f && cp %p /var/lib/postgresql-archive/%f'
archive_timeout = 60

5. make changes to the pgpool configuration file

listen_addresses = '*'
replication_mode = true
load_balance_mode = true
pgpool2_hostname = 'localhost'
recovery_timeout = 90
replication_timeout = 5000
backend_hostname0 = 'postgres-1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/8.3/main/'
backend_hostname1 = 'postgres-2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/8.3/main/'

[...]

recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'copy-base-backup'
recovery_2nd_stage_command = 'pgpool-recovery-pitr'

6. make changes to the configuration file for the pgpool control processor, /etc/pcp.conf

Use the command pg_md5 to generate md5 hashes of passwords:

pg_md5 <password>

Add entries to the configuration file:

<username>:<md5 hash of password>
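For example, assuming a pcp user named admin with the (weak) password secret:

$ pg_md5 secret
5ebe2294ecd0e0f08eab7690d2a6ee69

# resulting entry in /etc/pcp.conf
admin:5ebe2294ecd0e0f08eab7690d2a6ee69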

7. create the 1st stage backup script in /var/lib/postgresql/8.3/main/copy-base-backup

#!/bin/sh
# arguments passed by pgpool-II: master data dir, recovery target host, target data dir
DATADIR=$1
DEST=$2
DESTDIR=$3

# switch master into backup mode
psql -c "select pg_start_backup('pgpool-recovery')" postgres

# prepare the restore command for archive fetching from the master
# warning! the scp hostname must point at this master, so it differs on every system!
echo "restore_command = 'scp postgres-1:/var/lib/postgresql-archive/%f %p'" > /var/lib/postgresql/8.3/main/recovery.conf

# create a complete tarball on the master
tar -C /var/lib/postgresql/8.3 -czf main.tar.gz \
    main/global main/base main/pg_multixact main/pg_subtrans \
    main/pg_clog main/pg_xlog main/pg_twophase main/pg_tblspc \
    main/recovery.conf main/backup_label.old

# switch master back to normal operation
psql -c 'select pg_stop_backup()' postgres

# copy the tarball to the destination
scp main.tar.gz $DEST:/var/lib/postgresql/8.3/

# last line

8. create the 2nd stage recovery script /var/lib/postgresql/8.3/main/pgpool-recovery-pitr (the name must match recovery_2nd_stage_command above)

#!/bin/sh
# force a WAL segment switch so all committed transactions reach the archive
psql -c 'select pg_switch_xlog()' postgres

# last line

9. create the post-restore initialization script /var/lib/postgresql/8.3/main/pgpool_remote_start

#!/bin/sh

if [ $# -ne 2 ]
then
    echo "pgpool_remote_start remote_host remote_datadir"
    exit 1
fi

DEST=$1
DESTDIR=$2
PGCTL=/usr/lib/postgresql/8.3/bin/pg_ctl

# stop postgresql on the remote system
ssh -T $DEST $PGCTL -w -D /var/lib/postgresql/8.3/main/ stop 2>/dev/null 1>/dev/null < /dev/null

# delete the old content
ssh -T $DEST 'cd /var/lib/postgresql/8.3/; rm -r main/global main/base \
    main/pg_multixact main/pg_subtrans main/pg_clog main/pg_xlog \
    main/pg_twophase main/pg_tblspc main/recovery.conf \
    main/backup_label.old'

# expand the archive on the remote system
ssh -T $DEST 'cd /var/lib/postgresql/8.3/; tar zxf main.tar.gz' 2>/dev/null 1>/dev/null < /dev/null

# restart postgresql on the remote system (on Debian, pg_ctl takes the config directory here)
ssh -T $DEST $PGCTL -w -D /etc/postgresql/8.3/main/ start 2>/dev/null 1>/dev/null < /dev/null &

# last line

10. prepare the databases

Log in to the first database node and stop postgresql, then create a tarball of the data directory:

cd /var/lib/postgresql/8.3/; tar zcf main.tar.gz main

Stop postgresql on all other nodes, copy main.tar.gz over, and extract it there.
Finally start postgresql on all nodes (see the sketch below).
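A sketch of the whole step, assuming the Debian init script name and the example node postgres-2:

# on the first node, as root
/etc/init.d/postgresql-8.3 stop
cd /var/lib/postgresql/8.3 && tar zcf main.tar.gz main

# for every other node (here: postgres-2)
ssh postgres-2 /etc/init.d/postgresql-8.3 stop
scp main.tar.gz postgres-2:/var/lib/postgresql/8.3/
ssh postgres-2 'cd /var/lib/postgresql/8.3 && tar zxf main.tar.gz'

# start postgresql everywhere again
/etc/init.d/postgresql-8.3 start
ssh postgres-2 /etc/init.d/postgresql-8.3 start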

11. install the pgpool2 admin interface (web based)

Fetch the pgpoolAdmin sources from http://pgfoundry.org/projects/pgpool/

apt-get install apache2 libapache2-mod-php5 php5-pgsql

Copy the content to /var/www/pgpooladmin and open a browser at http://<nodename>/pgpooladmin/install/

Follow the setup description (name the proper paths to files, set permissions).

12. prepare the system for pgpool in combination with PITR on-line recovery

The pgpool on-line recovery makes use of PITR (point in time recovery):
postgresql writes all transactions to the archive log (/var/lib/postgresql-archive) and remembers the last completed transaction.

To make use of pgpool on-line recovery, the template1 database needs the pgpool-recovery C language function installed:

cd /root/pgpool2-2.1/sql/pgpool-recovery
make install
su - postgres
cd /root/pgpool2-2.1/sql/pgpool-recovery
psql -f pgpool-recovery.sql template1


13. recovery

Never run the recovery from the host that you want to recover!

Log in to a functional node and run the following commands:

pcp_detach_node 10 localhost 9898 <username> <password> <number of node to recover>
pcp_recovery_node 10 localhost 9898 <username> <password> <number of node to recover>

Afterwards log in to the web admin interface on all nodes and enable the recovered system,
or log in to all other nodes and run:

pcp_attach_node 10 localhost 9898 <username> <password> <number of node to attach>

Sunday, August 17, 2008

nagios redundant master servers setup

It may be necessary to have a fail-over nagios master in case of a primary system hardware failure.
The following describes a redundant nagios master setup using heartbeat and nagios-internal techniques.

Both nagios systems have to be set up identically in terms of monitored items.
Nagios runs as a process on both cluster nodes.

After finishing setup the following is needed:

- both systems need to have passive checks enabled
- both systems need nsca running
- both systems will need the obsess_over_services option set to 1
- both systems will need an ocsp_command configured (submit_check_result)

The command submit_check_result needs to be configured:

define command{
        command_name    submit_check_result
        command_line    /usr/lib/nagios/libexec/eventhandlers/submit_check_result $HOSTNAME$ '$SERVICEDESC$' $SERVICESTATE$ '$SERVICEOUTPUT$'
        }

The script itself, submit_check_result, needs to be put into place on the primary node.
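A sketch of such a script, along the lines of the example in the Nagios documentation; the peer hostname monitor2 and the send_nsca paths are assumptions:

#!/bin/sh
# submit_check_result <host> <service description> <state> <plugin output>
# translate the textual state into a nagios return code
case "$3" in
        OK)       return_code=0 ;;
        WARNING)  return_code=1 ;;
        CRITICAL) return_code=2 ;;
        *)        return_code=3 ;;
esac
# hand the result to the nsca daemon on the peer node
/usr/bin/printf "%s\t%s\t%s\t%s\n" "$1" "$2" "$return_code" "$4" \
        | /usr/sbin/send_nsca -H monitor2 -c /etc/send_nsca.cfg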

Now we would need an additional start-stop script:
 /etc/init.d/nagios_notification

#!/bin/bash

case "$1" in
        "start")
                echo "Starting notifications"
                /usr/lib/nagios/libexec/eventhandlers/enable_notifications
                echo "Done"
                exit 0
        ;;
        "stop")
                echo "Stopping notifications"
                /usr/lib/nagios/libexec/eventhandlers/disable_notifications
                echo "Done"
                exit 0
        ;;
        *)
                echo "Usage: $0 [start|stop]"
        ;;
esac



Now we can start with the heartbeat setup:

Put the following into /etc/ha.d/haresources on both cluster nodes:

<nodename> \
    IPaddr::<virt-ip>/<cidr>/<iface>/<bcast> \
    nagios_notification

Et voilà.
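For example, with a hypothetical virtual IP and monitor1 as the primary node:

monitor1 \
    IPaddr::192.168.1.10/24/eth0/192.168.1.255 \
    nagios_notification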

We now have a basic nagios cluster where node B is informed about all check results produced on node A.
In case of hardware failure node B will take over and have notifications enabled.

Best would be to also disable active checks on node B until fail-over.

I will add an update on this.



Saturday, August 16, 2008

example for munin-nagios integration

Let's assume that you have a USB temperature sensor connected to a machine in your server room.
Active checks would make little sense here, since most of the time the temperature will be OK.
But you want to get notified in case of cooling problems and overheating.




First you need a plugin for munin, here called serverroom.
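A minimal sketch of such a plugin; the sensor read-out command is a placeholder you have to replace with whatever reads your thermometer:

#!/bin/sh
# munin plugin "serverroom" - reports the server room temperature
case $1 in
    config)
        echo 'graph_title Serverroom temperature'
        echo 'graph_vlabel degrees Celsius'
        echo 'temp.label temperature'
        echo 'temp.warning 30'
        echo 'temp.critical 35'
        exit 0
        ;;
esac
# placeholder: replace with the read-out command of your sensor
echo "temp.value $(read-usb-thermometer)"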



Verify that this plugin is working before doing anything else!
Also adapt the warn and crit values to your needs.

Now you need nsca installed on your munin master node.

The next step is a proper munin master configuration for the system that has the USB thermometer connected. We assume that the system has the name intranet, that you have configured munin to make use of domains, and that intranet is located in the domain intern.

contact.nagios.command /usr/sbin/send_nsca -H <nagios server IP> -c /etc/send_nsca.cfg
[intranet.intern]
        notify_alias intranet
        address <your systems IP>
        use_node_name yes

If you omit the "notify_alias" line, all alarms will be sent with the given system name plus the domain appended (intranet.intern).
With the notify_alias you can make sure that nagios receives the alarm for the proper system.

Now the nagios system needs to be configured.
Make sure you have nsca installed and running.
Then enable passive checks in nagios.cfg and create a service for the temperature alarm:

define service{
        use                             passive-service
        hostgroup_name                  temperature-servers
        service_description             Serverroom
        check_command                   return-ok
        }
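The passive-service template and the return-ok command are assumed to exist already; if not, definitions along these lines will do (check_dummy ships with the nagios plugins):

define service{
        name                            passive-service
        use                             generic-service
        active_checks_enabled           0
        passive_checks_enabled          1
        check_freshness                 0
        register                        0
        }

define command{
        command_name    return-ok
        command_line    /usr/lib/nagios/plugins/check_dummy 0
        }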



Friday, August 1, 2008

combining nagios and munin

munin offers a nice way to collect and display system information, so it can easily be used as a monitoring system. Munin uses simple plugins (either shell or perl code) to gather data from systems.
nagios is well known as an alarming system.

munin lets you define warn and critical values. Used on its own, munin highlights the links of items that are beyond their warn and critical values.

Additionally, munin can feed nagios passive checks via nsca.



nsca is part of the nagios project.

First one needs to configure the munin master to make use of the send_nsca command.
Second one has to configure the nagios master to also run the nsca daemon (either via inetd or as a standalone daemon).
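On the munin master this boils down to a single line in munin.conf; the nagios server IP is an example:

contact.nagios.command /usr/sbin/send_nsca -H 192.168.1.20 -c /etc/send_nsca.cfg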

Since the nsca documentation is rather thin, the munin developers wrote documentation on how to combine munin and nagios.

The advantage of this setup is that you get informed about state changes immediately.

Tuesday, July 29, 2008

Nagios - different alarm schemes for different systems

Most companies have live and development systems.
Problems on live systems should be made known to the sysadmin immediately.
Problems on development systems should not cause an email to be sent immediately.

Nagios offers possibilities to have different alarming schemes for different hosts.

1. set up a new contact group
2. set up a new generic service
3. set up two hostgroups (one for live servers, one for development systems)
4. set up the services to monitor for each hostgroup, making use of the service templates defined earlier


define contacts:

define contact{
        contact_name                    live-root
        alias                           Root live
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        email                           live-root@localhost
        }

define contact{
        contact_name                    devel-root
        alias                           Root devel
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        email                           devel-root@localhost
        }

define contactgroup

define contactgroup{
        contactgroup_name       live-admins
        alias                   Nagios live Administrators
        members                 live-root
        }

define contactgroup{
        contactgroup_name       devel-admins
        alias                   Nagios devel Administrators
        members                 devel-root
        }


define new generic service

define service{
        name                            live-service
        active_checks_enabled           1
        passive_checks_enabled          1
        parallelize_check               1
        obsess_over_service             1
        check_freshness                 0
        notifications_enabled           1
        event_handler_enabled           1
        flap_detection_enabled          1
        failure_prediction_enabled      1
        process_perf_data               1
        retain_status_information       1
        retain_nonstatus_information    1
        notification_interval           0
        is_volatile                     0
        check_period                    24x7
        normal_check_interval           5
        retry_check_interval            1
        max_check_attempts              4
        notification_period             24x7
        notification_options            w,u,c,r
        contact_groups                  live-admins
        register                        0
        }

define service{
        name                            devel-service
        active_checks_enabled           1
        passive_checks_enabled          1
        parallelize_check               1
        obsess_over_service             1
        check_freshness                 0
        notifications_enabled           1
        event_handler_enabled           1
        flap_detection_enabled          1
        failure_prediction_enabled      1
        process_perf_data               1
        retain_status_information       1
        retain_nonstatus_information    1
        notification_interval           0
        is_volatile                     0
        check_period                    24x7
        normal_check_interval           5
        retry_check_interval            1
        max_check_attempts              4
        notification_period             workhours # defined in timeperiods
        notification_options            w,u,c,r
        contact_groups                  devel-admins
        register                        0
        }

define hostgroups

define hostgroup {
        hostgroup_name  live-servers
        alias           Live Systems
        members         live-server-1
        }

define hostgroup {
        hostgroup_name  devel-servers
        alias           Devel Systems
        members         devel-server-1
        }

define services

define service {
        hostgroup_name                  live-servers
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        use                             live-service
        notification_interval           0 ; set > 0 if you want to be renotified
}

define service {
        hostgroup_name                  devel-servers
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        use                             devel-service
        notification_interval           0 ; set > 0 if you want to be renotified
}



puppet and key management

If you need to set up a system that has already been managed by puppet, some additional steps are needed.
First you need to remove the old key from the puppetmaster:

puppetca --clean <hostname>

Then you may set up the old system from scratch.

After puppet starts up, use the puppetca command on the puppet master to list and sign the new key.
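That is, on the puppetmaster:

puppetca --list            # shows outstanding certificate requests
puppetca --sign <hostname> # signs the new host key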


In case puppetca --list does not show the new host key, run the following steps:

1. remove ssl-keys from puppet client
2. start puppet on client
3. run puppetca --list on puppetmaster.


Saturday, July 26, 2008

puppet automated system configuration

We have been using puppet for automated system configuration for some time.
The developers call puppet the successor of cfengine.
Puppet is written in ruby and supports the following platforms:
- Linux
- OS X
- BSD
- Solaris

I will add some notes on puppet - especially about items that took us some time to figure out - in the near future.


Wednesday, May 7, 2008

OpenLDAP replication with OS X Server as Master

At work we have two OS X servers doing authentication for OS X users.

Now we thought about having the same credentials on our Debian GNU/Linux based intranet systems.


Since Apple runs an OpenLDAP-compatible solution (OpenDirectory), we wanted to have local OpenLDAP replicas on the intranet systems for authentication and as an address book.


First we learned that Apple made some incompatible changes to their schema, which needed to be fixed by another schema (apple_fix) that has to be loaded before the apple schema in slapd.conf.
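In slapd.conf that ordering looks like this; the schema file names and path are assumptions:

# the fix schema must be included before Apple's own schema
include /etc/ldap/schema/apple_fix.schema
include /etc/ldap/schema/apple.schema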


Additionally, some changes to the slapd.conf file were necessary.


List of files you will need:



Wednesday, February 27, 2008

Xen: 4gb fixup message and rpmstrap CentOS4

We are running some Xen servers (both based upon Debian GNU/Linux 4.0).

We run CentOS 4 and Debian GNU/Linux 4.0 as guest systems.

We initialize each guest system using rpmstrap for CentOS and debootstrap for Debian.

The rpmstrap package on Debian GNU/Linux 4.0 does not provide support for CentOS 4.
Therefore we adapted the provided centos3 script; the resulting centos4 suite script is included at the end of this post.

After initializing the guest systems, the console and logfiles get spammed with messages regarding the 4gb fixup.

Solution for the 4gb fixup messages:

Within the guest system remove /lib/tls, e.g. mv /lib/tls /lib/tls.disabled.

The adapted centos4 suite script:

# Copyright 2005 Progeny Linux Systems, Inc.
# Copyright 2007 iconmobile group
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# Authors: Sam Hart
# Jake Tabke
# Derrik Pates
# Juraj Bednar (x86_64 support)
# Martin Alfke (CentOS4)
suite_notes() {
cat <<EOF
CentOS 4 Suite Script
---------------------
Builds a basic CentOS 4 bootstrap.
Authors: Sam Hart, Jake Tabke, Derrik Pates, Juraj Bednar
EOF
}
work_out_mirror() {
local big_mirror_list=""
case $ARCH in
i[3456]86)
big_mirror_list=$(cat <<EOF
http://mirror.centos.org/centos/4/os/i386/CentOS/RPMS/
http://centos.cs.ucr.edu/centos/centos/4.1/os/i386/CentOS/RPMS/
http://ibiblio.org/pub/linux/distributions/caoslinux/centos/4/os/i386/CentOS/RPMS/
http://centos.absinet.net/centos/4.1/os/i386/CentOS/RPMS/
EOF
)
;;
x86_64)
big_mirror_list=$(cat <<EOF
http://mirror.centos.org/centos/4/os/x86_64/CentOS/RPMS/
http://centos.cs.ucr.edu/centos/centos/4.1/os/x86_64/CentOS/RPMS/
http://ibiblio.org/pub/linux/distributions/caoslinux/centos/4/os/x86_64/CentOS/RPMS/
http://centos.absinet.net/centos/4.1/os/x86_64/CentOS/RPMS/
EOF
)
;;
*)
die "Arch $ARCH is unsupported"
;;
esac
set_mirrors $big_mirror_list
}
work_out_rpms() {
case $ARCH in
i[3456]86)
RPMS=$(cat <<EOF
0:setup-2.5.37-1.3.noarch.rpm
1:filesystem-2.3.0-1.i386.rpm
2:basesystem-8.0-4.noarch.rpm
3:tzdata-2006g-1.EL4.noarch.rpm
4:glibc-common-2.3.4-2.25.i386.rpm
5:libgcc-3.4.6-3.i386.rpm
6:glibc-2.3.4-2.25.i686.rpm
7:mktemp-1.5-20.i386.rpm
8:termcap-5.4-3.noarch.rpm
9:libtermcap-2.0.8-39.i386.rpm
10:bash-3.0-19.3.i386.rpm
11:ncurses-5.4-13.i386.rpm
12:zlib-1.2.1.2-1.2.i386.rpm
13:info-4.7-5.i386.rpm
14:libselinux-1.19.1-7.2.i386.rpm
15:findutils-4.1.20-7.el4.1.i386.rpm
15:pcre-4.5-3.2.RHEL4.i386.rpm
16:grep-2.5.1-32.2.i386.rpm
16:words-3.0-3.noarch.rpm
17:libattr-2.4.16-3.i386.rpm
18:libacl-2.2.23-5.i386.rpm
19:cracklib-dicts-2.7-29.i386.rpm
19:cracklib-2.7-29.i386.rpm
20:libstdc++-3.4.6-3.i386.rpm
21:db4-4.2.52-7.1.i386.rpm
22:glib-1.2.10-15.i386.rpm
23:glib2-2.4.7-1.i386.rpm
24:sed-4.1.2-5.EL4.i386.rpm
25:gawk-3.1.3-10.1.i386.rpm
26:centos-release-4-4.2.i386.rpm
27:psmisc-21.4-4.1.i386.rpm
28:iproute-2.6.9-3.EL4.3.i386.rpm
29:iputils-20020927-18.EL4.3.i386.rpm
30:chkconfig-1.3.13.4-1.i386.rpm
31:e2fsprogs-1.35-12.4.EL4.i386.rpm
32:ethtool-1.8-4.i386.rpm
33:mingetty-1.07-3.i386.rpm
34:net-tools-1.60-37.EL4.8.i386.rpm
35:popt-1.9.1-18_nonptl.i386.rpm
35:readline-4.3-13.i386.rpm
36:audit-libs-1.0.14-1.EL4.i386.rpm
36:audit-1.0.14-1.EL4.i386.rpm
36:mkinitrd-4.2.1.8-1.i386.rpm
36:kernel-2.6.9-42.EL.i686.rpm
36:hotplug-2004_04_01-7.7.i386.rpm
36:libsepol-1.1.1-2.i386.rpm
36:device-mapper-1.02.07-4.0.RHEL4.i386.rpm
36:hwdata-0.146.22.EL-1.noarch.rpm
36:tar-1.14-10.RHEL4.i386.rpm
36:cpio-2.5-9.RHEL4.i386.rpm
36:gzip-1.3.3-15.rhel4.i386.rpm
36:usbutils-0.11-6.1.i386.rpm
36:lvm2-2.02.06-6.0.RHEL4.i386.rpm
36:less-382-4.i386.rpm
36:MAKEDEV-3.15.2-3.i386.rpm
36:pam-0.77-66.17.i386.rpm
36:initscripts-7.93.25.EL-1.centos4.i386.rpm
36:coreutils-5.2.1-31.4.i386.rpm
36:SysVinit-2.85-34.3.i386.rpm
36:shadow-utils-4.0.3-60.RHEL4.i386.rpm
36:udev-039-10.15.EL4.i386.rpm
36:util-linux-2.12a-16.EL4.20.i386.rpm
36:sysklogd-1.4.1-26_EL.i386.rpm
36:which-2.16-4.i386.rpm
36:module-init-tools-3.1-0.pre5.3.2.i386.rpm
36:procps-3.2.3-8.4.i386.rpm
37:beecrypt-3.1.0-6.i386.rpm
38:bzip2-libs-1.0.2-13.EL4.3.i386.rpm
39:bzip2-1.0.2-13.EL4.3.i386.rpm
40:elfutils-libelf-0.97.1-3.i386.rpm
40:binutils-2.15.92.0.2-21.i386.rpm
41:elfutils-0.97.1-3.i386.rpm
42:gdbm-1.8.0-24.i386.rpm
43:gmp-4.1.4-3.i386.rpm
44:krb5-libs-1.3.4-33.i386.rpm
45:openssl-0.9.7a-43.10.i686.rpm
46:libxml2-2.6.16-6.i386.rpm
47:python-2.3.4-14.2.i386.rpm
48:libxml2-python-2.6.16-6.i386.rpm
48:file-4.10-2.EL4.4.i386.rpm
48:perl-5.8.5-36.RHEL4.i386.rpm
48:perl-Filter-1.30-6.i386.rpm
48:patch-2.5.4-20.i386.rpm
49:rpmdb-CentOS-4.4-0.20060823.i386.rpm
49:rpm-build-4.3.3-18_nonptl.i386.rpm
49:rpm-libs-4.3.3-18_nonptl.i386.rpm
49:rpm-4.3.3-18_nonptl.i386.rpm
50:rpm-python-4.3.3-18_nonptl.i386.rpm
51:wget-1.10.2-0.40E.i386.rpm
52:python-elementtree-1.2.6-4.2.1.i386.rpm
52:python-sqlite-1.1.7-1.2.i386.rpm
52:python-urlgrabber-2.9.8-2.noarch.rpm
52:expat-1.95.7-4.i386.rpm
52:sqlite-3.3.3-1.2.i386.rpm
52:yum-2.4.3-1.c4.noarch.rpm
53:nano-1.2.4-1.i386.rpm
54:openldap-2.2.13-6.4E.i386.rpm
54:cyrus-sasl-2.1.19-5.EL4.i386.rpm
54:cyrus-sasl-md5-2.1.19-5.EL4.i386.rpm
55:libuser-0.52.5-1.el4.1.i386.rpm
56:passwd-0.68-10.1.i386.rpm
EOF
)
;;
x86_64)
RPMS=$(cat <<EOF
0:setup-2.5.37-1.3.noarch.rpm
1:filesystem-2.3.0-1.x86_64.rpm
2:basesystem-8.0-4.noarch.rpm
3:tzdata-2006a-1.EL4.noarch.rpm
4:glibc-common-2.3.4-2.19.x86_64.rpm
5:libgcc-3.4.5-2.x86_64.rpm
6:glibc-2.3.4-2.19.x86_64.rpm
7:mktemp-1.5-20.x86_64.rpm
8:termcap-5.4-3.noarch.rpm
9:libtermcap-2.0.8-39.x86_64.rpm
10:bash-3.0-19.2.x86_64.rpm
11:ncurses-5.4-13.x86_64.rpm
12:zlib-1.2.1.2-1.2.x86_64.rpm
13:info-4.7-5.x86_64.rpm
14:libselinux-1.19.1-7.x86_64.rpm
15:findutils-4.1.20-7.x86_64.rpm
15:pcre-4.5-3.2.RHEL4.x86_64.rpm
16:grep-2.5.1-31.x86_64.rpm
16:words-3.0-3.noarch.rpm
17:libattr-2.4.16-3.x86_64.rpm
18:libacl-2.2.23-5.x86_64.rpm
19:cracklib-dicts-2.7-29.x86_64.rpm
19:cracklib-2.7-29.x86_64.rpm
20:libstdc++-3.4.5-2.x86_64.rpm
21:db4-4.2.52-7.1.x86_64.rpm
22:glib-1.2.10-15.x86_64.rpm
23:glib2-2.4.7-1.x86_64.rpm
24:sed-4.1.2-4.x86_64.rpm
25:gawk-3.1.3-10.1.x86_64.rpm
26:centos-release-4-3.2.x86_64.rpm
27:psmisc-21.4-4.x86_64.rpm
28:iproute-2.6.9-3.x86_64.rpm
29:iputils-20020927-18.EL4.2.x86_64.rpm
30:chkconfig-1.3.13.3-2.x86_64.rpm
31:e2fsprogs-1.35-12.3.EL4.x86_64.rpm
32:ethtool-1.8-4.x86_64.rpm
33:mingetty-1.07-3.x86_64.rpm
34:net-tools-1.60-37.EL4.6.x86_64.rpm
35:popt-1.9.1-13_nonptl.x86_64.rpm
35:readline-4.3-13.x86_64.rpm
36:audit-libs-1.0.12-1.EL4.x86_64.rpm
36:audit-1.0.12-1.EL4.x86_64.rpm
36:mkinitrd-4.2.1.6-1.x86_64.rpm
36:kernel-2.6.9-34.EL.x86_64.rpm
36:hotplug-2004_04_01-7.6.x86_64.rpm
36:libsepol-1.1.1-2.x86_64.rpm
36:device-mapper-1.02.02-3.0.RHEL4.x86_64.rpm
36:hwdata-0.146.18.EL-1.noarch.rpm
36:tar-1.14-9.RHEL4.x86_64.rpm
36:cpio-2.5-8.RHEL4.x86_64.rpm
36:gzip-1.3.3-15.rhel4.x86_64.rpm
36:usbutils-0.11-6.1.x86_64.rpm
36:lvm2-2.02.01-1.3.RHEL4.x86_64.rpm
36:less-382-4.x86_64.rpm
36:MAKEDEV-3.15.2-3.x86_64.rpm
36:pam-0.77-66.14.x86_64.rpm
36:initscripts-7.93.24.EL-1.1.centos4.x86_64.rpm
36:coreutils-5.2.1-31.2.x86_64.rpm
36:SysVinit-2.85-34.3.x86_64.rpm
36:shadow-utils-4.0.3-60.RHEL4.x86_64.rpm
36:udev-039-10.12.EL4.x86_64.rpm
36:util-linux-2.12a-16.EL4.16.x86_64.rpm
36:sysklogd-1.4.1-26_EL.x86_64.rpm
36:which-2.16-4.x86_64.rpm
36:module-init-tools-3.1-0.pre5.3.2.x86_64.rpm
36:procps-3.2.3-8.3.x86_64.rpm
37:beecrypt-3.1.0-6.x86_64.rpm
38:bzip2-libs-1.0.2-13.EL4.3.x86_64.rpm
39:bzip2-1.0.2-13.EL4.3.x86_64.rpm
40:elfutils-libelf-0.97-5.x86_64.rpm
40:binutils-2.15.92.0.2-18.x86_64.rpm
41:elfutils-0.97-5.x86_64.rpm
42:gdbm-1.8.0-24.x86_64.rpm
43:gmp-4.1.4-3.x86_64.rpm
44:krb5-libs-1.3.4-27.x86_64.rpm
45:openssl-0.9.7a-43.8.x86_64.rpm
46:libxml2-2.6.16-6.x86_64.rpm
47:python-2.3.4-14.1.x86_64.rpm
48:libxml2-python-2.6.16-6.x86_64.rpm
48:file-4.10-2.EL4.3.x86_64.rpm
48:perl-5.8.5-24.RHEL4.x86_64.rpm
48:perl-Filter-1.30-6.x86_64.rpm
48:patch-2.5.4-20.x86_64.rpm
49:rpmdb-CentOS-4.3-0.20060315.x86_64.rpm
49:rpm-build-4.3.3-13_nonptl.x86_64.rpm
49:rpm-libs-4.3.3-13_nonptl.x86_64.rpm
49:rpm-4.3.3-13_nonptl.x86_64.rpm
50:rpm-python-4.3.3-13_nonptl.x86_64.rpm
51:wget-1.10.2-0.40E.x86_64.rpm
52:centos-yumconf-4-4.5.noarch.rpm
52:yum-2.4.2-2.centos4.noarch.rpm
53:nano-1.2.4-1.x86_64.rpm
54:openldap-2.2.13-4.x86_64.rpm
54:cyrus-sasl-2.1.19-5.EL4.x86_64.rpm
54:cyrus-sasl-md5-2.1.19-5.EL4.x86_64.rpm
55:libuser-0.52.5-1.el4.1.x86_64.rpm
56:passwd-0.68-10.1.x86_64.rpm
EOF
)
;;
*)
# No clue
;;
esac
}
print_rpms() {
local rpm_list=$(echo "$RPMS" | sed "s/[[:digit:]]\+://")
echo "RPMs for suite $RPMSUITE and arch $ARCH"
for a in $rpm_list
do
echo " : $a"
done
}
install_rpms() {
install_by_pass $RPMS
}
suite_details() {
for a in $RPMS
do
echo $a
done
}