Archives for the ‘trick’ category

I have never liked having to install agents for different tasks like backups or monitoring; I think SSH is always enough. In this post I will introduce some concepts that I am using as an alternative to NRPE for Nagios.

Some time ago I explained how to set up SSH to monitor remote servers in Nagios, using the ControlMaster feature to reuse the connection.

In that post I was using runit to keep the connections alive.

But OpenSSH 5.6 introduced a new feature:

* Added a ControlPersist option to ssh_config(5) that automatically
starts a background ssh(1) multiplex master when connecting. This
connection can stay alive indefinitely, or can be set to
automatically close after a user-specified duration of inactivity.

And this is COOL! We can just pass a few options to the check_by_ssh plugin to automatically create the session. The options are:

  • -i /etc/nagios/nagiosssh.id_rsa: Private SSH key generated with ssh-keygen.
  • -o ControlMaster=auto: Create the control master socket automatically.
  • -o ControlPersist=yes: Enable ControlPersist. It will spawn an ssh process in the background that keeps the connection open (it can be stopped with -O exit).
  • -o ControlPath=/var/run/nagiosssh/$HOSTNAME$: Path to the control socket. We can create a directory in /var/run/nagiosssh (see the snippet right after this list).
  • -l nagiosssh -H $HOSTNAME$: User and host we are connecting to.
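
The ControlPath directory lives on the Nagios server and must be writable by the user running the checks. A minimal way to create it (assuming the daemon runs as the usual nagios user; adjust user, group and path to your setup):

# /var/run is usually volatile, so run this at boot (e.g. from an init script)
install -d -o nagios -g nagios -m 0750 /var/run/nagiosssh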

So, the command definition can be:


define command{
    command_name    check_users_ssh
    command_line    $USER1$/check_by_ssh \
        -o ControlMaster=auto \
        -o ControlPath=/var/run/nagiosssh/$HOSTNAME$ \
        -o ControlPersist=yes \
        -i $USER6$ -H $HOSTADDRESS$ -l $USER5$ \
        'check_users -w $ARG1$ -c $ARG2$'
}

Note: You have to define the USER variables in resources.cfg.
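
As a usage example, a service that calls this command could look like the following (the host name, the thresholds and the generic-service template are just placeholders from a standard Nagios setup):

define service{
    use                     generic-service
    host_name               remotehost
    service_description     Current Users
    check_command           check_users_ssh!5!10
}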

Then we only need to create the proper user on the remote host. To improve security, you can:

  • Use bash in restricted mode:
    1. Create the user ‘nagiosssh’ with shell=/home/nagiosssh/rbash
    2. Create a script /home/nagiosssh/rbash:
      #!/bin/sh
      # Restricted shell for the client.
      # Sets the PATH to the checks directory and drops into restricted bash
      PATH=/home/nagiosssh/checks exec /bin/bash --restricted "$@"
    3. Create the directory /home/nagiosssh/checks and link all the desired checks there.
  • Restrict the SSH connection by setting options in .ssh/authorized_keys. For example:

    no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding,from="10.10.10.10" ssh-rsa AAAAB3NzaC1...

Maybe in a few days I will upload a Chef recipe to set this up.

This is one of the proposed solutions for the job assessment commented on in a previous post.

Provide a script manipulating the routing table to send all outgoing traffic originating from ipaddress: 85.14.228.248 through gw 85.14.228.254 and all other traffic through gateway 10.12.0.254

One basically has to execute these commands:

# Default route
ip route del default table 254
ip route add default via 10.12.0.254 dev eth0 table 254

# Alternative route and its rule
ip route del default table 1
ip route add default via 85.14.228.254 dev eth1 table 1
ip rule del from 85.14.228.248
ip rule add from 85.14.228.248 table 1

I delete the previous default route and rule first, to make sure the commands succeed and the configuration really gets updated.
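
To double-check that the policy routing is in place, something like this should be enough (the source address has to be configured locally for ip route get to accept it):

ip rule show
ip route show table 1
# Ask the kernel which route it would use for traffic sourced from 85.14.228.248
ip route get 8.8.8.8 from 85.14.228.248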

A more convenient script could be:

#!/bin/bash

# Default route
DEFAULT_ROUTE=10.12.0.254
DEFAULT_DEV=eth0

# Create the different routes, where
#  NAME[id]  = name of routing table (for documentation purposes)
#  ROUTE[id] = destination
#  SRCS[id]  = list of routed ips
#  DEV[id]     = network device
#  id          = number of routing table (1..253)
#
NAME[1]=uplink1
ROUTE[1]="85.14.228.254"
DEV[1]=eth1
SRCS[1]="85.14.228.248"

#-----------------------------------------
# Set the "main" table
NAME[254]=main
ROUTE[254]=$DEFAULT_ROUTE
DEV[254]=$DEFAULT_DEV

# debug
#ip() { echo "> ip $*"; command ip $*; }

for i in {255..1}; do
    [ ! -z "${ROUTE[$i]}" ] || continue

    # Delete the default route if it exists
    ip route list table $i | grep -q default && \
        echo "Deleting default entry for route table ${NAME[$i]}($i)..." && \
        ip route del default table $i

    # Create the new table default route
    echo "Creating route table '${NAME[$i]}($i)' with default via gw ${ROUTE[$i]}"
    ip route add default via "${ROUTE[$i]}" dev ${DEV[$i]} table $i || continue

    # Create the source rules
    for ip in ${SRCS[$i]}; do
        # Delete the rule if it exists
        ip rule list | grep -q "from $ip" && \
            echo " - deleting rule from $ip..." && \
            ip rule del from $ip

        # Add the source rule
        echo " + adding rule from $ip..."
        ip rule add from $ip table $i
    done
done

Based on this post, http://linux-tips.org/article/78/syntax-highlighting-in-less, here is my quick tip to enable syntax highlighting in less:

cat <<EOF >> ~/.bash_profile

# Syntax Highlight for less
#
# Check if source-highlight is installed: http://www.gnu.org/software/src-highlite/
# Set SRC_HILITE_LESSPIPE for custom location
# 
# To install: 
#   sudo yum install source-highlight
#
SRC_HILITE_LESSPIPE=${SRC_HILITE_LESSPIPE:-$(which src-hilite-lesspipe.sh 2> /dev/null)}
if [ -x "$SRC_HILITE_LESSPIPE" ]; then
	export LESSOPEN="| $SRC_HILITE_LESSPIPE  %s"
	export LESS="${LESS/ -R/}  -R" # Set LESS option in raw mode (avoid repeating it)
fi
EOF
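
Once the profile is sourced, opening a file with a recognized extension should show it highlighted, for example (some_script.sh is just any shell, C, Python... file you have around):

. ~/.bash_profile
less some_script.sh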

For the last few years I have had the same problem: I was running Windows as a desktop while managing Linux/Unix systems. Of course, to minimize the pain, I use Cygwin and/or coLinux, which make my life easier.

Often I need to open files remotely, but it is so tedious to find them in the Samba share… and then I found this tool: DoIt (http://www.chiark.greenend.org.uk/~sgtatham/doit/), from Simon Tatham, the PuTTY author.

It allows you to execute commands on your (Windows) box from the remote server, automatically translating the paths (in case you are using Samba).
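
If I recall the client commands correctly (double-check the DoIt documentation), once everything is set up you can run things like this from the Unix shell (report.pdf is just an example file):

win report.pdf       # open the file on the Windows side, as if double-clicked
wf .                 # open a Windows Explorer window on the current directory
wpath ./report.pdf   # print the translated Windows path of a file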

Fast installation

  1. Client on Unix side (1):
  1. Download and compile:
    curl http://www.chiark.greenend.org.uk/~sgtatham/doit/doit.tar.gz | tar -xvzf -
    cd doit
    cc -o doitclient doitclient.c doitlib.c -lsocket -lnsl -lresolv
    
  2. Install. I use stow for my ad hoc binaries:
    ##  Preset variables.
    LOCAL_BINARIES=~/local
    PLATFORM="$(uname -s)-$(uname -p)"
    PATH=$PATH:$LOCAL_BINARIES/$PLATFORM/bin
    
    STOW_HOME=$LOCAL_BINARIES/$PLATFORM/stow
    
    mkdir -p $STOW_HOME/doit/bin
    cp doitclient $STOW_HOME/doit/bin
    for i in wf win winwait wcmd wclip www wpath; do
     ln -s doitclient $STOW_HOME/doit/bin/$i
    done
    
    cd $STOW_HOME
    stow doit
    
  3. Shared secret setup and configuration:
    dd if=/dev/random of=$HOME/.doit-secret bs=64 count=1
    chmod 640 $HOME/.doit-secret 
    echo "secret $HOME/.doit-secret" &gt; $HOME/.doitrc
    

    Then set the mappings as described in the documentation. For instance:

    host
      map /home/ \\sambaserver\
  4. If you are using su (or sudo resetting the environment), you will lose the SSH_CLIENT variable. But you can set the $DOIT_HOST variable. You can use this:
    cat <<"EOF" >> ~/.bashrc
    # DOIT_HOST variable, for the DoIt tool (Integration with windows desktops)
    export DOIT_HOST=$(who -m | sed 's/.*(\(.*\)).*/\1/')
    EOF
    
  2. Set up the client on the Windows box. You can copy the .doit-secret or use Samba to access your home directory.
    Just create a shortcut to “doit.exe secret.file”, for instance:

    \\sambaserver\keymon\local\Linux-x86\stow\doit\doit.exe \\sambaserver\keymon\.doit-secret

Conclusions

It is really cool, and it really works.

My only concern is the secret key, which has to be shared. One solution could be using environment variables or even the PuTTY ‘Answerback to ^E’ option (http://tartarus.org/~simon/putty-snapshots/htmldoc/Chapter4.html#config-answerback), but I am not sure how to implement it.

(1) On Solaris, compiling with GCC, I got this error:

/var/tmp//cc5ZYGYW.o: In function `main':
doitclient.c:(.text+0x29e8): undefined reference to `hstrerror'
collect2: ld returned 1 exit status

This is solved by adding -lsocket.

If you are behind a proxy that allows HTTPS connections, you can use GitHub via SSH without problems. To do so, you can use the great tool connect.c (http://bent.latency.net/bent/git/goto-san-connect-1.85/src/connect.html). As described on its homepage, this program relays a connection through a proxy, allowing SSH to reach servers from behind it.

You can configure connect as the ProxyCommand for the ssh.github.com and github.com hosts in ~/.ssh/config, and set the Port to 443 as well.

Basically the process will be:

export PROXY=proxy:80

http_proxy=http://$PROXY wget http://www.taiyo.co.jp/~gotoh/ssh/connect.c -O /tmp/connect.c
gcc /tmp/connect.c -o ~/bin/connect 

cat >> ~/.ssh/config  <<EOF

Host ssh.github.com github.com
  Port 443
  HostName ssh.github.com
  IdentityFile $HOME/.ssh/id_rsa
  ProxyCommand $HOME/bin/connect -H proxy:80 %h %p

EOF

And ready!!
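
You can first check that the tunnel works; GitHub should greet you with something like “Hi keymon! You've successfully authenticated…” and close the connection:

ssh -T git@ssh.github.com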

git clone git@github.com:keymon/facter.git facter

Easy, isn’t it?

Check the connect.c documentation if you need to authenticate against the proxy.

When you create a “Virtual Target Disk” or VTD on a VIOS, there is no documented way to define or change the LUN number that it shows to the client partition. But there are situations where you might need to update it:
  1. In a dual VIOS environment, to have the same LUNs in both clients (easier to administer).
  2. In a redundant configuration, when you need to start LPARs on different hardware using SAN disks. For instance, we use this configuration for our backup datacenter, where we have all the SAN disks mirrored.

In this post I describe how to update this LUN. The idea is basically:

  • Set the VTD device to Defined in the VIOS
  • Update the ODM database. You have to update the attribute ‘LogicalUnitAddr’ in ObjectClass ‘CuAt’
  • Perform a ‘cfgmgr’ on the virtual host adapter (vhostX). This will enable the VTD device and reload the LUN number. Performing a cfgmgr on the VTD device alone does not work.

So, with commands:

$ oem_setup_env
# bash

# lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost21         U9117.MMA.XXXXXXX-V2-C34                     0x00000016

VTD                   host01v01
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower0
Physloc               U789D.001.BBBBBBB-P1-C3-T2-L75

# ioscli mkvdev -vadapter vhost21 -dev host01v99 -vdev hdiskpower1
# cfgmgr -l vhost21

# lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost21         U9117.MMA.XXXXXXX-V2-C34                     0x00000016

VTD                   host01v01
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower0
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L75

VTD                   host01v99
Status                Available
LUN                   0x8300000000000000
Backing device        hdiskpower1
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L77

# rmdev -l host01v99
host01v99 Defined

# odmget -q "name=host01v99 and attribute=LogicalUnitAddr"  CuAt
CuAt:
  name = "host01v99"
  attribute = "LogicalUnitAddr"
  value = "0x8300000000000000"
  type = "R"
  generic = "D"
  rep = "n"
  nls_index = 6

# odmchange -o CuAt -q "name = host01v99 and attribute = LogicalUnitAddr" <<"EOF"
CuAt:
  name = "host01v99"
  attribute = "LogicalUnitAddr"
  value = "0x8100000000000000"
  type = "R"
  generic = "D"
  rep = "n"
  nls_index = 6
EOF

# odmget -q "name=host01v99 and attribute=LogicalUnitAddr"  CuAt
CuAt:
  name = "host01v99"
  attribute = "LogicalUnitAddr"
  value = "0x8100000000000000"
  type = "R"
  generic = "D"
  rep = "n"
  nls_index = 6

# cfgmgr -l vhost21
# lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost21         U9117.MMA.XXXXXXX-V2-C34                     0x00000016

VTD                   host01v01
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower0
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L75

VTD                   host01v99
Status                Available
LUN                   0x8100000000000000
Backing device        hdiskpower1
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L77

In the client partition, you can scan for the new disk, and it will have the LUN 0x81:

root@host01:~/# cfgmgr -l vio0
root@host01:~/# lscfg -vl hdisk5
  hdisk5           U9117.MMA.XXXXXXX-V22-C3-T1-L8100000000000000  Virtual SCSI Disk Drive

Note: I have edited the output of these commands to remove my company's information.

Update: I created a script to do this: change_vtd_lun.sh
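
A minimal sketch of the procedure as a script, based only on the commands shown above (the real change_vtd_lun.sh may differ):

#!/bin/sh
# Usage: change_vtd_lun.sh <vtd_device> <vhost_adapter> <new_lun>
# e.g.:  change_vtd_lun.sh host01v99 vhost21 0x8100000000000000
# Run as root on the VIOS (oem_setup_env).
VTD=$1; VHOST=$2; NEWLUN=$3

# 1. Set the VTD device to Defined
rmdev -l "$VTD" || exit 1

# 2. Update the LogicalUnitAddr attribute in the ODM (CuAt object class)
odmget -q "name=$VTD and attribute=LogicalUnitAddr" CuAt |
    sed "s/value = .*/value = \"$NEWLUN\"/" |
    odmchange -o CuAt -q "name=$VTD and attribute=LogicalUnitAddr"

# 3. Reconfigure the virtual host adapter to bring the VTD back Available
cfgmgr -l "$VHOST"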

How to define “hotkeys” in bash


For instance, I will define a hotkey to get the manual page of the current command without executing it (ideal for F1).

First, you get the code of the “hotkey” you want to use by pressing “Ctrl+V” and then the key. For example:

* Ctrl+L: ^L
* Ctrl+J: ^J
* F1: ^[OP

This code may vary from terminal to terminal.

Next, you define a function called single-man (e.g. in your ~/.bashrc) that runs man on its first argument:

single-man() { man $1; }

Then, you add a line like this one in your .inputrc:

 "^[OP" "\C-A\C-K single-man \C-Y\C-M\C-Y"

What the hell does this do? Well, when F1 is pressed, it simulates pressing “Ctrl+A”, which goes to the beginning of the line, and “Ctrl+K”, which cuts the current line into the kill ring; it then types “single-man ”, presses “Ctrl+Y” to paste the line back, “Ctrl+M” to press Enter, and “Ctrl+Y” once more to restore the original line on the new prompt.
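
Putting the two pieces together (remember that the escape sequence for F1 may differ between terminals):

# In ~/.bashrc
single-man() { man "$1"; }

# In ~/.inputrc
"\eOP": "\C-a\C-k single-man \C-y\C-m\C-y"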

I have been using this trick for several years.

I will describe my configuration of the puppetmaster server and the puppet dashboard server running inside Debian's Apache installation, but:

  • Using a custom user, not www-data or root. This keeps all puppet configuration and data under a user other than www-data and root.
  • Using a custom configuration directory, not the default Apache directory /etc/apache2.
  • This way you can isolate the puppet server from the rest of the Apache applications.

First, I recommend reading the official documentation.

It is assumed that you have a running puppet installation.

(more…)

The Puppet architecture needs a client that connects to the server to load the configuration using a pull scheme. But I do not like to have more and more daemons around, and some people suggest avoiding that, so I decided to execute puppet with the ‘--onetime’ option from cron.

Obviously, I want to configure this using puppet itself. And we must ensure that the clients are executed at different times, not all at the same minute.

I searched the net and found several approaches to do this. There are also feature requests.

I read somewhere that the new function fqdn_rand() could be used, as proposed in the feature request and posted in this mail from Brice Figureau. I could not find where the hell the snippet was; in the end, I found this pastie by Jhon Goebel.

I will post my version here just to keep it written down.

 $first  = fqdn_rand(30)
 $second = fqdn_rand(30) + 30
 cron { "cron.puppet.onetime":
     command => "/srv/scripts/puppet/puppet.ctl.sh onetime > /dev/null",
     user    => "root",
     minute  => [ $first, $second ],
     require => File["/srv/scripts/puppet/puppet.ctl.sh"],
 }
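
For reference, puppet.ctl.sh here is just a small wrapper around the agent; something along these lines (this is a guess at its contents, not the actual file):

#!/bin/bash
# /srv/scripts/puppet/puppet.ctl.sh - tiny wrapper around the puppet agent
case "$1" in
    onetime)
        # Single catalog run, no daemon left behind
        exec puppet agent --onetime --no-daemonize --logdest syslog
        ;;
    *)
        echo "Usage: $0 onetime" >&2
        exit 1
        ;;
esac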


A friend of mine told me he needed to enlarge the disk of an Internet-hosted server, and asked me what the best way to do it would be. Obviously, downtime has to be minimized.

There, the only thing the hosting company did was grow the assigned iSCSI disk by 50GB. The disk has 2 partitions, one for boot and one for data, and he wants to grow the data one (an ext3 filesystem) with minimal disruption. He is not using LVM.

The hosting company proposed rebooting into “administration” mode (with a network image), deleting and re-creating the partition, and then resizing the filesystem with resize2fs… But I do not recommend that because:

  • It is a very long service outage.
  • Every time he grows the disk he will have to reboot.
  • I do not trust resize2fs; it has failed me before.

I suggested he move to LVM + XFS. He can even do it without rebooting the server, online, with a minimal service stop (<1 min). In this post I describe the procedure using simple commands that are available in practically every distribution.

The process would be:

  1. Make a backup. Always.
  2. Rescan the SCSI buses and disks to detect the new disk size.
  3. Repartition to create a new partition in the new space. An extended partition is better, so it can be grown again later.
  4. Set up an LV with LVM on the new space (pvcreate, vgcreate, lvcreate).
  5. Mount it and clone the data with rsync.
  6. Stop the service, resync the last changes with rsync, swap the mount point and start the service again.

In the repartitioning step we hit a problem: since this is the boot disk and it is mounted, Linux will not reload the partition table when we exit fdisk. But apparently the partprobe command, which comes with parted, is able to register the new partitions even while the disk is in use :).

So the steps are simply:

  1. Ask the hosting company to grow the disk.
  2. Rescan the SCSI buses and disks with this simple script:
    cat > reescanea-scsi <<"EOF"
    #!/bin/bash
    
    for fn in /sys/class/scsi_host/*
    do
            host=$(basename $fn)
            echo "Scanning $host ... "
            if [ -d $fn ]; then
                    echo "- - -" > /sys/class/scsi_host/$host/scan
            else
                    echo "ERROR, device not found : '$fn'"
            fi
    done
    
    for disk in /sys/class/scsi_device/*/device/rescan; do
            echo "Rescanning device $disk ..."
            echo 1 > "$disk"
    done
    
    exit 0
    EOF
    chmod +x reescanea-scsi
    ./reescanea-scsi
    

    The output will look something like this. We can see that the size changes:

    # ./reescanea-scsi
    Scanning host0 ...
    Scanning host1 ...
    Scanning host2 ...
    Rescanning device /sys/class/scsi_device/0:0:0:0/device/rescan ...
    Rescanning device /sys/class/scsi_device/0:0:2:0/device/rescan ...
    Rescanning device /sys/class/scsi_device/0:0:3:0/device/rescan ...
    Rescanning device /sys/class/scsi_device/1:0:0:0/device/rescan ...
    # dmesg|grep sda 
    sd 0:0:0:0: [sda] 20971520 512-byte hardware sectors: (10.7GB/10.0GiB)
    sd 0:0:0:0: [sda] Test WP failed, assume Write Enabled
    sd 0:0:0:0: [sda] Cache data unavailable
    sd 0:0:0:0: [sda] Assuming drive cache: write through
    sd 0:0:0:0: [sda] 20971520 512-byte hardware sectors: (10.7GB/10.0GiB)
    sd 0:0:0:0: [sda] Test WP failed, assume Write Enabled
    sd 0:0:0:0: [sda] Cache data unavailable
    sd 0:0:0:0: [sda] Assuming drive cache: write through
     sda: sda1 sda2 sda3
    sd 0:0:0:0: [sda] Attached SCSI disk
    Adding 1052248k swap on /dev/sda2.  Priority:1 extents:1 across:1052248k
    EXT3 FS on sda1, internal journal
    sd 0:0:0:0: [sda] 23068672 512-byte hardware sectors: (11.8GB/11.0GiB)
    sd 0:0:0:0: [sda] Write Protect is off
    sd 0:0:0:0: [sda] Mode Sense: 03 00 00 00
    sd 0:0:0:0: [sda] Cache data unavailable
    sd 0:0:0:0: [sda] Assuming drive cache: write through
    sda: detected capacity change from 10737418240 to 11811160064
    
  3. Create the extended partition with fdisk (or a similar tool):
    # fdisk /dev/sda
    
    The number of cylinders for this disk is set to 1435.
    There is nothing wrong with that, but this is larger than 1024,
    and could in certain setups cause problems with:
    1) software that runs at boot time (e.g., old versions of LILO)
    2) booting and partitioning software from other OSs
       (e.g., DOS FDISK, OS/2 FDISK)
    
    Command (m for help): p
    
    Disk /dev/sda: 11.8 GB, 11811160064 bytes
    255 heads, 63 sectors/track, 1435 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x000a4c74
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14         144     1052257+  82  Linux swap / Solaris
    /dev/sda3             145        1305     9325732+  8e  Linux LVM
    
    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    e
    Selected partition 4
    First cylinder (1306-1435, default 1306):
    Using default value 1306
    Last cylinder, +cylinders or +size{K,M,G} (1306-1435, default 1435):
    Using default value 1435
    
    Command (m for help): p
    
    Disk /dev/sda: 11.8 GB, 11811160064 bytes
    255 heads, 63 sectors/track, 1435 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x000a4c74
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14         144     1052257+  82  Linux swap / Solaris
    /dev/sda3             145        1305     9325732+  8e  Linux LVM
    /dev/sda4            1306        1435     1044225    5  Extended
    
    Command (m for help): n
    First cylinder (1306-1435, default 1306):
    Using default value 1306
    Last cylinder, +cylinders or +size{K,M,G} (1306-1435, default 1435):
    Using default value 1435
    
    Command (m for help): p
    
    Disk /dev/sda: 11.8 GB, 11811160064 bytes
    255 heads, 63 sectors/track, 1435 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x000a4c74
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14         144     1052257+  82  Linux swap / Solaris
    /dev/sda3             145        1305     9325732+  8e  Linux LVM
    /dev/sda4            1306        1435     1044225    5  Extended
    /dev/sda5            1306        1435     1044193+  83  Linux
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table.
    The new table will be used at the next reboot.
    Syncing disks.
    
    dcsrvmonits1:/home/invitado # ls /dev/sda5
    ls: cannot access /dev/sda5: No such file or directory
    

    We can see how the partition-table reload ioctl fails and the new partition is not detected… Let's try partprobe:

    dcsrvmonits1:/home/invitado # partprobe /dev/sda
    dcsrvmonits1:/home/invitado # ls /dev/sda5
    /dev/sda5
    

    That works. Curiously enough, no message shows up in dmesg.

  4. Set up LVM… check the manual to learn more:
    # pvcreate /dev/sda5
    File descriptor 5 left open
      No physical volume label read from /dev/sda5
      Physical volume "/dev/sda5" successfully created
    # pvdisplay
    File descriptor 5 left open
      "/dev/sda5" is a new physical volume of "1019.72 MB"
      --- NEW Physical volume ---
      PV Name               /dev/sda5
      VG Name
      PV Size               1019.72 MB
      Allocatable           NO
      PE Size (KByte)       0
      Total PE              0
      Free PE               0
      Allocated PE          0
      PV UUID               LpvfCq-gzrR-tjC5-N3E2-dA6x-Hmoi-UlPzBK
    
    # vgcreate datavg /dev/sda5
    File descriptor 5 left open
      Volume group "datavg" successfully created
    # vgdisplay datavg
    File descriptor 5 left open
      --- Volume group ---
      VG Name               datavg
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  1
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                0
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               1016.00 MB
      PE Size               4.00 MB
      Total PE              254
      Alloc PE / Size       0 / 0
      Free  PE / Size       254 / 1016.00 MB
      VG UUID               1wC2Vb-omIq-zpDJ-pnUg-oU2f-HaXP-sp29XD
    
    # lvcreate -n reposlv datavg -L 1016.00M
    File descriptor 5 left open
      Logical volume "reposlv" created
    # lvdisplay
      --- Logical volume ---
      LV Name                /dev/datavg/reposlv
      VG Name                datavg
      LV UUID                bIrslV-vSlB-elpP-no2v-B1yt-FO2G-CMjq9l
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                1016.00 MB
      Current LE             254
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:7
    
  5. Done; now we format, mount and synchronize:
    # mkfs.xfs /dev/datavg/reposlv
    meta-data=/dev/datavg/reposlv    isize=256    agcount=4, agsize=65024 blks
             =                       sectsz=512   attr=2
    data     =                       bsize=4096   blocks=260096, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0
    log      =internal log           bsize=4096   blocks=1200, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=0
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    # mkdir /mnt/repos.new
    # mount /dev/datavg/reposlv /mnt/repos.new
    

    Clone the data:

    # rsync -av --delete /mnt/repos/ /mnt/repos.new
    
  6. Stop the service for a moment, resync, swap the mount point and start the service again. It can all be done in a script in one go:
    apachectl stop
    rsync -av --delete /mnt/repos/ /mnt/repos.new
    umount /mnt/repos
    umount /mnt/repos.new
    mount /dev/datavg/reposlv /mnt/repos
    apachectl start
    # Update fstab
    sed -i 's|/dev/sda2|/dev/datavg/reposlv|' /etc/fstab
    
  7. Finally, after checking that everything is OK, add the old space to the VG and grow the LV with it, all online :) (a quick check is shown right after these commands):
    pvcreate /dev/sda2
    vgextend datavg /dev/sda2
    lvextend -l +100%FREE /dev/datavg/reposlv
    xfs_growfs /dev/datavg/reposlv
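
    To double-check the result (the sizes will obviously differ in your case):

    pvs
    vgs datavg
    lvs datavg
    df -h /mnt/repos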
    

Simple, isn't it?