For the last few years I have had the same problem: I was running Windows on my desktop while managing Linux/Unix servers. Of course, to minimize the pain, I use Cygwin and/or coLinux, which make my life easier.

Often I need to open remote files locally, but it is so tedious to find them in the Samba share… and then I found this tool: DoIt ( http://www.chiark.greenend.org.uk/~sgtatham/doit/ ), from Simon Tatham, the PuTTY author.

It allows you to execute commands on your Windows box from the remote server, automatically translating the paths (in case you are sharing them over Samba).

Fast installation

  1. Client on the Unix side (1):
    1. Download and compile:
    curl http://www.chiark.greenend.org.uk/~sgtatham/doit/doit.tar.gz | tar -xvzf -
    cd doit
    cc -o doitclient doitclient.c doitlib.c -lsocket -lnsl -lresolv
    
    2. Install. I use stow for my ad hoc binaries:
    ##  Preset variables.
    LOCAL_BINARIES=~/local
    PLATFORM="$(uname -s)-$(uname -p)"
    PATH=$PATH:$LOCAL_BINARIES/$PLATFORM/bin
    
    STOW_HOME=$LOCAL_BINARIES/$PLATFORM/stow
    
    mkdir -p $STOW_HOME/doit/bin
    cp doitclient $STOW_HOME/doit/bin
    for i in wf win winwait wcmd wclip www wpath; do
     ln -s doitclient $STOW_HOME/doit/bin/$i
    done
    
    cd $STOW_HOME
    stow doit
    
    3. Shared secret setup and configuration:
    dd if=/dev/random of=$HOME/.doit-secret bs=64 count=1
    chmod 640 $HOME/.doit-secret 
    echo "secret $HOME/.doit-secret" > $HOME/.doitrc
    

    Then set the mappings as described in the documentation. For instance:

    host
      map /home/ \\sambaserver\
    4. If you are using su (or sudo, which resets the environment), you will lose the SSH_CLIENT variable, but you can set the $DOIT_HOST variable instead. You can use this:
    cat <<"EOF" >> ~/.bashrc
    # DOIT_HOST variable, for the DoIt tool (Integration with windows desktops)
    export DOIT_HOST=$(who -m | sed 's/.*(\(.*\)).*/\1/')
    EOF
    
  2. Set up the client on the Windows box. You can copy the .doit-secret file over, or use Samba to access your home directory.
    Just create a shortcut to “doit.exe <secret file>”, for instance:

    \\sambaserver\keymon\local\Linux-x86\stow\doit\doit.exe \\sambaserver\keymon\.doit-secret
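Once both sides are running, each of the doitclient symlinks created in step 2 behaves as a separate command on the Unix side. A quick sketch of what you can do (the file names here are just examples; see the doit documentation for the full command list):

    # Open the current directory in Windows Explorer
    wf .
    # Start a program on the Windows desktop
    win notepad
    # Send text to the Windows clipboard
    echo "hello from unix" | wclip
    # Translate a Unix path into its Windows (UNC) equivalent
    wpath ~/report.txt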

Conclusions

It is really cool, and it really works.

My only concern is the key, which has to be shared. One solution could be to use environment variables, or even the PuTTY ‘Answerback to ^E’ feature ( http://tartarus.org/~simon/putty-snapshots/htmldoc/Chapter4.html#config-answerback), but I am not sure how to implement it.

(1) On Solaris, compiling with GCC, I got this error:

/var/tmp//cc5ZYGYW.o: In function `main':
doitclient.c:(.text+0x29e8): undefined reference to `hstrerror'
collect2: ld returned 1 exit status

This is solved by adding -lsocket.

Sometimes you need to know which user originally logged in to a Linux/Unix server, but after several “sudo” or “su” commands (and other programs that change the effective user) that information is lost.

You can try to determine the user from the process tree, walking up through the parent processes and checking their owners and ttys.
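Manually, the idea looks like this (the PIDs and output here are hypothetical):

$ ps -o user= -o comm= -o tty= -p $$     # who owns my current shell?
root     bash     pts/0
$ ps -o ppid= -p $$                      # find its parent...
 4711
$ ps -o user= -o comm= -o tty= -p 4711   # ...and keep walking up the chain
jdoe     sshd     ?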

With this idea, I wrote this small script: whowasi.sh

#!/usr/bin/env bash
# This script determines which user logged in to the
# machine to run the given process.
# 
SCRIPT_NAME=$0

# Command names to be considered as login commands
LOGIN_PROGRAMS="sshd telnetd login" 

# Get all pids of the parents of a pid
get_parent_pids() {
    echo $1
    ppid=$(ps -o ppid -p $1 | awk '/[0-9]+/ {print $1}' )
    [ -z "$ppid" ] && return       # no parent found
    [ "$ppid" -le 1 ] && return    # stop at init (pid 1) or the kernel (pid 0)
    get_parent_pids $ppid
}

# Get users of parent process of a pid
get_parent_users() {
	get_parent_pids $1 | xargs -n1 ps -o user= -p | uniq | awk '{print $1}'
}

get_parent_users_commands() {
	get_parent_pids $1 | xargs -n1 ps -o user= -o comm= -p | uniq
}

get_parent_users_ttys() {
	get_parent_pids $1 | xargs -n1 ps -o user= -o tty= -p | uniq
}


get_firstuser_after_login() {
	cmd="egrep -B1 -m1" # Get the line before, and stop on first match
	for p in $LOGIN_PROGRAMS; do 
		cmd="$cmd -e '^(.*/)?$p\$'" 
	done
	get_parent_users_commands $1 | eval $cmd | awk '{ print $1; exit; }'
}

get_firstuser_after_root() {
	get_parent_users $1 | grep -B1 -m1 root | awk '{print $1;exit;}'
}

get_firstuser_with_tty() {
	get_parent_users_ttys $1 | grep -B1 -m1 \?  | awk '{print $1;exit;}'
}


print_help() {
	cat <<EOF
Usage: $SCRIPT_NAME [Option...] [pid]

Prints the users that were used to start a process.

By default it will use the current process.
	
Options
	-h:		This help.
	-t:		Print the user of the first process having a valid tty (not ?) 
			This is the default behaviour.
	-a:		Print all processes.
	-r:		Print only the user started after the first root (usually the one that logged in)
	-l:		Print the user after a login program ($LOGIN_PROGRAMS)
	        Requires GNU egrep.
EOF
}

mode=tty
while true; do
	case $1 in
		"")
			break
		;;
		"-a")
			mode=all
		;;
		"-l")
			mode=login
		;; 
		"-r")
			mode=root
		;;
		"-t")
			mode=tty
		;;
		"-h")
			printhelp
			exit
		;;
		"-*")
			echo "$SCRIPT_NAME: Unknown option '$1'"
			printhelp
			exit
		;;
		*)
			args="$args $1"
		;;
	esac
	shift
done
set -- $args

pid=${1:-$$}

if ! ps -p $pid >/dev/null 2>&1; then
	echo "$SCRIPT_NAME: Unable to find process '$pid'"
	exit 1
fi

case $mode in 
	all)
		get_parent_users $pid
	;;
	login)
		get_firstuser_after_login $pid
	;;
	root)
		get_firstuser_after_root $pid
	;;
	tty)
		get_firstuser_with_tty $pid
	;;
esac
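For example, after logging in over ssh and becoming root, the script can recover the original user (a hypothetical session; names and output are made up):

jdoe@server:~> sudo su -
server:/ # ./whowasi.sh        # default: first user owning a real tty
jdoe
server:/ # ./whowasi.sh -a     # all users in the parent process chain
root
jdoe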

In this case I needed to patch lftp, so I created a local Portage overlay (the $EPREFIX paths appear because this is a Gentoo Prefix installation).

First we set up the configuration to support the overlay path (this is done only once):


export PORTDIR_OVERLAY="$EPREFIX/usr/local/portage"
cat <<EOF >>$EPREFIX/etc/make.conf
# Overlay
PORTDIR_OVERLAY="$PORTDIR_OVERLAY"
EOF

And then, for any package, we just have to copy the ebuild and its files, and add the new patch (copying the file and updating the ebuild):


# To create an overlay version of any package, just change these variables
pkg=net-ftp/lftp
pkgvers=lftp-4.3.1

# Copy the ebuild
mkdir -p $PORTDIR_OVERLAY/$pkg
cp $EPREFIX/usr/portage/$pkg/$pkgvers.ebuild  $PORTDIR_OVERLAY/$pkg
cp -R $EPREFIX/usr/portage/$pkg/files   $PORTDIR_OVERLAY/$pkg/files

# Make your changes.
# E.g. a simple modification: add a patch and reference it in the ebuild:
#  cp lftp-solaris-2.10-socket.patch $PORTDIR_OVERLAY/$pkg/files/lftp-solaris-2.10-socket.patch
#  joe $PORTDIR_OVERLAY/$pkg/$pkgvers.ebuild
#    +> Add to src_prepare(): epatch "${FILESDIR}/${PN}-solaris-2.10-socket.patch"

# Sign the ebuild
ebuild $PORTDIR_OVERLAY/$pkg/$pkgvers.ebuild digest
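After regenerating the digest, Portage should pick up the overlay ebuild instead of the one in the main tree; a minimal sketch of installing it (assuming the standard overlay precedence):

# Install the overlay version (the atom matches $pkg and $pkgvers above)
emerge -av "=net-ftp/lftp-4.3.1"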

Be lazy man!

Published: July 20, 2011 in Uncategorized

I can only say that I totally agree with the comments in this post. I have always proudly said that I am a lazy guy, and I would like to be even lazier.

Get the return code of a command before a pipe in Bash

Published: February 10, 2011 in Uncategorized

Today I discovered a great bash feature:

hrivas@ADWMU001:~> false | cat ; echo $? ${PIPESTATUS[0]}
0 1

Great!
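In fact, PIPESTATUS is an array with one entry per stage of the last pipeline, so you can inspect every return code:

hrivas@ADWMU001:~> false | true | false ; echo "${PIPESTATUS[@]}"
1 0 1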

If you are behind a proxy that allows HTTPS connections, you can use GitHub via SSH without problems. To do so, you can use the great tool connect.c ( http://bent.latency.net/bent/git/goto-san-connect-1.85/src/connect.html). As described on its homepage, this program tunnels a connection through a proxy, allowing SSH to reach servers from behind it.

You can configure connect as the ProxyCommand for the ssh.github.com and github.com hosts in ~/.ssh/config, setting the Port to 443 as well.

Basically, the process is:

export PROXY=proxy:80

http_proxy=http://$PROXY wget http://www.taiyo.co.jp/~gotoh/ssh/connect.c -O /tmp/connect.c
gcc /tmp/connect.c -o ~/bin/connect 

cat >> ~/.ssh/config  <<EOF

Host ssh.github.com github.com
  Port 443
  HostName ssh.github.com
  IdentityFile $HOME/.ssh/id_rsa
  ProxyCommand $HOME/bin/connect -H proxy:80 %h %p

EOF
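You can verify the tunnel before cloning (the exact greeting depends on your GitHub account):

ssh -T git@github.com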

And you are ready!

git clone git@github.com:keymon/facter.git facter

Easy, isn’t it?

Check the connect.c documentation if you need to authenticate against the proxy.

When you create a “Virtual Target Disk” (VTD) on a VIOS, there is no documented way to define or change the LUN number that it presents to the client partition. But there are situations where you might need to update it:
  1. In a dual-VIOS environment, to have the same LUNs in both clients (easier to administer).
  2. In a redundant configuration, when you need to start LPARs on different hardware using SAN disks. For instance, we use this configuration for our backup datacenter, where we have all the SAN disks mirrored.

In this post I describe how to update this LUN. The idea is basically:

  • Set the VTD device to Defined in the VIOS.
  • Update the ODM database: you have to update the attribute ‘LogicalUnitAddr’ in ObjectClass ‘CuAt’.
  • Run ‘cfgmgr’ on the virtual host adapter (vhostX). This re-enables the VTD device and reloads the LUN number. Running cfgmgr on the VTD device itself does not work.

So, with commands:

$ oem_setup_env
# bash

# lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost21         U9117.MMA.XXXXXXX-V2-C34                     0x00000016

VTD                   host01v01
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower0
Physloc               U789D.001.BBBBBBB-P1-C3-T2-L75

# ioscli mkvdev -vadapter vhost21 -dev host01v99 -vdev hdiskpower1
# cfgmgr -l vhost21

# lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost21         U9117.MMA.XXXXXXX-V2-C34                     0x00000016

VTD                   host01v01
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower0
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L75

VTD                   host01v99
Status                Available
LUN                   0x8300000000000000
Backing device        hdiskpower1
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L77

# rmdev -l host01v99
host01v99 Defined

# odmget -q "name=host01v99 and attribute=LogicalUnitAddr"  CuAt
CuAt:
  name = "host01v99"
  attribute = "LogicalUnitAddr"
  value = "0x8300000000000000"
  type = "R"
  generic = "D"
  rep = "n"
  nls_index = 6

# odmchange -o CuAt -q "name = host01v99 and attribute = LogicalUnitAddr" <<"EOF"
CuAt:
  name = "host01v99"
  attribute = "LogicalUnitAddr"
  value = "0x8100000000000000"
  type = "R"
  generic = "D"
  rep = "n"
  nls_index = 6
EOF

# odmget -q "name=host01v99 and attribute=LogicalUnitAddr"  CuAt
CuAt:
  name = "host01v99"
  attribute = "LogicalUnitAddr"
  value = "0x8100000000000000"
  type = "R"
  generic = "D"
  rep = "n"
  nls_index = 6

# cfgmgr -l vhost21
# lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost21         U9117.MMA.XXXXXXX-V2-C34                     0x00000016

VTD                   host01v01
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower0
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L75

VTD                   host01v99
Status                Available
LUN                   0x8100000000000000
Backing device        hdiskpower1
Physloc               U789D.001.JJJJJJJ-P1-C3-T2-L77

In the client partition, you can scan for the new disk, and it will have the LUN 0x81:

root@host01:~/# cfgmgr -l vio0
root@host01:~/# lscfg -vl hdisk5
  hdisk5           U9117.MMA.XXXXXXX-V22-C3-T1-L8100000000000000  Virtual SCSI Disk Drive

Note: I altered the output of these commands to remove my company’s information.

Update: I created a script to do this: change_vtd_lun.sh

Rediscovering reStructuredText

Published: November 2, 2010 in fast-tip, Misc, Technical

reStructuredText

I am “rediscovering” reStructuredText (also known as rst or ReST).

It is a markup language (like HTML or SGML), but human friendly, similar to the language of wikis. See the reStructuredText page on Wikipedia for more details.

It may sound silly, but the truth is it can be used for many things, as a lingua franca for simple formatted text: documentation, ticketing systems, manuals…

But you can go further:

  • blog entries (this post is written in ReST), using this small python script: http://unmaintainable.wordpress.com/2008/03/22/using-rst-with-wordpress/. Going further, some people even use a version control system together with this script ( http://tadhg.com/wp/2009/07/14/blog-workflow-with-restructuredtext/ )! That gets an idea going… wouldn’t it be cool to have a blog that you could manage directly with git?
  • Or even presentations!, as described on this page of docutils itself:
    • Original in ReST: http://docutils.sourceforge.net/docs/user/slide-shows.txt
    • In plain HTML: http://docutils.sourceforge.net/docs/user/slide-shows.txt
    • As an S5 ( http://meyerweb.com/eric/tools/s5/ ) presentation: http://docutils.sourceforge.net/docs/user/slide-shows.s5.html

And here is how I generated this post (without this part, to avoid an infinitely recursive post :)):

    cat <<EOF | ./rst2wp
    reStructuredText
    ----------------

    I am *"rediscovering"* reStructuredText_ (also known as *rst* or *ReST*).

    It is a markup language (like HTML or SGML), but *human friendly*, similar to the language of wikis. See `the reStructuredText page on Wikipedia <http://en.wikipedia.org/wiki/ReStructuredText>`_ for more details.

    It may sound silly, but the truth is it can be used for many things, as a *lingua franca* for simple formatted text: documentation, ticketing systems, manuals...

    But you can go further:

    - blog entries (this post is written in ReST), using `this small python script <http://unmaintainable.wordpress.com/2008/03/22/using-rst-with-wordpress/>`_...

      Going further, `some people use a version control system together with this script <http://tadhg.com/wp/2009/07/14/blog-workflow-with-restructuredtext/>`_! That gets an idea going... wouldn't it be cool to have a blog that you could manage directly with git?

    - Or even presentations!, as described on this page of docutils itself:

      * Original in ReST: http://docutils.sourceforge.net/docs/user/slide-shows.txt

      * In plain HTML: http://docutils.sourceforge.net/docs/user/slide-shows.txt

      * As an `S5 <http://meyerweb.com/eric/tools/s5/>`_ presentation: http://docutils.sourceforge.net/docs/user/slide-shows.s5.html

    And here is how I generated this post (without this part, to avoid an infinitely recursive post :))::

     cat <<EOF | ./rst2wp
    EOF

Any Linux & Unix admin knows this fact: GNU tools are MUCH better than the native AIX, BSD, Solaris or HP-UX tools.

GNU tools have far fewer bugs, much more functionality and options, localization, better documentation; they are a de facto standard, and most scripts are built on top of GNU tools, etc., etc., etc. Why the hell don’t they throw out their ugly, buggy, limited tools and install the GNU tools on their systems by default???

Here is an example of a weird behaviour of the ‘dd’ command on AIX: with the skip=<num. blocks> parameter, ‘dd’ skips the blocks, but it actually reads them (no matter that the filesystem supports random access to files). So, if you are working with big files (in my case, 50GB), it has to read ALL the blocks before reaching the requested position. That means huge I/O, memory wasted on cache, etc…

IBM guys: don’t you know that there is an lseek(2) function?

Here is an example of the time it takes to read 2MB from a big file, skipping the first 1000MB. The native ‘dd’ command takes 12 seconds:

$ time /usr/bin/dd if=a_big_big_file.data skip=1000 bs=1M count=2 of=/dev/null
2+0 records in.
2+0 records out.

real    0m12.059s
user    0m0.013s
sys     0m1.419s

With GNU’s version, less than a second:

$ time /opt/freeware/bin/dd if=a_big_big_file.data skip=1000 bs=1M count=2 of=/dev/null
2+0 records in
2+0 records out

real    0m0.024s
user    0m0.002s
sys     0m0.006s

Note: you can find GNU dd in the coreutils package of the AIX Toolbox for Linux Applications.

Update: I contacted IBM support and they told me that, using the option conv=iblock, “dd” will behave as expected. But IMHO the documentation does not explicitly say that:

iblock, oblock
Minimize data loss resulting from a read or write error on direct access devices. If you specify the iblock variable and an error occurs during a block read (where the block size is 512 or the size specified by the ibs=InputBlockSize variable), the dd command attempts to reread the data block in smaller size units. If the dd command can determine the sector size of the input device, it reads the damaged block one sector at a time. Otherwise, it reads it 512 bytes at a time. The input block size (ibs) must be a multiple of this retry size. This option contains data loss associated with a read error to a single sector. The oblock conversion works similarly on output.
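So, presumably, the native invocation that seeks properly is just the same command with conv=iblock added (I have not re-run the timing to confirm the speedup):

time /usr/bin/dd if=a_big_big_file.data conv=iblock skip=1000 bs=1M count=2 of=/dev/null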

 

In this Google post you can see Google’s analysis of Git vs. Mercurial, in which they try to justify why they do not support Git. Even more interesting than the article itself are the comments.

The truth is that it is clearly a very biased study. They present the differences between the two always assuming that what Git does is worse. They also omit many of Git’s advantages (such as commits being nodes that can be reorganized), or, when they do mention them, they do not present them as advantages (the ability to comfortably work with several branches).

They even fall into fallacies, like claiming that Git allows you to lose history with “git push –force”. To begin with, it is a –force: you are doing something unusual and hacky. And second, the history is NOT lost.

In my opinion, the main real disadvantage is the Windows support, which can cause the occasional problem… plus something they do not mention: Git has a big (and ever-growing) community behind it, a community of “hackers”. And that is generally a good thing…

In conclusion, even if it hurts Google, the reality is that Git is growing explosively: every day, dozens of projects on Google Code hang the little “Project hosted on GitHub” sign in their code section.

It does not really matter if they refuse to accept that reality; at least they let you host a project, with its wiki, tracker and the rest, while keeping the code on github.com.