Posts tagged ‘efficiency’

Any Linux & Unix admin knows this fact: GNU tools are MUCH better than the native AIX, BSD, Solaris or HP-UX tools.

GNU tools have far fewer bugs, much more functionality and many more options, localization, better documentation, they are the de facto standard, most scripts are built on top of GNU tools, etc, etc, etc. Why the hell don't the vendors throw out their ugly, buggy, limited tools and install the GNU tools on their systems by default???

Here is an example of a weird behaviour of the ‘dd’ command on the AIX platform: with the skip=<Num. blocks> parameter, ‘dd’ skips the blocks but it actually reads them (no matter that you are working on a filesystem with random file access). So if you are working with big files (in my case, 50GB) it has to read ALL the preceding blocks before reaching the requested position. That means huge I/O, memory wasted on cache, etc…

IBM guys: don't you know that there is an lseek(2) function?

Here is an example of the time it takes to read 2MB from a big file, skipping 1000MB. The native ‘dd’ command takes 12 seconds:

$ time /usr/bin/dd if=a_big_big_file.data skip=1000 bs=1M count=2 of=/dev/null
2+0 records in.
2+0 records out.

real    0m12.059s
user    0m0.013s
sys     0m1.419s

With the GNU version, it takes less than a second:

$ time /opt/freeware/bin/dd if=a_big_big_file.data skip=1000 bs=1M count=2 of=/dev/null
2+0 records in
2+0 records out

real    0m0.024s
user    0m0.002s
sys     0m0.006s

Note: You can find GNU ‘dd’ in the coreutils package of the AIX Toolbox for Linux Applications.

Update: I contacted IBM support and they told me that with the option conv=iblock, ‘dd’ will behave as expected. But IMHO the documentation does not explicitly say that:

iblock, oblock
Minimize data loss resulting from a read or write error on direct access devices. If you specify the iblock variable and an error occurs during a block read (where the block size is 512 or the size specified by the ibs=InputBlockSize variable), the dd command attempts to reread the data block in smaller size units. If the dd command can determine the sector size of the input device, it reads the damaged block one sector at a time. Otherwise, it reads it 512 bytes at a time. The input block size (ibs) must be a multiple of this retry size. This option contains data loss associated with a read error to a single sector. The oblock conversion works similarly on output.
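So if support is right, the fast version on AIX should be something like the command below. I am only relaying their suggestion, not a new benchmark, so take the exact effect of conv=iblock on seeking with a grain of salt:

$ time /usr/bin/dd if=a_big_big_file.data skip=1000 bs=1M count=2 conv=iblock of=/dev/null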


I will describe my configuration of a puppetmaster server and a Puppet Dashboard server running inside Debian's Apache installation, but:

  • Using a custom user, not www-data or root. This is good to keep all the Puppet configuration and data under a dedicated user instead of www-data or root.
  • Using a custom configuration directory, not the default Apache directory /etc/apache2.
  • This way you can isolate the Puppet server from the rest of the Apache applications (see the sketch after this list).
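In practice, this means a second, dedicated Apache instance with its own configuration tree and its own User/Group. A minimal sketch of the idea (the /etc/apache2-puppet path and the puppet user are placeholders of mine, not fixed names):

# /etc/apache2-puppet/apache2.conf would contain, among the usual directives:
#   User puppet
#   Group puppet
# The instance is then started independently of the system Apache:
/usr/sbin/apache2 -f /etc/apache2-puppet/apache2.conf -k start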

First of all, I recommend reading the official documentation.

It is assumed that you already have a running Puppet installation.

(more…)

To integrate a Linux system with a centralized user directory (like Microsoft Active Directory), the usual solution is to configure Kerberos for authentication (password/credential checking) and LDAP for authorization and access control. The “standardized” way to implement this is using libpam_krb5, libnss_ldap (by PADL Software) and nscd (from glibc).
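As a reminder of how the pieces fit together: PAM (libpam_krb5) checks the password against Kerberos, while user and group lookups go through /etc/nsswitch.conf to libnss_ldap, with nscd caching the results. A typical, simplified nsswitch.conf excerpt would look something like this (the exact values depend on your distribution):

passwd: files ldap
group:  files ldap
shadow: files ldap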

Kerberos integration works pretty well and I have not had too many issues with it, but I cannot say the same about libnss_ldap and nscd.

In this post I will explain the annoying problems that you can run into using libnss_ldap and nscd, and propose some solutions and configurations that will make them work properly. I also recommend reading a previous post about the problems and solutions when connecting a Unix server to Active Directory (Spanish post).

Read this article if you are experiencing problems with nscd+libnss_ldap (quoting http://www.nico.schottelius.org/blog/nscd-bugs/):

  • Sometimes it consumes 100% cpu (and does not stop that until being killed)
  • Sometimes it just crashes.
  • Sometimes it causes users to “vanish”
  • Sometimes it hangs and thus slows down the whole system
  • Sometimes it makes the whole host slow
  • Sometimes logging in to a host or running sudo/su takes a very long time, or never completes
  • Sometimes sudo or su dies with “Segmentation Fault”
  • Sometimes a simple ‘ls’ command takes minutes.
  • etc…
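When you hit one of these symptoms, a quick first check is whether lookups are slow at the NSS layer itself: getent exercises the same libnss_ldap path as login, sudo or ls. For example (the username is just a placeholder):

$ time getent passwd someuser
$ /etc/init.d/nscd stop     # repeat the lookup with the cache out of the way
$ time getent passwd someuser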

(more…)

The Puppet architecture needs a client that connects to the server to load the configuration, using a pull model. But I do not like having more and more daemons around, and some people suggest avoiding that, so I decided to execute Puppet with the ‘--onetime’ option from cron.

Obviously, I want to configure this using Puppet itself. And we must ensure that the clients run at different times, not all at the same minute.

I searched the net and found several approaches to do this. There are also feature requests.

I read somewhere that the new fqdn_rand() function could be used, as proposed in the feature request and posted in this mail from Brice Figureau. I could not find where the hell the snippet was. In the end, I found this pastie by Jhon Goebel.

I will post my version here just to keep it written down.

# Two host-specific minutes, 30 minutes apart, so that the
# clients do not all hit the puppetmaster in the same minute.
$first = fqdn_rand(30)
$second = fqdn_rand(30) + 30
cron { "cron.puppet.onetime":
  command => "/srv/scripts/puppet/puppet.ctl.sh onetime > /dev/null",
  user    => "root",
  minute  => [ $first, $second ],
  require => File["/srv/scripts/puppet/puppet.ctl.sh"],
}
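puppet.ctl.sh is my own wrapper script (managed by the File resource above) and its contents are outside the scope of this post, but for the ‘onetime’ case it boils down to something like this sketch, where the exact agent command depends on your Puppet version:

#!/bin/sh
# Hypothetical sketch of /srv/scripts/puppet/puppet.ctl.sh:
# run the puppet agent once, in the foreground, and exit.
case "$1" in
  onetime)
    exec puppet agent --onetime --no-daemonize
    ;;
  *)
    echo "usage: $0 onetime" >&2
    exit 1
    ;;
esac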

… this is another random thought from keymon (https://keymon.wordpress.com)


I want to use SSH to remotely monitor some hosts. The problem with SSH is the overhead of creating a new connection for each check.

But we can use the Control Master functionality of OpenSSH (see http://www.revsys.com/writings/quicktips/ssh-faster-connections.html). With it, ssh connects only once and reuses the connection.

This is great for monitoring scripts.

To use Control Master, you have to execute something like:

ssh -o "ControlMaster=yes" -o "ControlPath=/somepath/ssh-%r@%h:%p" -N user@host

The idea: create a remote user on each monitored host, establish a persistent SSH connection, and use Control Master for the monitoring scripts. To daemonize and control the ssh master connections, I will use runit.
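With runit, each monitored host gets a service directory whose run script keeps the master connection alive in the foreground; if the connection dies, runit restarts it. A minimal sketch (the service path and the monitor user are placeholders):

#!/bin/sh
# e.g. /etc/sv/ssh-master-host1/run
# -N keeps the connection open without running a remote command,
# and ssh stays in the foreground so runit can supervise it.
exec ssh -o "ControlMaster=yes" \
         -o "ControlPath=/somepath/ssh-%r@%h:%p" \
         -N monitor@host1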

(more…)