
Posts

Showing posts from 2013

Big-O Simplified!

Big-O notation describes patterns of activity of systems in terms of standard mathematical models, in which an amount of resources (time, quantity, cost, etc.) determines scope. The patterns of activity must contain input that can be isolated and modified for improvement without negatively affecting the desired outcomes of their systems. Big-O helps systems analysts describe patterns of activity as best-case, acceptable-case, or worst-case. Big-O is used in business and computer science for documenting system behavior and for arguing for system improvements. Patterns of activity of systems are typically expressed as algorithms. In the simplest terms, Big-O is an estimate of the optimal efficiency of algorithms. When comparing two or more algorithms for optimal efficiency, the following criteria must be met: Algorithms were designed using the same language. Algorithms result in the same measure (e.g. running time). Algorithms use the same set of operations. Algorithms have an est
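As a rough illustration of the idea (not from the post itself): a halving-based algorithm such as binary search does work proportional to log2(n), while a linear scan does work proportional to n. A minimal shell sketch that counts the halvings:

```shell
# Count how many times n can be halved before reaching 1 --
# the worst-case comparison count of a binary search over n sorted items.
n=1000000
steps=0
i=$n
while [ "$i" -gt 1 ]; do
    i=$((i / 2))
    steps=$((steps + 1))
done
echo "binary search over $n items: about $steps comparisons (O(log n))"
echo "linear search over $n items: up to $n comparisons (O(n))"
```

Both loops satisfy the post's comparison criteria (same language, same measure, same operations), which is what makes the O(log n) vs. O(n) comparison meaningful.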

Uploading files through "shaped" connections without traffic control

Technically shaping is limiting the rate at which packets are sent over a connection. If you want to continue surfing the web or interacting with websites while uploading your files to online storage, you have to find some way to shape your connection to your online storage. Not only will you not be able to interact with websites while uploading files, but your transfer statistics won't be accurate. There are really only two ways you can shape a connection without traffic control. You can either use a relay that supports delaying packets or find some way to slowdown the rate at which your file is sent. In any case, the first thing you have to do is determine how much of your bandwidth you want to dedicate to uploading files. Then you have to convert your bandwidth to a unit measurement that is recognizable by the command rsync. When you're ready you can play with the next command. Here's a one-liner for transferring a file from a remote server via sshd then uploading it
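A sketch of the unit conversion the post describes, assuming rsync's --bwlimit option, which takes a rate in kilobytes per second (the hosts and paths in the comment are placeholders):

```shell
# Convert a bandwidth budget in megabits/s to the kilobytes/s
# that rsync's --bwlimit option expects (using 1 kB = 1000 bytes).
mbps=8                       # portion of the uplink reserved for the transfer
kbps=$((mbps * 1000 / 8))    # 8 Mbit/s -> 1000 kB/s
echo "rsync --bwlimit=$kbps"
# Hypothetical usage:
# rsync -av --bwlimit=$kbps -e ssh user@remote:/path/file /local/dir/
```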

Uploading files through Secure WebDAV using DAVfs

WebDAV is a protocol that facilitates uploading and downloading files over HTTP (port 80) and HTTPS (port 443). Whenever a WebDAV service is run over SSL, it is called Secure WebDAV. DAVfs is a file system interface to the WebDAV protocol; it works with both WebDAV and Secure WebDAV. The command mount uses DAVfs to present a WebDAV share as a regular file system so that other tools, scripts, services, and users can access the share's contents (as a file system with actual directories). Here's an easy solution for uploading files to your WebDAV account. These instructions work on Linux, FreeBSD, Solaris, and probably other operating systems too. 1. Make a local directory for transferring files: mkdir <your directory>; 2. Stop other processes and users from interfering with your transfers: chown root:root <your directory> && chmod 770 <your directory>; 3. Mount your online cloud share using davfs. Enter your password when the prompt appears askin
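Steps 1 and 2 above, sketched with a temporary directory so the permissions can be checked without touching the real mount point (chown root:root is left as a comment since it needs root, and the share URL is a placeholder):

```shell
# Make a transfer directory and restrict its permissions (post steps 1-2).
dir=$(mktemp -d)
# chown root:root "$dir"            # requires root
chmod 770 "$dir"
perms=$(stat -c %a "$dir")
echo "transfer directory $dir has mode $perms"
# Step 3 would then be (hypothetical share URL, needs davfs2 and root):
# mount -t davfs https://example.com/dav "$dir"
```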

Solution to udisks helper error

Debian 6.x/7.x relies on udisks to handle the accessing, reading and writing of storage media. The utility udisks is an interface to the org.freedesktop.UDisks service on the system message bus. One of the benefits of udisks is that it automatically mounts storage devices for users without superuser privileges, making the devices accessible via a UUID. Several problems have been attributed to udisks, like the following: Problem mounting external USB drive in Ubuntu 12.04 http://askubuntu.com/questions/150813/problem-mounting-external-usb-drive-in-ubuntu-12-04 Secure remove of external USB-HDD produces error https://bugs.launchpad.net/ubuntu/+source/udisks/+bug/466575 Safely removing device generates error https://bugs.freedesktop.org/show_bug.cgi?id=25657 The error message generated by udisks is always similar to: Error detaching: helper exited with exit code 1: Detaching device /dev/sdc USB device: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2) SYNCHRONIZE CACHE: FAILED: No such fil

Process Scheduling with Timeout/Watch in Debian 6.x

The commands timeout and watch are great tools to have for reliable, on-the-fly process scheduling, especially in cases where cron isn't working properly and fixing it would take too long. I've included short descriptions of both commands and an example for you to play with. The command timeout runs a process (command or script) for a period of time; when time runs out, timeout sends the signal you specified to the process, and if the process is still running after a further grace period, timeout sends the process a KILL signal. The command timeout accepts intervals of seconds (s), minutes (m), hours (h) and days (d) for its periods of time, and it accepts all of the signals accepted by the command kill for its signal argument. Here's its command syntax: timeout -s <signal> -k <grace period with suffix> <duration> <process> <process arguments> The command watch periodically runs a process (command or script) for
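The syntax above can be exercised with a process that deliberately overruns its limit; GNU timeout exits with status 124 when the time limit is what stopped the process:

```shell
# Run `sleep 10` under timeout: send SIGTERM after 2 seconds,
# escalate to SIGKILL 3 seconds later if the process is still alive.
timeout -s TERM -k 3 2 sleep 10
rc=$?
echo "timeout exit status: $rc"    # 124 means the time limit was hit
```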

Process Scheduling with Cron in Debian 6.x

The daemon cron re-reads its schedules every minute (assuming the cron service is running). Cron searches its spool directory "/var/spool/cron/crontabs" for new files named after user accounts in the file "/etc/passwd", then loads those new rules into memory. Users are not allowed to modify cron's spool directly. Users are supposed to modify one or more of cron's writable scheduling files and directories: "/etc/crontab", "/etc/cron.hourly", "/etc/cron.daily", "/etc/cron.weekly", "/etc/cron.monthly", and "/etc/cron.d". Access to those files and directories is controlled by entries added to and removed from cron's access control lists. Cron uses the writable scheduling file "/etc/crontab" to allow applications finer scheduling control than what the scheduling directories "/etc/cron.{hourly,daily,weekly,monthly}" can provide. Most system administrators use the file
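A sketch of the "/etc/crontab" format mentioned above: the first five fields are minute, hour, day of month, month and day of week, followed by the user to run as and the command. The first entry is the stock Debian hourly job; the second is a hypothetical example (the script path is a placeholder):

```
# m  h  dom mon dow user  command
17   *  *   *   *   root  cd / && run-parts --report /etc/cron.hourly
0    3  *   *   0   root  /usr/local/bin/weekly-backup.sh
```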

OS File System Comparison

A file system is a collection of functions that facilitate the reading, writing and executing of files in directories, including how files and directories are referenced. File systems change the usability and stability of an operating system and its programs (utilities and applications). When people talk about a Windows-type operating system, they assume it was built on the file systems "ntfs3" or "ntfs5". When people talk about a Linux-type operating system, they assume it was built on the file systems "ext2", "ext3" or "ext4", unless stated otherwise. When people talk about a BSD-type operating system, they assume it was built on "ufs" or "ufs2". When people talk about a Solaris-type operating system, they assume it was built on "ufs" or "zfs". Here's a not-so-interesting comparison: File System | OS Type | Max. File | Max. Vol. | Permissions | Encryption | Compression | Recoverable | BtrFS Lin

SSL/TLS OpenVPN with HMAC Authentication

These instructions work on CentOS 6.x, Debian 6.x, Knoppix 6.x and probably other Linux distributions. (Easy-RSA and the files kept in the directory /usr/share/doc aren't always available.) How the OpenVPN service runs on the server depends on how the service is configured to accept connections from clients. Additionally, clients have to be configured to communicate with that specific service. Server Instructions 1. Generate an RSA private key of 1024 bits encrypted using triple DES: openssl genrsa -des3 -out ca.key 1024; 2. Generate a new certificate signing request using your RSA private key: openssl req -new -key ca.key -out ca.csr; 3. Generate a self-signed root certificate that expires in 365 days: openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt; cp ca.crt /etc/openvpn/keys; scp ca.crt root@<client hostname>:/etc/openvpn/keys; 4. Generate a certificate file and a key file for the server, then sign them with the root certificate key: openssl req -new -key
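For the HMAC part of the title: OpenVPN's tls-auth option adds an HMAC signature to every control-channel packet, so the server can drop unsigned traffic before the TLS handshake. A hedged sketch of the relevant pieces (the key path is a placeholder; the shared key must be copied to each client, e.g. over scp):

```
# Generate the shared HMAC key once on the server:
#   openvpn --genkey --secret /etc/openvpn/keys/ta.key
# Server config line (second parameter 0):
tls-auth /etc/openvpn/keys/ta.key 0
# Client config line (second parameter 1):
tls-auth /etc/openvpn/keys/ta.key 1
```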

New Development Library Testing

Development libraries perform differently in each system. Never assume a library performs the same in your system as it does on the developer's website or in their documentation. Always verify the specific features of a library by debugging your own small test programs. 1. Make a new source file "test.c" for verifying a function of a library, then compile it using the command "gcc -Wall test.c -o test": #include <header file> int main(int argc, char **argv) {     int foo;    /* return value */     if (argc != 3)     {         printf("usage: test <function value 1> <function value 2>\n");         return -1;     }     foo = function(argv[1], argv[2]);     ...     printf("%d\n", foo);     return 0; } 2. Run the command strace to see how the function performs its operations: strace -C -f -i -tt -T -v ./test; strace -C -f -i -tt -T -p <running process pid> -u <username>

System V Shared Memory in Debian 6.x/Knoppix 6.x

Shared memory is a System V IPC object that allows processes to share the same pages of memory. Any process can create shared memory, modify it and leave it for other processes to modify later on. Shared memory exists until it is explicitly removed or the system shuts down. The data structure of the shared memory object is: struct shmid_ds {   struct ipc_perm shm_perm;   size_t shm_segsz;        /* size of segment */   pid_t shm_cpid;          /* PID of creator */   pid_t shm_lpid;          /* PID, last operation */   shmatt_t shm_nattch;     /* no. of current attaches */   time_t shm_atime;        /* time of last attach */   time_t shm_dtime;        /* time of last detach */   time_t shm_ctime;        /* time of last change */ }; These commands all list the shared memory objects in use on the system: cat /proc/sysvipc/shm; ipcs -m; ipcs -m -t; ipcs -m -p; ipcs -m -c; ipcs -m -l; ipcs -m -u; Create shared memory objects on the system: ipcmk -M <size in bytes> -p &
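The create/list/remove cycle described above, sketched with util-linux's ipcmk, ipcs and ipcrm (the segment size is arbitrary; ipcmk prints the new object's id on its last word):

```shell
# Create a 4 KiB shared memory segment, inspect it, then remove it.
out=$(ipcmk -M 4096)                 # e.g. "Shared memory id: 98307"
segid=$(echo "$out" | awk '{print $NF}')
ipcs -m -i "$segid"                  # show the segment's shmid_ds details
ipcrm -m "$segid"                    # remove it explicitly
echo "created and removed shared memory id $segid"
```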

System V Semaphore Arrays in Debian 6.x/Knoppix 6.x

A semaphore array is a System V IPC object that allows processes to synchronize access to shared resources. Any process can create a semaphore array, modify it and leave it for other processes to modify later on. Any process can immediately remove a semaphore array regardless of whether another process is using it. Semaphore arrays exist until they are removed or the system shuts down. The data structure of the semaphore array object is: struct semid_ds {   struct ipc_perm sem_perm;   time_t sem_otime;          /* last operation time  */   time_t sem_ctime;          /* last change time */   unsigned long sem_nsems;   /* count of sems in set */ }; These commands all list the semaphore array objects in use on the system: cat /proc/sysvipc/sem; ipcs -s; ipcs -s -t; ipcs -s -p; ipcs -s -c; ipcs -s -l; ipcs -s -u; Create semaphore array objects on the system: ipcmk -S <number of elements> -p <permission bits>; Remove semaphore array objects from the syst
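The same lifecycle for semaphore arrays, using the -S form of ipcmk mentioned above (the element count is arbitrary):

```shell
# Create a semaphore array with 2 semaphores, inspect it, then remove it.
out=$(ipcmk -S 2)                    # e.g. "Semaphore id: 32769"
semid=$(echo "$out" | awk '{print $NF}')
ipcs -s -i "$semid"                  # show the array's semid_ds details
ipcrm -s "$semid"                    # remove it explicitly
echo "created and removed semaphore array id $semid"
```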

System V Message Queues in Debian 6.x/Knoppix 6.x

A message queue is a System V IPC object that allows different or unrelated processes to exchange messages. Any process can create a message queue, modify it and leave it for other processes to modify later on. Any process can immediately remove a message queue regardless of whether another process is using it. Message queues exist until they are removed or the system shuts down. The data structure of the message queue object is: struct msqid_ds {   struct ipc_perm msg_perm;   msgqnum_t msg_qnum;    /* no of messages on queue */   msglen_t msg_qbytes;   /* bytes max on a queue */   pid_t msg_lspid;       /* PID of last msgsnd(2) call */   pid_t msg_lrpid;       /* PID of last msgrcv(2) call */   time_t msg_stime;      /* last msgsnd(2) time */   time_t msg_rtime;      /* last msgrcv(2) time */   time_t msg_ctime;      /* last change time */ }; These commands all list the message queue objects in use on the system: cat /proc/sysvipc/msg; ipcs -q; ipcs -q -t; ipc
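And the same lifecycle for message queues, using ipcmk's -Q flag (note a queue id of 0 is valid):

```shell
# Create a message queue, inspect it, then remove it.
out=$(ipcmk -Q)                      # e.g. "Message queue id: 0"
qid=$(echo "$out" | awk '{print $NF}')
ipcs -q -i "$qid"                    # show the queue's msqid_ds details
ipcrm -q "$qid"                      # remove it explicitly
echo "created and removed message queue id $qid"
```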

Self-signed SSL certificates for private servers

Self-signed SSL certificates aren't substitutes for commercial certificates on your publicly available servers, but they will prevent intruders from eavesdropping on or breaking into your services while you're using your service's configuration management application. OpenSSL comes preinstalled in almost all Linux and BSD operating systems, except for source-based operating systems. Here's a template for making your own certificates: 1. Generate an RSA private key of 1024 bits encrypted using triple DES: openssl genrsa -des3 -out server.key 1024 2. Generate a new certificate signing request (CSR) using your RSA private key: openssl req -new -key server.key -out server.csr Country Name (2 letter code) [default country code]: <country code> State or Province Name (full name) [default state]: <state> Locality Name (e.g. city) [default city]: <city> Organization Name (e.g. company) [default company name]: <company name> Organizati
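Steps 1 and 2 can also be run non-interactively, which is handy for scripting; this sketch drops the post's -des3 passphrase (so no prompt) and answers the CSR questions with -subj (all subject values are placeholders):

```shell
# Generate an unencrypted RSA key and a CSR without any prompts.
dir=$(mktemp -d)
openssl genrsa -out "$dir/server.key" 2048
openssl req -new -key "$dir/server.key" -out "$dir/server.csr" \
    -subj "/C=US/ST=State/L=City/O=Example/CN=localhost"
openssl req -noout -subject -in "$dir/server.csr"    # confirm the subject
```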

Setting up MySQL 5.x in CentOS/Debian 6.x

After you've installed MySQL and its required dependencies in your server, configure it using the following steps. 1. Start the MySQL service: /etc/init.d/mysql start; 2. Log into your MySQL service; the password should be empty: mysql -u root -p; 3. Update the password for the user account root of your MySQL service: UPDATE mysql.user SET Password=PASSWORD('password') WHERE User='root'; FLUSH PRIVILEGES; 4. Create a new MySQL database for testing purposes: CREATE DATABASE <db name>; 5. Add a new user account for accessing your new MySQL database: INSERT INTO mysql.user (User,Host,Password) VALUES('user name','host',PASSWORD('password')); FLUSH PRIVILEGES; 6. Grant all access to the database <db name>: GRANT ALL PRIVILEGES ON <db name>.* TO <user name>@<host>; FLUSH PRIVILEGES; 7. Change the default runlevels for your MySQL service: chkconfig --levels 35 mysql on; 8. Autom
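A hedged note on steps 4-6: inserting rows into mysql.user directly is fragile across MySQL versions; CREATE USER does the same job and keeps the privilege tables consistent. The names below are placeholders:

```sql
CREATE DATABASE testdb;
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON testdb.* TO 'appuser'@'localhost';
FLUSH PRIVILEGES;
```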

Setting up PostgreSQL v9.1 in Debian GNU/Linux 6.x

After you've installed PostgreSQL and its required dependencies in your Debian server, configure it using the following steps. 1. Check that the user postgres was automatically created for you: grep postgres /etc/passwd; 2. Start the PostgreSQL service: /etc/init.d/postgresql start; 3. Log in as the user postgres: su - postgres; 4. Log in to the PostgreSQL database using the interactive client application psql: psql <options> -U postgres; 5. Set a password for the user postgres in the PostgreSQL database: \password; 6. Create a new PostgreSQL database for testing purposes, then disconnect from your PostgreSQL service: create database <db name> owner=postgres; exit; 7. Edit the file "/etc/postgresql/9.1/main/postgresql.conf", which enables and disables settings for server connections to your PostgreSQL service. 8. Edit the file "/etc/postgresql/9.1/main/pg_hba.conf", which enables and disables settings for incoming network c
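The password step uses a psql backslash meta-command rather than plain SQL; a sketch of the in-psql portion of the steps (the database name is a placeholder):

```
postgres=# \password postgres
postgres=# CREATE DATABASE testdb OWNER postgres;
postgres=# \q
```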

Setting up PostgreSQL in RHEL/CentOS 6.x

After you've installed PostgreSQL and its required dependencies in your CentOS server, configure it using the following steps. 1. Add the user postgres to your server. You might have to change your server's user and group policy settings in the file "/etc/adduser.conf": adduser <options> postgres; 2. Create the directory data to store the data files of the PostgreSQL database: mkdir -p /usr/local/pgsql/data; 3. Change ownership of the directory data from root to postgres: chown <options> postgres /usr/local/pgsql/data; 4. Log in as the user postgres: su - postgres; 5. Initialize a default PostgreSQL database cluster with its data files stored in the directory "/usr/local/pgsql/data": initdb <options> -D /usr/local/pgsql/data; 6. Start the PostgreSQL database service: postgres <options> -D /usr/local/pgsql/data; 7. Create a new default PostgreSQL database: createdb <options> <db name>; 8. Automatically star

Setting up NFS v4.0 in Debian GNU/Linux 6.x

In Debian, the NFS service does not rely on a single application, but several utilities working together. The exact service names and their options depend on which packages you've installed to support the running of your NFS server. Services that support running your NFS service might be named some variation of nfsd, lockd, rquotad, mountd, and statd.   After you've installed NFS v4.0 and its required dependencies in your Debian server, there are only five steps to configure it. Server Instructions 1. Edit the file "/etc/exports" that's the access control list for serving directories of file systems to NFS clients: /<directory>  <hostname or fqdn>(options) ... /<directory>  <ip address>/<prefix length>(options) ... 2. Automatically start each service used by your NFS server on boot up:  update-rc.d <service> <options>; 3. Edit the file "/etc/hosts.allow" that's the hosts access control list
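A sketch of the "/etc/exports" format from step 1: one exported directory per line, each followed by the clients allowed to mount it and their mount options in parentheses (paths and addresses below are placeholders):

```
# /etc/exports -- access control list for the NFS server
/srv/share   client.example.com(rw,sync,no_subtree_check)
/srv/share   192.168.1.0/24(ro,sync,no_subtree_check)
```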

Setting up NFS v4.0 in RHEL/CentOS 6.x

After you've installed NFS v4.0 and its required dependencies in your CentOS server, there are only seven steps to configure it. Server Instructions 1. Edit the file "/etc/exports", which is the access control list for serving directories of file systems to NFS clients: /<directory>  <hostname or fqdn>(options) ... /<directory>  <ip address>/<prefix length>(options) ... 2. Change the default runlevels for the services used by your NFS server, so they start automatically on boot up: chkconfig --levels 35 nfs on; chkconfig --levels 35 portmap on; 3. On Debian-style systems, the equivalent of chkconfig is update-rc.d: update-rc.d portmap <options>; update-rc.d nfs <options>; 4. Start each service used by your NFS server: service portmap start; service nfs start; 5. Edit the file "/etc/hosts.allow", which is the hosts access control list for allowing access to services on your server from specific hostnames, IP addresses, networks, and FQDNs: <service
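A hedged sketch of the "/etc/hosts.allow" entries from step 5, restricting the NFS-related daemons to a local subnet (the daemon names and network are examples; your package's daemon names may differ):

```
# /etc/hosts.allow -- allow NFS-related daemons from the local subnet only
portmap:  192.168.1.0/255.255.255.0
mountd:   192.168.1.0/255.255.255.0
```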

Hiding poor performance in processor cycles

Solaris remains the only operating system capable of microstate accounting. Other operating systems like Linux, FreeBSD, and Windows cannot accurately report how their kernel spends an application's user mode time and kernel mode time. Although Linux kernel 3.0 claims to support microstate accounting, it only reports approximate time spent as percentages. What that means is applications can hide their poor performance in processor cycles. Even if your kernel doesn't support microstate accounting, you can still investigate your application for poor performance using commands like time, killall, timeout and vmstat. The command time reports three pseudo time values of a running process: real time, user time, and system time. Only real time is valuable, but it isn't actually real time. Run time using your application's absolute path as its argument: time <command> 2>&1; Once your application opens, guide your application to the point you wa
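The three values the post describes can be captured directly; bash's time keyword writes real, user and sys to stderr, so redirecting stderr collects them (sleep stands in for the application under test):

```shell
# Capture the real/user/sys report for a 1-second process.
t=$(bash -c '{ time sleep 1; } 2>&1')
echo "$t"
```

Note that for `sleep`, user and sys stay near zero while real is about one second, which is exactly the gap between wall-clock time and CPU time the post is pointing at.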

Makefile template

Very useful for organizing large programs, but it complicates smaller ones. Reinforces the best programming practices of modularizing, minimizing and recycling code. Makefile Template:

# macro modifiers
<compiler name> = <compiler command>
<compiler flags name> = <list of compiler flags with arguments>
<variable name> = <list of files>

# macros with %controls, modifiers and tokens
<variable case ...>:
    <modifiers> <list of files>

<output case>: <object files>
    <modifiers> -o <program name>

<object case ...>: <list of dependencies>
    <modifiers> <source file>

<command case>:
    <command> <arguments> <list of files>

<clean case>:
    <command> <arguments> <list of files>

Do you have a suggestion about how to improve this blog? Let's talk about it. Contact me at David.Brenner.Jr@Gmail.com or 720
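A hedged, filled-in instance of the template above for a two-file C program (file names are placeholders; note make recipes must be indented with a tab character):

```
# macro modifiers
CC      = gcc
CFLAGS  = -Wall -O2
OBJS    = main.o util.o

# output case
program: $(OBJS)
	$(CC) $(CFLAGS) -o program $(OBJS)

# object case (pattern rule: any .o from its .c)
%.o: %.c util.h
	$(CC) $(CFLAGS) -c $<

# clean case
clean:
	rm -f program $(OBJS)
```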