
Hiding poor performance in processor cycles

Solaris remains the only operating system capable of microstate accounting. Other operating systems such as Linux, FreeBSD, and Windows cannot accurately report how the kernel spends an application's user mode time and kernel mode time. Although the Linux 3.0 kernel claims to support microstate accounting, it only reports approximate time spent as percentages. That means applications can hide their poor performance in processor cycles.

Even if your kernel doesn't support microstate accounting, you can still investigate your application for poor performance using commands like time, killall, timeout, and vmstat.

The command time reports three pseudo time values for a running process: real time, user time, and system time. Only real time is valuable, but it isn't actually real time. Run time with your application's absolute path as its argument:

time <command> 2>&1;

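A minimal sketch, assuming a hypothetical application installed at /usr/bin/myapp; the numbers are illustrative only, and the lines starting with # annotate the kind of output to expect:

time /usr/bin/myapp 2>&1;
# real    0m42.318s   <- wall-clock time from launch to exit
# user    0m12.040s   <- CPU time spent in user mode
# sys     0m3.210s    <- CPU time spent in kernel mode
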
Once your application opens, guide it to the point you want to check, then kill it by sending it SIGTERM or, if it ignores that, SIGKILL. (You could also start using it as you normally would, then kill it when it starts hanging or slowing down.) Run killall to kill all processes running under your application's name:

killall -s <signal> <process name>;

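As a sketch, assuming the hypothetical /usr/bin/myapp from above runs under the process name myapp, a graceful termination looks like this:

killall -s SIGTERM myapp;

If the process ignores SIGTERM, repeat the command with SIGKILL.
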
Run your application again, but this time pass your application's real time as the argument to the command timeout, along with "vmstat 1". The command timeout lets vmstat report once every second, then stops it after the amount of real time has elapsed. Guide your application to the same point you checked before. Running vmstat in the background means it displays CPU information every second leading up to your checkpoint, including the processing of the kill signal. Here's the command to run:

timeout <real time> vmstat 1 2>&1 & <command> 2>&1;

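Putting it together as a sketch, again assuming the hypothetical myapp and that the first run reported roughly 42 seconds of real time, the sampler runs in the background while the application runs in the foreground:

timeout 42 vmstat 1 2>&1 & /usr/bin/myapp 2>&1;

The us, sy, id, and wa columns of the vmstat output show how the CPU split its time each second leading up to your checkpoint.
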
Do you have a suggestion about how to improve this blog? Let's talk about it. Contact me at David.Brenner.Jr@Gmail.com or 720-584-5229.
