
12 Principles of Secure Linux Programming

1. Operate with least privilege

  • Hold privileges only while they're required.
  • Drop privileges permanently when they'll never be used again (see the sketch after this list).
  • A privileged program should never exec a shell; interpreted scripts are risky enough that Linux silently ignores the set-user-ID and set-group-ID permission bits on them.
  • Close all unnecessary file descriptors before an exec.
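
For example, dropping privileges permanently in a set-user-ID program might look like the following sketch, assuming a glibc/Linux system where setresuid() and setresgid() are available; the function name is illustrative.

    #define _GNU_SOURCE         /* for setresuid()/setresgid() on glibc */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Drop set-user-ID/set-group-ID privileges permanently by setting the
     * real, effective, and saved IDs to the unprivileged (real) IDs. */
    static void drop_privileges_permanently(void)
    {
        uid_t ruid = getuid();
        gid_t rgid = getgid();

        /* Drop group IDs first: once the user ID is dropped, the process
         * may no longer have permission to change its group IDs. */
        if (setresgid(rgid, rgid, rgid) == -1) {
            perror("setresgid");
            exit(EXIT_FAILURE);
        }
        if (setresuid(ruid, ruid, ruid) == -1) {
            perror("setresuid");
            exit(EXIT_FAILURE);
        }

        /* Verify: a failed privilege drop must never go unnoticed. */
        if (geteuid() != ruid || getegid() != rgid) {
            fprintf(stderr, "privilege drop failed\n");
            exit(EXIT_FAILURE);
        }
    }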

2. Avoid exposing sensitive information

  • Use mlock() to lock sensitive virtual memory pages into RAM so they are never written to the swap area (see the sketch after this list).
  • Prevent core dumps by setting RLIMIT_CORE to zero with setrlimit().
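
A minimal sketch of both points, assuming the secret already lives in a caller-supplied buffer; the function names are illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    /* Keep a buffer of sensitive data (e.g., a passphrase) out of swap
     * and out of core dumps. */
    int protect_secret(char *secret, size_t len)
    {
        struct rlimit rl = { .rlim_cur = 0, .rlim_max = 0 };

        /* Disable core dumps so the secret can't leak into a core file. */
        if (setrlimit(RLIMIT_CORE, &rl) == -1) {
            perror("setrlimit(RLIMIT_CORE)");
            return -1;
        }

        /* Lock the pages holding the secret into RAM so they are never
         * written to the swap area. */
        if (mlock(secret, len) == -1) {
            perror("mlock");
            return -1;
        }
        return 0;
    }

    /* When finished, erase and unlock the memory. */
    void release_secret(char *secret, size_t len)
    {
        memset(secret, 0, len);   /* best-effort wipe */
        munlock(secret, len);
    }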

3. Confine the process

  • Use capabilities and securebits flags whenever possible.
  • Establish a chroot jail to limit the set of directories and files that a program may access (see the sketch after this list).
  • Use a virtual server: UML, Xen, KVM, etc.
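
A minimal chroot-jail sketch, assuming a glibc/Linux system and a jail directory that has already been created and populated; the helper name and parameters are illustrative.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Confine the process to a jail directory. chroot() requires
     * privilege (CAP_SYS_CHROOT), and the process must drop that
     * privilege afterwards or the jail can be escaped. */
    static void enter_jail(const char *jail, uid_t unpriv_uid, gid_t unpriv_gid)
    {
        if (chdir(jail) == -1) {
            perror("chdir");
            exit(EXIT_FAILURE);
        }
        if (chroot(jail) == -1) {
            perror("chroot");
            exit(EXIT_FAILURE);
        }
        /* Drop privileges so the process cannot chroot() back out. */
        if (setresgid(unpriv_gid, unpriv_gid, unpriv_gid) == -1 ||
            setresuid(unpriv_uid, unpriv_uid, unpriv_uid) == -1) {
            perror("drop privileges");
            exit(EXIT_FAILURE);
        }
    }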

4. Beware of signals and race conditions

  • Catch, block, or ignore signals as appropriate to prevent possible security problems; keep handlers minimal and async-signal-safe (see the sketch after this list).
  • Guard against race conditions, such as time-of-check, time-of-use (TOCTOU) gaps between checking a file and using it.
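
A minimal sketch of the signal-handling pattern: keep the handler to a flag assignment, and block the signal around security-sensitive updates. Names are illustrative.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static volatile sig_atomic_t got_sigterm = 0;

    /* Keep the handler minimal: only async-signal-safe operations,
     * here just setting a flag of type sig_atomic_t. */
    static void on_sigterm(int sig)
    {
        (void)sig;
        got_sigterm = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        sigset_t block, old;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigterm;
        sigemptyset(&sa.sa_mask);
        if (sigaction(SIGTERM, &sa, NULL) == -1) {
            perror("sigaction");
            return 1;
        }

        /* Block SIGTERM while updating security-sensitive state so the
         * handler cannot run while that state is half-finished. */
        sigemptyset(&block);
        sigaddset(&block, SIGTERM);
        sigprocmask(SIG_BLOCK, &block, &old);
        /* ... critical section: change credentials, rewrite files, etc. ... */
        sigprocmask(SIG_SETMASK, &old, NULL);

        if (got_sigterm)
            fprintf(stderr, "SIGTERM received; shutting down cleanly\n");
        return 0;
    }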

5. Pitfalls of file operations and file I/O

  • Set the process umask to a value that ensures the process never creates publicly writable files (see the sketch after this list).
  • Use seteuid() or setreuid() to temporarily change process credentials, so that new files do not end up owned by the wrong user.
  • Avoid creating files owned by the program owner (the privileged user ID) unless that ownership is actually required.
  • Never allow other users to write to files that the process uses.
  • Use mkstemp() to create temporary files with unpredictable names.
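
A minimal sketch combining the umask and mkstemp() points; the temporary-file prefix is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void)
    {
        char template[] = "/tmp/secureapp-XXXXXX";   /* name is illustrative */
        int fd;

        /* Ensure nothing this process creates is group- or world-writable. */
        umask(S_IWGRP | S_IWOTH);

        /* mkstemp() creates the file with an unpredictable name, opens it
         * exclusively, and returns a descriptor we can trust. */
        fd = mkstemp(template);
        if (fd == -1) {
            perror("mkstemp");
            exit(EXIT_FAILURE);
        }

        /* ... use fd ... then remove the file when finished. */
        unlink(template);
        close(fd);
        return 0;
    }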

6. Don't trust inputs or the environment

  • Do not assume values of environment variables are reliable; validate them before use (see the sketch after this list).
  • Validate all inputs from untrusted sources.
  • Avoid unreliable assumptions about the process's run-time environment.
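
For example, a numeric setting taken from the environment might be validated like this; the variable name MYAPP_MAX_CONN and the bounds are illustrative.

    #include <errno.h>
    #include <stdlib.h>

    /* Read a numeric limit from the environment without trusting it. */
    static long get_max_conn(void)
    {
        const char *s = getenv("MYAPP_MAX_CONN");
        char *end;
        long val;

        if (s == NULL || *s == '\0')
            return 64;                  /* safe default */

        errno = 0;
        val = strtol(s, &end, 10);
        if (errno != 0 || *end != '\0' || val < 1 || val > 4096)
            return 64;                  /* reject junk or out-of-range values */

        return val;
    }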

7. Beware of injection attacks

  • Filter user input against an allow-list pattern, for example with regular expressions (see the sketch after this list).
  • Encode user input (for example, as Base64) before embedding it in commands, queries, or logs.
  • Never compare credentials as raw strings.
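
A minimal allow-list filter using POSIX regular expressions; the pattern and the field being checked are illustrative.

    #include <regex.h>

    /* Accept only 1-32 characters drawn from [A-Za-z0-9_.-]. */
    int username_is_valid(const char *input)
    {
        regex_t re;
        int ok;

        if (regcomp(&re, "^[A-Za-z0-9_.-]{1,32}$", REG_EXTENDED | REG_NOSUB) != 0)
            return 0;                 /* fail closed if the pattern won't compile */

        ok = (regexec(&re, input, 0, NULL, 0) == 0);
        regfree(&re);
        return ok;
    }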

8. Beware of buffer overruns

  • Do not allow an input value or a copied string to exceed the allocated buffer space.
  • Check lengths explicitly (and prefer size-limited functions such as snprintf(), strnlen(), and fgets()) to prevent buffer overruns (see the sketch after this list).
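
A minimal sketch of an explicit bounds check before copying untrusted input; the buffer size and names are illustrative.

    #include <string.h>

    #define NAME_MAX_LEN 64   /* illustrative buffer size */

    /* Copy untrusted input into a fixed-size buffer only after checking
     * that it fits; otherwise reject it rather than truncate silently. */
    int set_name(char dest[NAME_MAX_LEN], const char *input)
    {
        size_t len = strnlen(input, NAME_MAX_LEN);

        if (len >= NAME_MAX_LEN)      /* no room for the terminating '\0' */
            return -1;

        memcpy(dest, input, len + 1); /* length already verified */
        return 0;
    }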

9. Beware of denial-of-service attacks

  • Minimize the risk and consequences of overload attacks.
    • Perform load throttling.
    • Use resource limits and disk quotas.
  • Employ timeouts for communication with clients (resource limits and client timeouts are shown in the sketch after this list).
  • Perform log throttling.
  • Perform bounds checking on data structures.
  • Design data structures that avoid algorithmic-complexity attacks.
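
A minimal sketch of resource limits plus a per-client receive timeout; the limit and timeout values are illustrative.

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Apply per-process resource limits and a per-client receive timeout. */
    int harden_against_overload(int client_fd)
    {
        struct rlimit fds = { .rlim_cur = 256, .rlim_max = 256 };
        struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };

        /* Cap the number of open descriptors this process can consume. */
        if (setrlimit(RLIMIT_NOFILE, &fds) == -1)
            perror("setrlimit(RLIMIT_NOFILE)");

        /* A slow or stalled client should not tie up the server forever. */
        if (setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1) {
            perror("setsockopt(SO_RCVTIMEO)");
            return -1;
        }
        return 0;
    }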

10. Beware of database attacks

  • Use locking (a mutex between threads, or an advisory file lock between processes) so that only the intended process has access at the right time (see the sketch after this list).
  • Use databases that enforce host-based security policies.
  • Use databases that enforce separation of database ownership and table ownership.
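
As one way to realize the first point between cooperating processes on a single host, an advisory file lock can serialize access to a local database file; the descriptor and function name are illustrative.

    #include <stdio.h>
    #include <sys/file.h>

    /* Serialize access to a local database file with an advisory lock so
     * only one cooperating process modifies it at a time. The descriptor
     * is assumed to refer to the database file, opened elsewhere. */
    int with_db_lock(int db_fd)
    {
        if (flock(db_fd, LOCK_EX) == -1) {   /* blocks until the lock is free */
            perror("flock");
            return -1;
        }

        /* ... read or update the database file ... */

        flock(db_fd, LOCK_UN);
        return 0;
    }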

11. Check return statuses and fail safely

  • Always check the return values of system calls and library functions (see the sketch after this list).
  • Store the return value, then check it before acting on or returning it.
  • If a status is passed back to a caller, the caller must verify it as well.
  • Unexpected situations must cause the program to fail safely: terminate, or drop the client request.
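
A minimal sketch of checking a return status and failing safely by dropping the request; the function and names are illustrative.

    #include <stdio.h>
    #include <string.h>

    /* Check every return value and fail safely: an unexpected error ends
     * the request instead of continuing in an unknown state. */
    int handle_request(FILE *client, const char *reply)
    {
        size_t len = strlen(reply);

        if (fwrite(reply, 1, len, client) != len) {
            /* Store the status, check it, then act on it: drop the request. */
            fprintf(stderr, "short write to client: dropping request\n");
            return -1;
        }
        if (fflush(client) != 0) {
            perror("fflush");
            return -1;
        }
        return 0;
    }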

12. Beware of reverse-engineering attacks

  • Avoid hardcoding sensitive information in string literals, where tools such as strings(1) can recover it; load secrets at run time instead (see the sketch after this list).
  • Generate one-time identifiers at run time rather than embedding fixed ones in the binary.
  • Salt those identifiers with unique hardware device addresses.
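
A minimal sketch of loading a secret at run time instead of hardcoding it in a string literal; the path is hypothetical and should be readable only by the service account.

    #include <stdio.h>
    #include <string.h>

    /* Load a secret at run time instead of embedding it in a string
     * literal that strings(1) or a disassembler would reveal. */
    int load_secret(char *buf, size_t buflen)
    {
        FILE *f = fopen("/etc/myapp/secret.key", "r");   /* hypothetical path */
        if (f == NULL)
            return -1;

        if (fgets(buf, (int)buflen, f) == NULL) {
            fclose(f);
            return -1;
        }
        fclose(f);

        buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
        return 0;
    }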

Do you have a suggestion about how to improve this blog? Let's talk about it. Contact me at David.Brenner.Jr@Gmail.com or 720-584-5229.
