OS File System Comparison

A file system is the set of structures and routines an operating system uses to read, write, and execute files in directories, including how files and directories are named and referenced. The choice of file system affects the usability and stability of an operating system and its programs (utilities and applications). When people talk about a Windows-type operating system, they usually assume it is built on NTFS3 or NTFS5; a Linux-type operating system on ext2, ext3, or ext4; a BSD-type operating system on UFS or UFS2; and a Solaris-type operating system on UFS or ZFS, unless stated otherwise. Here's a not-so-interesting comparison:
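To find out which of these file systems a given machine actually uses, you can ask the kernel. Below is a minimal Python sketch of mine (not a standard utility) that parses /proc/self/mounts on a Linux host to report the file system type backing a path; the fs_type helper and its output format are my own naming:

    # fs_type.py -- which file system backs a given path?
    # Minimal sketch: assumes a Linux host where /proc/self/mounts exists.
    # Octal escapes (e.g. \040 for spaces) in mount points are ignored here.
    import os
    import sys

    def fs_type(path):
        path = os.path.realpath(path)
        best_mount, best_type = "", "unknown"
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _device, mount_point, fstype = line.split()[:3]
                # The longest mount point that prefixes the path wins.
                covers = (path == mount_point or
                          path.startswith(mount_point.rstrip("/") + "/"))
                if covers and len(mount_point) > len(best_mount):
                    best_mount, best_type = mount_point, fstype
        return best_mount, best_type

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "/"
        mount_point, fstype = fs_type(target)
        print(f"{target}: mounted at {mount_point} as {fstype}")

Running python3 fs_type.py /home on a typical Linux box prints something like "/home: mounted at / as ext4".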

File System | OS Type             | Max. File      | Max. Vol. | Permissions | Encryption | Compression | Recoverable
BtrFS       | Linux               | 8-16 EiB       | 16 EiB    | POSIX       | Yes        | Yes         | Yes
EXT2        | Linux, BSD          | 16 GB - 2 TB   | 2-32 TB   | POSIX, SVR4 | No         | No          | Yes
EXT3        | Linux, BSD          | 16 GiB - 2 TiB | 2-16 TiB  | POSIX, SVR4 | No         | No          | No
EXT4        | Linux, BSD          | 16 TiB         | 1 EB      | POSIX, SVR4 | No         | No          | No
FAT32       | Windows, Linux      | 4 GB           | 2 TB      | None        | No         | No          | Yes
HFS         | Linux, Mac OSX      | 2 GB           | 2 TB      | None, ACL   | No         | No          | No
HFS+        | Linux, Mac OSX      | 8 EB           | 8 EB      | POSIX       | Yes        | Yes         | Yes
JFS         | Linux, Unix         | 4 PB           | 32 PB     | SVR4        | No         | Yes         | No
NTFS3       | Windows, Linux, BSD | 2-16 TB        | 256 TB    | None, ACL   | Yes        | Yes         | Yes
NTFS5       | Windows             | 16 TB          | 64-256 TB | ACL         | Yes        | Yes         | No
ReiserFS    | Linux, BSD          | 8 TiB - 1 EiB  | 16 TiB    | POSIX, SVR4 | No         | No          | No
UFS2        | BSD, Solaris        | 8 EB           | 8 EB      | SVR4        | No         | No          | Yes
XFS         | Linux, Unix         | 8 EiB          | 16 EiB    | SVR4        | No         | No          | No
ZFS         | BSD, Solaris        | 16 EB          | 16 EB     | SVR4        | Yes        | Yes         | Yes
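The Max. File column can also be checked empirically. Here's a hedged Python sketch of mine (an illustration, not vendor documentation) that creates a sparse temporary file one byte past the 4 GiB mark to test whether the file system under a directory permits files larger than FAT32's ceiling; the exact error raised at the boundary can vary by OS:

    # probe_4gib.py -- does the file system under a directory allow files
    # larger than FAT32's 4 GB ceiling? Creates a sparse temporary file one
    # byte past 4 GiB, then deletes it; sparse files cost almost no space on
    # the file systems that support them.
    import os
    import tempfile

    FOUR_GIB = 4 * 1024 ** 3

    def supports_large_files(directory="."):
        fd, path = tempfile.mkstemp(dir=directory)
        try:
            os.lseek(fd, FOUR_GIB, os.SEEK_SET)  # jump past the 4 GiB mark
            os.write(fd, b"\0")                  # FAT32 rejects this (EFBIG)
            return True
        except OSError:
            return False
        finally:
            os.close(fd)
            os.remove(path)

    if __name__ == "__main__":
        print("files > 4 GiB supported:", supports_large_files())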

Do you have a suggestion about how to improve this blog? Let's talk about it. Contact me at David.Brenner.Jr@Gmail.com or 720-584-5229.
