NFS or iSCSI – The debate is (maybe) over?

If you were to search the Internet for help choosing NFS or iSCSI as your networked storage protocol, you would find dozens of articles that lay out the differences between the two, and thus the benefits of choosing one over the other. No one says “use NFS” or “use iSCSI.” It is left to the reader to work out the best route to take.

That changes today, kind of.

One thing I need to get out of the way is that, when it comes to storage, LPS Integration follows the VMware rule: Fibre Channel first, NFS second, and iSCSI third. This is a sound overall storage philosophy; however, there are many reasons why it does not work for everyone. Installing a brand new Fibre Channel infrastructure can be expensive, depending upon the size of the environment. Host Bus Adapters (two ports per host), two or more switches (even if 1-U switches are not expensive), and physical fiber runs add up in large environments. Fibre Channel is also a block-only protocol, while some may want the flexibility of using file protocols for shared workloads. Anyway, we have established that Fibre Channel can be expensive to implement.

The alternative is IP-networked storage: NFS or iSCSI. Since these protocols can use an existing IP network, they are typically less expensive to deploy, and they are often what decision makers turn to (and salespeople settle for) when cost becomes a major sticking point in a storage purchase. However, if the network is full of antiquated switches – anyone still have 100Mbps switches? We see them frequently – the cost will be equivalent to, or more than, Fibre Channel. The IP standard today is 10Gbps, with 40Gbps becoming the new standard in the near future, so if the network is full of 1Gbps switches, new switches will likely be part of the IP storage purchase.
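
A quick back-of-the-envelope shows why those antiquated switches disqualify a network for IP storage. Here is the raw arithmetic, ignoring protocol overhead (which only makes the slow case worse):

```python
# Back-of-the-envelope: time to move 1 TB of data at line rate on three
# generations of Ethernet, ignoring protocol overhead.

TERABYTE_BITS = 1 * 10**12 * 8  # 1 TB expressed in bits

for name, bits_per_second in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    hours = TERABYTE_BITS / bits_per_second / 3600
    print(f"{name:>8}: {hours:5.1f} hours")

# Prints roughly:
#  100 Mbps:  22.2 hours
#    1 Gbps:   2.2 hours
#   10 Gbps:   0.2 hours (about 13 minutes)
```

Nearly a full day to move a terabyte on a 100Mbps network versus minutes at 10Gbps. Old switches and serious storage traffic do not mix.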

This is a great place to mention the Fibre Channel over Ethernet (FCoE) protocol, which carries Fibre Channel frames directly over Ethernet rather than over IP. It is similar to Fibre Channel in its cost to implement, since special Converged Network Adapters are needed, switches must have FCoE capability, and storage must have special FCoE adapters. Perhaps a discussion for another blog post.

iSCSI is the direct equivalent of Fibre Channel in the sense that it does the same job: delivering block storage over a network. However, it is not a simple protocol. It encapsulates the SCSI storage protocol within TCP/IP. Encapsulation, in this case, means overhead. A lot of work has to happen at the source and target in a storage conversation to find the bits of data being read or written and strip them out of the packet. Precious CPU cycles and memory make this happen, unless a TOE (TCP/IP Offload Engine) is purchased. This makes iSCSI the more complex of the two IP storage protocols. Sessions to iSCSI storage are usually from a single initiator (host or client) unless some type of coordination is used, such as Microsoft failover clustering, SCSI reservations, or VMware’s Atomic Test and Set (ATS) locking. Otherwise, data could become corrupted if multiple clients write at once.
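
To make the “encapsulation means overhead” point concrete, here is a minimal Python sketch, based on the PDU layout in RFC 7143, of parsing the fixed 48-byte Basic Header Segment that wraps every SCSI command on the wire. Both initiator and target do this unpacking for every read and write, on top of normal TCP/IP processing:

```python
import struct

# Minimal sketch: parse the iSCSI Basic Header Segment (BHS), the fixed
# 48-byte header wrapping every SCSI command crossing the wire (RFC 7143).
# This is the per-PDU work that burns CPU cycles unless offloaded.

def parse_bhs(pdu: bytes) -> dict:
    if len(pdu) < 48:
        raise ValueError("iSCSI BHS is always 48 bytes")
    opcode = pdu[0] & 0x3F                           # low 6 bits: PDU type
    total_ahs_len = pdu[4]                           # extra headers, in 4-byte words
    data_seg_len = int.from_bytes(pdu[5:8], "big")   # payload length in bytes
    initiator_task_tag = struct.unpack(">I", pdu[16:20])[0]
    return {
        "opcode": opcode,
        "ahs_bytes": total_ahs_len * 4,
        "data_bytes": data_seg_len,
        "task_tag": initiator_task_tag,
    }
```

Every block read or write is buried under a header like this, which is exactly the work a TOE card exists to take off the CPU.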

From an administrative standpoint, iSCSI is also a little more difficult to implement. IQNs for hosts and storage, multiple VLANs (a best practice), iSCSI service configuration, LUNs, masking, and so on make for a time-consuming setup when connecting to most block storage.
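
As a rough sketch of the host-side steps alone, on a Linux host with the open-iscsi tools installed (the portal address and target IQN below are hypothetical placeholders), discovery and login look like this – and this is after the array-side work of creating LUNs, masking, and VLANs is already done:

```python
import subprocess

# Rough sketch of the host-side iSCSI steps on Linux, assuming the
# open-iscsi package is installed. The portal IP and target IQN are
# hypothetical placeholders; the array side (LUNs, masking, VLANs)
# still has to be configured separately.

PORTAL = "192.168.10.50:3260"                    # hypothetical array portal
TARGET = "iqn.2004-04.com.example:array1.lun0"   # hypothetical target IQN

# 1. Discover the targets offered by the storage portal.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# 2. Log in to the discovered target; a new block device appears
#    once the session is established, ready to be partitioned.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)
```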

NFS, on the other hand, is a storage protocol in its own right. There is no SCSI command set to encapsulate and strip back out; clients simply send file operations over IP. This makes NFS the more efficient of the two. It is also a shared protocol, allowing any client granted access by its IP address to consume (read and write) the shared data.

NFS is also simple to administer. Create a file system (the steps vary by storage vendor), export it, and grant access by client IP. Consume. Simple. Plug-ins for NFS from most storage vendors? Check. VMware vSphere happily running VMs from NFS datastores? Check. XenServer support? Yep. High performance? Yep.
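
To show how short that workflow really is, here is a minimal sketch on plain Linux; the export path, subnet, and server address are hypothetical, and a vendor array wraps these same steps in its own interface:

```python
import os
import subprocess

# Minimal sketch of the entire NFS workflow on plain Linux (run as root).
# Export path, subnet, and server address are hypothetical placeholders.

# --- Server side: one line in /etc/exports grants a subnet read/write ---
with open("/etc/exports", "a") as exports:
    exports.write("/srv/vmstore 192.168.10.0/24(rw,sync)\n")
subprocess.run(["exportfs", "-ra"], check=True)  # re-read the export table

# --- Client side: make a mount point and mount the share ---
os.makedirs("/mnt/vmstore", exist_ok=True)
subprocess.run(
    ["mount", "-t", "nfs", "192.168.10.20:/srv/vmstore", "/mnt/vmstore"],
    check=True,
)
```

Compare that to the IQN, LUN, and masking dance above: the entire export-and-mount cycle is two commands and one line of configuration.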

So, it is a no-brainer that NFS is the route to take when it comes to IP storage, right? Not necessarily.

Just like everything else in the technology world, it depends. Anything needing raw block access to storage, such as boot-from-SAN, has to use a block storage protocol. But if we’re having the NFS-versus-iSCSI conversation, it is likely the storage was purchased solely for accessing data, not for booting from SAN or other block activity. Also, Windows doesn’t natively support NFS; a simple client installation allows NFS mounts to be accessed, though. Hyper-V does not support NFS for housing VMs at all. Performance-wise, NFS also degrades a little when accessing millions of small files.

So it is solved then! NFS, always. (Except if you need boot-from-SAN, or run Hyper-V, or Windows, or access millions of small files.)

If you want to geek out on the differences between these two IP storage protocols and take a deeper dive into the mysteries of IP storage, take a look at this doctorate-level comparison:

“A Performance Comparison of NFS and iSCSI for IP-Networked Storage”: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.545.7362&rep=rep1&type=pdf
