Fibre Channel over Ethernet (FCoE) products are starting to appear in the marketplace. But observers of the storage marketplace disagree about when FCoE will begin to see significant adoption, as well as when FCoE's main value proposition -- lowering costs by consolidating networks -- will actually be realized.
Also being debated is when the price of 10 Gbps FCoE will fall below the current cost of 8 Gbps Fibre Channel. Some observers say that's years off, while officials from vendors such as Cisco claim that FCoE is actually less expensive than 8 Gbit Fibre Channel today.
One thing is clear: The typically conservative storage market is currently kicking the tires of FCoE. And when companies do that with a new storage technology, storage administrators start asking lots of questions about it.
In this FAQ guide, Greg Schulz, founder and senior analyst with Storage IO Group, and a frequent speaker at storage conferences, lists some of the questions about FCoE he is hearing most frequently during his travels.
Below, you can read his answers to these frequently asked questions or download a podcast of the Q&A.
Table of contents:
What is FCoE?
Where does FCoE fit in the storage marketplace?
What is Data Center Ethernet?
Do I need FCoE or 10 Gigabit Ethernet?
How does FCoE compare to iSCSI?
What is the status of FCoE standards?
When will FCoE products become available?
Fibre Channel over Ethernet (FCoE) picks up everything from Fibre Channel except the cabling and the physical interface: all the upper-level protocols, all the data integrity checks and all the flow control, and grafts them onto Ethernet. Not onto IP, not onto TCP/IP -- it takes Fibre Channel and places it right on top of an enhanced Ethernet.
While FCoE is based on Ethernet, it's an enhanced Ethernet that has been optimized for low latency, quality of service, guaranteed delivery and other functionality traditionally associated with a channel-type interface like parallel SCSI, Fibre Channel, FICON or ESCON.
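The layering described above -- a Fibre Channel frame carried directly in an Ethernet frame, with no IP or TCP in between -- can be sketched in a few lines of Python. The FCoE EtherType 0x8906 is the real assigned value; everything else here (the MAC addresses, the single-byte start/end-of-frame placeholders, the dummy FC payload) is illustrative and simplified relative to the actual FC-BB-5 frame layout:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE (0x0800 would be IPv4)

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame directly in an Ethernet frame.

    Simplified sketch: the real FCoE encapsulation adds version/reserved
    fields and proper SOF/EOF delimiters around the FC frame; they are
    collapsed to single placeholder bytes here.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    sof, eof = b"\x2d", b"\x41"  # placeholder start/end-of-frame delimiters
    return eth_header + sof + fc_frame + eof

# A dummy FC frame: 24-byte FC header plus an illustrative payload
fc = bytes(24) + b"SCSI command payload"
frame = build_fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",
                         b"\x00\x1b\x21\x00\x00\x02", fc)
assert frame[12:14] == b"\x89\x06"  # Fibre Channel rides on Ethernet, not IP
```

The point of the sketch is what is absent: there is no IP header and no TCP header between the Ethernet header and the Fibre Channel frame, which is exactly what distinguishes FCoE from iSCSI.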
Today, FCoE is for early adopters. It's for those kicking the tires, for those looking to get an early jump, test the technology and address particular pain points. Moving forward, it will sit at the high end of the market in the data center, because FCoE is not a wide area technology -- it's not a long-distance interface.
In the not-so-distant future, FCoE will be used where performance, scalability and low latency are paramount. It will be used for getting a converged enhanced Ethernet into a simplified environment, much where you would find Fibre Channel today.
Data Center Ethernet, or DCE, is the acronym that Cisco has been using. It refers to an enhanced, optimized Ethernet that has Fibre Channel grafted on top of it. Then there's Converged Enhanced Ethernet, or Converged Enterprise Ethernet -- the term used by IBM, Brocade and essentially everybody else except Cisco.
That name, Converged Enhanced Ethernet, conveys that it's an enhanced Ethernet that is more than just your standard 10/100 or 1 Gbit or 10 Gigabit Ethernet. The reason that it's enhanced is that it does have the lower latency capabilities, the quality-of-service capabilities, the predictability and some other optimization capabilities to support premium-type traffic. Premium also means it's going to have a higher price.
What will happen over time is that the two camps will come together and settle on one standardized approach.
It depends on your environment and your market. Sure, 10 Gigabit Ethernet exists today. The prices for the adapters have come down dramatically. You can now get a 10 Gigabit Ethernet adapter with optics -- that's the transceivers -- maybe with cable, for around $1,000 to $1,200. That's the price you would normally pay for Fibre Channel adapters. Those prices continue to come down.
So why is FCoE needed in different markets? If you go downmarket into the SMB space, it's blurred as to whether the need is for 10 Gigabit Ethernet with iSCSI or 10 Gigabit Ethernet with NAS, versus, say, 4 Gbit or 8 Gbit Fibre Channel, let alone FCoE.
Going forward, part of that question gets resolved in that a single adapter gets put into the server. That single adapter is an enhanced Ethernet adapter that has the ability to run both Fibre Channel stacks for talking to storage, as well as Ethernet-based stacks for supporting things like TCP/IP for iSCSI, for NFS, for CIFS, for HTTP, as well as other activities, all on one single adapter. For redundancy purposes you put a pair of them in there.
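The single-adapter idea above comes down to steering each incoming frame to the right protocol stack. This hypothetical sketch shows one way a converged adapter driver might dispatch on EtherType; the two EtherType values are real assignments, but the stack names and dispatch logic are purely illustrative:

```python
# Hypothetical dispatch in a converged adapter driver: one port,
# two protocol stacks, selected per frame by EtherType.
ETHERTYPE_IPV4 = 0x0800   # TCP/IP traffic: iSCSI, NFS, CIFS, HTTP
ETHERTYPE_FCOE = 0x8906   # encapsulated Fibre Channel traffic

def dispatch(frame: bytes) -> str:
    """Return which stack should handle this Ethernet frame."""
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == ETHERTYPE_FCOE:
        return "fc_stack"   # hand the FC frame to the Fibre Channel driver
    if ethertype == ETHERTYPE_IPV4:
        return "ip_stack"   # hand off to the normal TCP/IP stack
    return "drop"           # anything else is ignored in this sketch
```

With a pair of such adapters per server for redundancy, storage and network traffic share the same physical ports while still reaching their separate driver stacks.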
Getting back to the speed aspect of FCoE, where is it needed? Certainly more in the enterprise where there's more scaling. But keep this in mind: Normally, when speeds and feeds are talked about, whether it's 10 Gigabit Ethernet for iSCSI, or 10 Gbit Fibre Channel in the case of FCoE, most of the focus centers around bandwidth and around throughput.
This then leads to the discussion of, 'Hey, my applications don't require that much bandwidth. I don't need that.' That's a valid concern but flip it around the other way. Most applications do have a concern about response time. They do have a concern about latency or a concern about the number of I/Os, the number of transactions, the number of files, the number of videos or the number of messages processed per second.
So, while bandwidth may not be the concern, there certainly is the benefit of lower response time, lower latency, as well as supporting more IOPS. It gets even more interesting when you start looking at consolidated environments, for example, using server virtualization to aggregate multiple physical servers. In the past, you might have had 10 servers, each running at just under 100 megabytes per second -- traffic that fits within a single 1 Gigabit Ethernet link, let alone a 2 Gbit, 4 Gbit or 8 Gbit Fibre Channel. But when you aggregate 10 of those systems together onto one physical server, it's straight math. All of a sudden it's adding up to 8 Gbit, 10 Gbit or more. So, we start to see the aggregation play, whether it be on megabytes per second throughput, whether it's on IOPS, transactions, files, messages processed or on lower latency.
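The straight math of that consolidation example works out as follows (the per-server rate is the round number used above; 1 byte = 8 bits):

```python
# Consolidation arithmetic: 10 lightly loaded servers onto one host.
servers = 10
per_server_mb_s = 100                    # ~100 MB/s of storage traffic each
total_mb_s = servers * per_server_mb_s   # 1,000 MB/s on the consolidated host
total_gbit_s = total_mb_s * 8 / 1000     # convert MB/s to Gbit/s
print(total_gbit_s)                      # prints 8.0
```

Each server alone fits in a 1 Gigabit pipe, but the consolidated host needs roughly 8 Gbit/s of sustained throughput, which is exactly the territory of 8 Gbit Fibre Channel or 10 Gbit FCoE.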
The other piece to that is this: Even though networks have gotten faster, storage has gotten larger with more processing power. We've got more data to process now. Going forward, we're going to need those capabilities.
First, there's a perception that iSCSI will be able to do everything that FCoE can do and vice versa. The reality is that iSCSI's fundamental value proposition, at least today, has been about ease of use and low cost. Whereas the fundamental value proposition of FCoE is not low cost, but a convergence of multiple technologies -- Fibre Channel, Ethernet, and the Fibre Channel upper-level protocol stacks, such as SCSI and FICON, coexisting on an Ethernet without the need for IP.
Going forward, FCoE is positioned more toward the upper part of market where iSCSI really hasn't seen much adoption. Part of that has to do with the fact that for iSCSI to play in the upper market requires extra hardware and extra capabilities, which play counter to its low-cost value proposition.
Now let's flip-flop that. Where iSCSI continues to find increasing market share is in the lower part of the market -- the midmarket to lower part of the SMBs, maybe into SOHO, even the upper parts of SMB; certainly making some inroads into different parts of the enterprise. Likewise, FCoE will trickle down from the enterprise into the upper reaches of the SMB market.
That midmarket is the high end of where iSCSI plays and the low end of where FCoE plays. That's where you'll hear all the noise, that's the skirmish battleground between FCoE and iSCSI. But the reality is that FCoE -- at least in the near term -- because of its cost, because of its premium price, will not be able to penetrate down into the lower portions of the SMB, which is where iSCSI is strong.
Likewise, for iSCSI to move upmarket, it has to come in at a higher price/capability. But that plays counter to its value proposition.
Both iSCSI and FCoE will have their respective strengths, their respective home fields, so to speak. There will be border skirmishes, but Fibre Channel will, at least for the next couple of years, continue to be popular in environments that are risk-averse, that don't want to jump to FCoE, that want to take more of a wait-and-see approach and go from 4 Gbit to 8 Gbit to 16 Gbit Fibre Channel, and maybe even beyond that, depending on their comfort level.
So in the lower parts of the market, the SMBs, some will jump to iSCSI, some will jump to FCoE and some may continue on to the next version of Fibre Channel. FCoE is for the higher end of the market. It picks up where Fibre Channel is currently at and moves forward.
However, the piece that needs to be kept in perspective is NAS -- NFS and CIFS/Windows file serving. That market continues to grow at the low end, from SOHO through the mid-sized SMB segment, and even in the enterprise. The reason to keep an eye on NFS- and CIFS-based NAS is that they all rely on IP and on Ethernet. Since IP, Ethernet and NAS can run on a converged Ethernet just as they run on a regular Ethernet, there are some interesting benefits there.
The Ethernet piece is being worked on by the IEEE, and this piece includes the different enhancements to this premium type of Ethernet, which supports Quality of Service, which supports the lower latency and supports other enhancements, as well. These are pretty well established.
Then there are the Fibre Channel pieces, which are still being finalized and should be up for vote anytime now. Some of those pieces have been voted on; some are about to be voted on. I would say the standards are pretty far along, pretty well solidified. Some finalization needs to be done, but then, as with any standard, once it's approved, there is a lag time -- a latency period -- between when everything is approved and voted on and when actual adoption appears.
So there are early products in the market, early proofs-of-concept, early technology demonstrators. There are early capabilities out there for those who want to kick the tires, try things out and see where they fit, for those who like to be on the bleeding edge of adoption. The products are maturing, and interoperability is getting better. But where this technology is really positioned is the higher end of the data center, the premium-type market. In that market, the only way they go bleeding edge is if there's a blood bank next door, because they're very, very risk-averse.
It's similar to when Fibre Channel and FICON got rolled out. There was a lag time between when the products were available, when the whole ecosystem was supporting that technology and when mass adoption occurred.
The products are starting to appear in the marketplace. Through 2009, we'll see more and more uptake, then in late 2009 and more into 2010, we'll see a lot of activity in that space.
What's out there today are some early converged adapters that speak FCoE. There are some early switches. There are even some systems from a storage standpoint that support FCoE for basic interoperability-type testing. So, some of the basic building block pieces are there.
What's missing is a more robust ecosystem where you have multiple switches from multiple vendors all working together, routers from different vendors working with each other, as well as routers from different vendors working with different vendors' enhanced Ethernet products, their regular Ethernet products, their regular Fibre Channel products and their long-distance products. What's missing are robust adapters that speak both Fibre Channel and regular Ethernet on a converged adapter to support mixed workloads. What's really missing are storage systems, on a broad basis, that can speak FCoE in addition to native Fibre Channel, traditional iSCSI over Ethernet and NAS.
That combined with all the management tools will be what glues the whole ecosystem together. There's still some work to be done, but things are looking very promising.
This was first published in October 2008