
Infonetics: Systems vendors’ in-house work tips 40G, 100G optical transceiver market/Annual Technology Forecast 2011

NOVEMBER 1, 2010 — Much has been made of the fact that systems vendors will rely on in-house expertise for the first generation of serial 100-Gbps technology. In its newly updated 10G/40G/100G Optical Transceivers Market Size and Forecast, market research firm Infonetics Research says that this factor will significantly curtail the revenue opportunity for independent 40G/100G module vendors.

Within the report, Infonetics Research increased its forecast for the combined 10G, 40G, and 100G transceiver and transponder market, expecting it to grow to $2.14 billion worldwide in 2014. Within the 40G/100G realm, equipment vendors who offer 100G technology first will take the majority of long-term revenue as well as more 40G contracts, Infonetics asserts. This is because carriers are making vendor decisions based on a dual evaluation of 40G and 100G technology.

“Network equipment manufacturers — such as Alcatel-Lucent, Ciena, Cisco, Huawei, and Infinera — are supplying an increasing share of 40G long-reach ports and will ship most of the 100G ports through 2014, posing a competitive challenge to component suppliers in the market,” states Andrew Schmitt, Infonetics Research’s directing analyst for optical. “The reality of this market requires that optical component vendors measure twice and cut once when making investments in this area or face a negative ROI.”

Other optical transceiver market highlights Infonetics cites include:

An increase in WDM equipment spending by carriers around the world resulted in rapid revenue growth for colored optical interfaces in the first half of 2010.
Pluggable tunable transceivers in the XFP format, already popular in ROADM-based networks, allow carriers to add tunability to a wider range of devices — including IP/Ethernet edge switches and routers, and eventually CMTS headends, FTTH OLTs, and DSLAMs. The tunable XFP transceiver market is forecast to grow at a 117% compound annual growth rate (CAGR) from 2009 to 2014 (see the arithmetic sketch after this list).
Infonetics expects 10G SFP+ transceiver revenue to more than double and unit shipments to more than triple from 2010 to 2014, led by modules for 10 Gigabit Ethernet (10GbE) and 8G and 16G Fibre Channel applications.
The shift to more compact form factors with fewer electronics (SFP+ and XFP) and lower-cost designs will rapidly push down unit pricing for 10G components, resulting in flat revenue on increasing volume.
The ramp-up of 100G technology is expected to be faster than that of 40G, particularly when 100G coherent PM-QPSK becomes available at a cost no more than double that of 40G coherent PM-QPSK, and when lower cost LR-4 modules hit the market (2011 to 2013).
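For readers who like to check the math, here’s a quick back-of-the-envelope sketch of what those figures imply. The multipliers are illustrative only; the report’s underlying revenue bases aren’t disclosed here.

```python
# Back-of-the-envelope checks on the forecast figures above.

# 1) A 117% CAGR from 2009 to 2014 compounds over five years:
cagr = 1.17
years = 2014 - 2009
multiplier = (1 + cagr) ** years
print(f"Tunable XFP revenue multiplier, 2009-2014: {multiplier:.0f}x")  # ~48x

# 2) SFP+ revenue more than doubles while units more than triple, which
#    implies the average selling price falls by roughly a third:
asp_ratio = 2.0 / 3.0  # revenue multiplier / unit multiplier
print(f"Implied 2014 SFP+ ASP vs 2010: {asp_ratio:.0%}")  # ~67%

# 3) 100G at no more than 2x the cost of 40G means a lower cost per bit,
#    which is the trigger Infonetics cites for the faster 100G ramp:
cost_per_bit_ratio = 2.0 / (100 / 40)  # relative cost / relative capacity
print(f"100G cost per bit vs 40G: {cost_per_bit_ratio:.0%}")  # 80%
```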

Infonetics’ 10G/40G/100G Optical Transceivers report is designed to provide in-depth analysis, market size, and forecasts through 2014 for manufacturer revenue and units shipped. The report analyzes the market by module, reach, wavelength, and form factor. The report is the first to use end-market projections of carrier preferences and equipment shipments to drive component forecasts, Infonetics asserts. These forecasts include shipments of 40G and 100G ports designed and manufactured in-house by network equipment manufacturers (NEMs).

Annual Technology Forecast 2011

These technologies will drive advances in test and measurement, FTTX, equipment design, networking, and MSO optics in 2011.

Optical communications encompasses a variety of technologies. However, one technology clearly received the lion’s share of attention in 2010: 100 Gbps. Equipment designers were developing it, test equipment manufacturers offered new ways to evaluate it, and both carrier and enterprise network managers began to evaluate the economics of applying the technology to their infrastructures.

Looking forward to 2011, 100 Gbps will remain a hot topic of discussion because several of the questions surrounding the technology remain unanswered. But transmitting information at the fastest speed possible isn’t the only way to meet the needs of a given application, particularly when cost enters the equation. Therefore, we can expect next year to see several other technologies either emerge as important development and application drivers or build on the influence they already exert.

What makes a technology influential? How well users perceive that it meets (or could potentially meet) their communications requirements is the most obvious criterion. How many vendors are developing the technology also is a clear sign of influence. And general buzz can’t be discounted, either.

This article reviews the technologies that we at Lightwave expect will drive several of the most important niches in the optical communications space (all of which we follow in their own topic centers at www.lightwaveonline.com) in 2011.

Our evaluations derive from the research behind the reporting that has graced this publication and our website this year, info jotted in notebooks that is only now seeing the light of day, and a dash of gut instinct. (The staff crystal ball is in the shop for repairs.)

A downturn in the overall economy obviously could affect the technologies we’re about to discuss–boosting interest in some for their cost-saving features and hindering others due to cutbacks in R&D–but with that caveat aside, let’s review what we’ll all be discussing next year.
Test and measurement speeds up

As we just mentioned, many of the technology advances witnessed this year in the test and measurement space derived from 100-Gbps technology development. In particular, optical modulation analyzers arrived that offered the ability to measure coherent signals.

In 2011, we can expect further refinements in these instruments. One obvious avenue is optimizing their use to support development of 400 Gbps and 1 Tbps technology. Most analyzer vendors assert that their offerings are ready for the task. We’ll soon see if this is true.

But these analyzers don’t operate in a vacuum or meet all 100G development test requirements. Oscilloscopes, bit error rate testers, and other instruments also play important roles and will be expected to keep pace with 400-Gbps and 1-Tbps requirements. Work is underway to beef up capabilities in these areas. For example, real-time oscilloscopes are now available that support bandwidths of 45 GHz. We can expect to see such bandwidth offered on multiple channels from scopes in 2011.
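To see why bandwidth figures like 45 GHz matter, consider the symbol-rate arithmetic behind coherent formats. The sketch below uses common rules of thumb, not any particular vendor’s specifications.

```python
# Rough symbol-rate arithmetic for coherent signals. Exact instrument
# bandwidth requirements depend on pulse shaping and receiver design.

def baud_rate(line_rate_gbps, polarizations, bits_per_symbol):
    """Symbol rate in Gbaud for a polarization-multiplexed format."""
    return line_rate_gbps / (polarizations * bits_per_symbol)

# 100G PM-QPSK: ~112 Gbps on the line after OTU4 framing and FEC overhead,
# 2 polarizations x 2 bits/symbol (QPSK) -> ~28 Gbaud.
print(baud_rate(112, 2, 2))  # 28.0

# A hypothetical 400G built from two 200G PM-QPSK carriers would run each
# carrier at roughly twice that symbol rate -- territory where 45 GHz of
# real-time bandwidth starts to matter.
print(baud_rate(224, 2, 2))  # 56.0
```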

Of course, as 100G moves from development to commercialization, high-speed technology test requirements will migrate from the lab to the production floor. We’ve already seen instruments introduced in the latter part of this year that span the applications of technology development, manufacturing, and certification. This trend toward multi-use instruments, frequently enabled via modular platform architectures, should continue in 2011.

And that just covers line-side coherent applications. The ratification of the 40- and 100-Gigabit Ethernet standards also heralds new directions in technology development, primarily based either on WDM or parallel delivery systems. Fortunately, the hurdle to efficient test instrumentation here isn’t speed or exotic modulation formats. Rather, it’s offering efficient measurement of the performance of multiple 10G channels. The trick for test instrument designers, therefore, will be less about the ability to meet the data rate aspects of 40- and 100-Gigabit Ethernet than about doing so economically. Again, instruments are already on the market, but they could be less expensive.
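The lane structures behind those standards show where the test burden lies; per IEEE 802.3ba, testing one port means testing every lane. A quick summary:

```python
# Lane structures defined by IEEE 802.3ba (ratified 2010). The per-lane
# rates are standard; the test-cost point is that each lane needs checking.
interfaces = {
    "40GBASE-SR4":   (4,  10.3125,  "parallel multimode fiber"),
    "40GBASE-LR4":   (4,  10.3125,  "WDM over single-mode fiber"),
    "100GBASE-SR10": (10, 10.3125,  "parallel multimode fiber"),
    "100GBASE-LR4":  (4,  25.78125, "WDM over single-mode fiber"),
}

for name, (lanes, gbps, medium) in interfaces.items():
    print(f"{name}: {lanes} x {gbps} Gbps ({medium}) = {lanes * gbps:.3f} Gbps")
```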
Speeding up the network

Once these high-speed technologies reach the field in significant numbers, carriers and enterprise managers will need test upgrades too. High-speed signal analysis in the form of 40G SONET/SDH and Optical Transport Network (OTN) OTU3 and OTU4 capabilities is already available for carrier applications. The dispersion analysis tools these carriers will require as they evaluate whether specific links are capable of supporting 40 and 100 Gbps–particularly 40G transmission that doesn’t leverage coherent detection–also are well established.
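A rough rule of thumb explains why such link qualification matters more as rates climb: the chromatic dispersion (CD) tolerance of a direct-detection NRZ signal falls roughly with the square of the bit rate. A sketch with illustrative reference values:

```python
# CD tolerance scales roughly as 1/B^2 for direct-detection NRZ signals
# (rule of thumb; real limits depend on the modulation format).

def cd_tolerance_ps_nm(bit_rate_gbps, ref_rate=10.0, ref_tolerance=1000.0):
    """Scale a ~1000 ps/nm 10G NRZ reference tolerance to another rate."""
    return ref_tolerance * (ref_rate / bit_rate_gbps) ** 2

print(cd_tolerance_ps_nm(10))  # ~1000 ps/nm -- tens of km of G.652 fiber
print(cd_tolerance_ps_nm(40))  # ~62 ps/nm -- a few km, hence link testing

# Coherent receivers compensate CD electronically, which is why the text
# singles out 40G links that do NOT use coherent detection.
```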

One potential missing piece of the puzzle is an optical modulation analyzer engineered for field use. Just how much modulation analysis carriers will need to deploy and troubleshoot networks carrying coherent-enabled traffic remains an open question. Equally uncertain is the amount of money carriers would be willing to spend for such capabilities.

But data rates aren’t the only speed issue of interest. Both carrier and enterprise managers want to accelerate test processes and increase the capabilities of technicians who are either few in number or lacking in optical expertise. Next year will continue to see test instruments that enable remote monitoring or communication of test results from technicians in the field to experts back at the operations center. Ease of use will also remain a standing requirement.
FTTX technology looks for use beyond the home

By most measurements, FTTX technology, particularly fiber to the home (FTTH), remains a bright light within the optical communications firmament.

Yet the path toward all-fiber broadband networks still faces hurdles. Verizon has paused its network expansion. AT&T appears quite content with its fiber to the node (FTTN) approach in most markets. In Europe, concerns that fiber access could re-monopolize communications provision have led to increased regulation. Australia’s ambitious National Broadband Network (NBN) plans proved among the most contentious issues in that country’s recent national elections. The party behind the NBN remained in power by one seat in a new parliamentary coalition.

Add the deployment of new technologies that expand the bandwidth-carrying capabilities of copper (see the story in this month’s Analysis section), and FTTH technology becomes as much of a question for the last mile as an answer.

The good news for FTTH technology development is that–as the NBN, the U.S. broadband stimulus and National Broadband Plan efforts, and similar activities worldwide suggest–governments globally have determined that providing broadband to as many citizens as possible has become a national imperative. Maintaining competition after these rollouts finish enjoys equal emphasis, which most often means open access.

The immediate impact on technology development, therefore, is the necessity to support open access–the ability to transmit offerings from multiple service providers over the same FTTH network. As many recipients of stimulus funding (particularly municipalities) are new to FTTH, ease of deployment also should drive technology development.

For many parties using government money, these two requirements translate to interest in point-to-point Ethernet (sometimes termed Active Ethernet, which originally referred specifically to point-to-point connections with an active element in the field). Point-to-point Ethernet easily supports open access, since a direct connection to each home simplifies the task of delivering different services to different users. Ethernet also is, in most cases, better understood by FTTH newcomers than PON, which makes deployment less intimidating.
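The open access argument is easy to see in a toy model. The sketch below is a hypothetical data model, not any vendor’s API; it simply illustrates that with a dedicated port per home, provider selection reduces to per-port configuration.

```python
# Hypothetical open-access model for point-to-point Ethernet FTTH: each
# home gets its own switch port, so choosing a retail provider is just a
# per-port setting (port names and VLAN IDs are invented for illustration).

subscribers = {
    "port-1/0/1": "ISP-A",
    "port-1/0/2": "ISP-B",
    "port-1/0/3": "ISP-A",
}

provider_vlans = {"ISP-A": 101, "ISP-B": 102}  # one service VLAN per retailer

for port, isp in subscribers.items():
    print(f"{port}: deliver {isp} service on VLAN {provider_vlans[isp]}")

# On a PON, by contrast, many homes share one fiber and one OLT port, so
# the same separation must happen per-ONU inside the protocol instead.
```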

The recent FTTH Conference North America highlighted this technology trend, as an increasing number of companies offered products in support of point-to-point Ethernet. In fact, some analysts believe the proposed merger of Calix and Occam Networks was driven in part by Calix’s desire to beef up its Ethernet portfolio.

However, BT’s Openreach has demonstrated that you can support open access with PONs, as well. We can expect to see further development of such open access capabilities in PON systems, with efforts to make implementation of such features more turnkey for open access neophytes.
New standards drive the next generation

Meanwhile, standards work on next-generation PON has made 10-Gbps support viable. The IEEE 10G EPON specs are already in place, with ITU/FSAN equivalents for GPON (referred to as XG-PON) expected soon. Carriers have already field tested prototype XG-PON1 (which supports 10 Gbps downstream) and, recently, XG-PON2 (for symmetrical 10 Gbps). However, more development work on these next-generation GPON systems is in order. For example, a source at Huawei, whose technology Verizon was the first to trial, said at October’s SCTE Cable-Tec that he doesn’t expect to have XG-PON1 technology commercialized until the end of 2012.
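For reference, the nominal line rates of the next-generation PON flavors mentioned above (the 10G EPON figures come from IEEE 802.3av; the XG-PON figures reflect the FSAN/ITU-T work then in progress):

```python
# Nominal downstream/upstream rates of next-generation PON variants.
pon_flavors = {
    "10G EPON (symmetric)":  ("10 Gbps", "10 Gbps"),
    "10G EPON (asymmetric)": ("10 Gbps", "1 Gbps"),
    "XG-PON1":               ("10 Gbps", "2.5 Gbps"),
    "XG-PON2":               ("10 Gbps", "10 Gbps"),
}
for flavor, (down, up) in pon_flavors.items():
    print(f"{flavor}: {down} downstream, {up} upstream")
```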

Another next-generation PON technology, WDM-PON, will rely on support from outside the IEEE and ITU for now to generate continued technology momentum. The WDM PON Forum held its first meeting at the FTTH Conference North America in September, with how to get back on the standards bodies’ radar screens among the points of discussion.

Fortunately for proponents of WDM-PON, the European Commission and others on the Continent have stepped up to fund additional research. Recent projects include SARDANA (designed to extend WDM-PON into the metro arena) and ADVAntage-PON.

New specifications also are in play for the cable company/multiple systems operator space. SCTE specifications for RF over glass (RFoG) are nearly complete. Meanwhile, the first phase of DOCSIS Provisioning over Ethernet, which marries DOCSIS with EPON, should lead to fielding of specification-compliant systems next year.
Equipment design: Big speed, little packages

Next year should see serial 100G technology move from the breathless hype phase to something more rooted in reality. For while 2010 saw the first deployments of coherent-enabled 100-Gbps systems, it didn’t see very many of them. That’s because, as is the case with the first generation of any major new technology, initial systems were expensive and available from only a few suppliers. Next year should see the rectification of the second of these two factors; the rapid decrease in 10-Gbps prices will make the first factor difficult to overcome.

The advent of 100G modules conforming to the Optical Internetworking Forum’s Implementation Agreements should help the cause. However, these are unlikely to become available in volume (whatever that means in the 100G context) any earlier than the end of 2011–and very well could come later.

Nevertheless, many researchers are already looking into what happens after 100G. Based on recent discussions, 400G has emerged as a target technology developers believe they can reach. Nokia Siemens Networks has already announced trials of 200-Gbps technology; one approach to 400G is to use a pair of 200G carriers. Pathmal Gunawardana, head of optical sales in North America for the company, told Lightwave he expects field trials of 400G in the next 12 to 24 months.

Ruminations about what 400-Gbps and 1-Tbps transmission might look like have also affected technology development in other areas. For example, it seems likely that signals greater than 100 Gbps won’t fit neatly into the ITU grid. Since exact channel requirements aren’t known–either in terms of spacing, number, or potential effect on neighboring wavelengths–carriers have begun to demand gridless reconfigurable optical add/drop multiplexers (ROADMs) to avoid having to rearrange their networks for future data rate upgrades. At least two companies, Finisar and Oclaro, have stepped up to the challenge with preliminary gridless wavelength-selective switch (WSS) technologies. But even these pioneers don’t seem to have fully worked out the gap between their customers’ ideal and what those customers will settle for in reality.
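A little spectrum arithmetic shows the grid problem. The figures below are illustrative; roll-off and modulation choices were exactly the open questions at the time.

```python
# Why rates beyond 100G strain the fixed 50 GHz ITU grid (illustrative).

GRID_SLOT_GHZ = 50.0  # classic fixed-grid channel spacing

def spectral_width_ghz(baud_gbaud, rolloff=0.2):
    """Approximate occupied optical bandwidth of a single carrier."""
    return baud_gbaud * (1 + rolloff)

# ~28 Gbaud (100G PM-QPSK) fits comfortably in a 50 GHz slot...
print(f"{spectral_width_ghz(28):.1f} GHz")  # ~33.6

# ...but a ~56 Gbaud carrier, as one 400G approach implies, does not --
# hence carrier interest in gridless (flexible-grid) ROADMs.
print(f"{spectral_width_ghz(56):.1f} GHz")  # ~67.2
```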

Meanwhile, WSS developers will continue to increase multi-degree capabilities. WSSs that support 1:23 were on display at OFC early this year. However, how soon anyone will want more than a handful of these remains unclear.
Finisar liked Broadway Networks’ “ONU in an SFP” so much, it bought the company. (Source: Broadway Networks)

Tuning in

Technology such as 100G is sexy, but 10 Gbps is where most networks top out. Since technology development usually follows the money, we can expect continuing innovation at 10G in 2011.

For example, tunable XFPs should increase in prominence now that Oclaro has joined JDSU as a supplier. We can expect other 10G module vendors to follow suit. Other areas of continuing development should include reach extension and improved performance of smaller form factors, such as SFP and SFP+.

As pluggable 10G transceivers increase in popularity and component sizes shrink, we can also expect to see more functions crammed into these small packages. Menara Networks has already made a name for itself with its 10G “OTN in a transceiver,” while one thing that led Finisar to acquire Broadway Networks was the latter’s “ONU in an SFP” product. (Another was Broadway’s self-monitoring technology.)

Regardless of the application, however, lower power, smaller size, and reduced cost will continue to drive design. It’s a bonus that these elements lead to “green” communications; it’s essential that they shrink operational and capital expenditures.
Network managers look to increase bandwidth, lower costs

That’s an obvious statement, isn’t it? Yet the twin goals of meeting rising bandwidth demands and lowering capital and operational expenditures will continue to be the two main drivers of technology deployment and development requirements in 2011–and probably for every year beyond.

On the carrier end, we’ve already discussed 100G. Many analysts (and this publication) believe 40-Gbps technology still has a viable future as an interim solution, at least. A handful of systems vendors (including Fujitsu, Ericsson, and Nokia Siemens Networks) have adopted 40G coherent technology as a stepping-stone toward 100G. Right now Cisco, thanks to the CoreOptics acquisition, is the only company to announce that it’s supplying 40G coherent modules and/or line cards. However, Oclaro (via the Mintera acquisition), JDSU, and Opnext have 40G coherent offerings on their roadmaps, although Opnext appears focused on finishing its 100G coherent module before it turns its attention to lower speeds.
Packet transport questions

Regardless of the speed at which traffic will travel, it’s likely to be IP or Ethernet based, and that means packet transport. The relationship between the optical transport layer and the IP/router layer remains a matter of debate; just about every major optical transport and router company has announced an “IP/optical convergence” strategy aimed primarily at rationalizing which layers will control which functions in long-haul core networks. These philosophies are seen as router-centric when espoused by companies such as Cisco (which nonetheless has optical transport gear in its portfolio) and optically centric when coming from companies such as Alcatel-Lucent (which also has routers in its bag of product tricks).

Either way, these strategies require close cooperation between the optical and IP layers, plus a unified control plane. Each vendor is at a different point in its journey toward this goal, so we can expect technology development, particularly around GMPLS and MPLS-TP, to continue.

However, the confluence of packet and optical transport appears most clearly in packet optical transport platforms. Verizon was an early adopter of such systems in its regional networks (and has made clear it would like similar systems for the core).

Such platforms are now ubiquitous among systems vendor portfolios. The platforms are expected to become almost as ubiquitous in carrier networks, as well, in particular among Tier 2 and 3 carriers, report sources such as Frank Wiener, vice president of marketing and market development for Cyan. (Wiener has more to say on page 24 of this issue.)
Enterprise data centers boost fiber’s appeal

In the enterprise realm, cabling suppliers expect to see increases in OM3 and OM4 fiber deployments as enterprise network managers come to grips with emerging 10-Gbps requirements, and potentially 40 and 100 Gigabit Ethernet down the road. Some cable suppliers have launched multimode versions of the bend-insensitive fibers now popular in multiple-dwelling-unit FTTH deployments. However, initial enterprise interest in the technology is not robust, suppliers report.

Fiber to the desk remains a hard sell. However, companies such as Motorola and Tellabs have repackaged their FTTH GPON offerings as a way to make campus-wide fiber deployments more economical. Government/military facilities and universities are among the early adopters.

Finally, a desire to support algorithmic trading among their financial services customers led to an explosion of interest among regional networking services providers in low-latency technology. Optical equipment vendors quickly organized offerings to suit. However, many suppliers privately say they view this opportunity as a bubble that will burst when carriers finish reworking their routes between major trading centers.
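The physics underlying the low-latency race is worth a quick worked example. These are standard fiber figures, not numbers from the vendors mentioned.

```python
# Light in silica fiber travels at c/n, roughly 5 microseconds per km.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s
GROUP_INDEX = 1.468       # typical group index of single-mode fiber

latency_us_per_km = GROUP_INDEX / C_KM_PER_S * 1e6
print(f"{latency_us_per_km:.2f} us/km")  # ~4.90 us/km

# Shortening a route between trading centers by 100 km saves ~490 us one
# way -- an eternity in algorithmic trading, which is why route reworking,
# not faster equipment, is the focus.
print(f"{latency_us_per_km * 100:.0f} us saved per 100 km removed")
```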
MSO fiber means business

A former official of the FTTH Council stood in his company’s booth at SCTE Cable-Tec last October and observed that in his council days, he used to debate whether cable multiple systems operators (MSOs) would use GPON, EPON, or some other FTTH technology to support multiple HDTV streams into the home. “Now I know the answer is ‘None of the above,'” he admitted. “It’s going to be HFC and DOCSIS.”

That’s because hybrid fiber/coax (HFC) architectures, thanks to DOCSIS 3.0, show no signs of running out of capacity any time soon. That means you can forget any wholesale transition from HFC to all-fiber access for at least seven to ten years, sources at Motorola offered at Cable-Tec.
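A rough sense of why DOCSIS 3.0 buys so much time: downstream capacity scales linearly with the number of bonded channels. The per-channel figure below is a commonly cited approximation for a North American 6-MHz, 256-QAM channel.

```python
# Approximate DOCSIS 3.0 downstream capacity under channel bonding.
MBPS_PER_CHANNEL = 38.8  # usable rate of one 6 MHz, 256-QAM channel (approx.)

for bonded in (1, 4, 8, 16):
    capacity = bonded * MBPS_PER_CHANNEL
    print(f"{bonded:2d} bonded channels ~ {capacity:.0f} Mbps shared per service group")
```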

The FTTH technology most affected by this fact–not to mention the overall economic downturn–is RF over glass (RFoG). RFoG was originally developed as a way to enable MSOs to serve new housing developments whose builders insisted on fiber connections; the collapse of the housing market has similarly collapsed demand for the technology. RFoG isn’t dead–there’s enough demand in other low-density applications to stay in the business, one supplier from a major optical communications company reported–but the technology appears confined to niche status for the foreseeable future. Nevertheless, SCTE should approve specs for RFoG in the very near future.

MSOs have interest in 1550-nm DWDM transmitters to augment their existing fiber infrastructure and increase bandwidth through node splitting. (Source: InnoTrans)

For residential applications, the fiber emphasis among MSOs thus remains on “fiber deep” architectures, in which the “F” in “HFC” is extended closer to the end user. WDM technology is now being employed to “reclaim” existing fiber installations for increased bandwidth requirements, as well as node splitting.

Where new fiber should see the most play, however, is in support of business services. Interestingly, increasing numbers of these connections could leverage EPON technology, thanks (as mentioned previously) to the first phase of DOCSIS Provisioning over Ethernet (DPoE). Several optical equipment vendors have positioned themselves to play in this area, including Huawei, Cisco, Hitachi Communications Technology America, and, at least preliminarily, Motorola. Allan Wang, director, broadband access, North America network solution sales for Huawei Technologies (USA), believes at least some of the deployments will leverage the “ONU in an SFP” concept described earlier.
