Comparison of the construction costs of DWDM networks versus SDH networks

Multi-layer network design for a high-speed telecommunication network based on Synchronous Digital Hierarchy (SDH) and Wavelength Division Multiplexing (WDM) technology must carry a given set of demands with the objective of minimizing the investment in equipment.

It all starts with the design phase, where the effective cost of either an optical SDH or a DWDM network has to be competitive.

Design process
A network design method proceeds by generating cycles, evaluating the economics of building rings on those cycles, and building any economical rings. Generating a cycle involves picking two endpoints between which two link- and node-disjoint paths are desired; the two nodes selected are thus nodes on the candidate ring.

In a telecommunications network, a “ring” is a sequence of nodes arranged in a “cycle” so that no node is repeated. The “links” between nodes are places where fiber can be placed. Nodes are generally physical locations, such as buildings, where fiber bundles can be connected to each other and where equipment such as multiplexers, amplifiers, regenerators, transponders, etc., can be placed. Ring design entails in part decisions about ring placement, i.e., which nodes and which links are to be included. Ring design also concerns the selection of equipment, i.e., what types and rates of multiplexers, amplifiers, regenerators, transponders, etc., and where to place this equipment. Finally, ring design necessarily entails decisions about what demand to place on the rings.

The models used for SONET/SDH provide for the following costs and parameters:

  1. frame and installation,
  2. regeneration loss thresholds,
  3. maximum number of SONET ADMs on a ring, and
  4. fiber material, sheath installation, and structure expansion cost.
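As a rough illustration of how these parameters combine, here is a minimal ring-cost sketch. Every price, the one-ADM-per-ring-node assumption and the regenerator-spacing rule are invented for illustration; they are not taken from any real SONET/SDH cost model.

```python
# Hypothetical ring-cost sketch: all prices and parameters below are
# invented for illustration; real models use carrier-specific figures.

def ring_cost(num_adms, km_fiber, regen_spacing_km, cost_adm=50_000,
              cost_frame=10_000, cost_fiber_per_km=2_000, cost_regen=30_000):
    """Estimate the capital cost of one SONET/SDH ring."""
    regens = max(0, int(km_fiber // regen_spacing_km))  # regeneration points
    return (cost_frame                       # frame and installation
            + num_adms * cost_adm            # one ADM per ring node
            + km_fiber * cost_fiber_per_km   # fiber material and sheath
            + regens * cost_regen)           # regenerators past the loss threshold

print(ring_cost(num_adms=6, km_fiber=120, regen_spacing_km=80))  # 580000
```

A real model would also cap the number of ADMs per ring and add structure-expansion cost, as in the parameter list above.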

Currently, Dense Wavelength Division Multiplexing (DWDM) is being installed largely on long-distance routes. The DWDM vintage normally used is point-to-point DWDM; in other words, DWDM systems are utilized as fiber concentrators. The reason this equipment is so prevalent among long-distance carriers is simple economics: DWDM can substantially reduce capital investment because of its ability to multiply the number of signals carried by each fiber, thus avoiding expensive cable or route upgrades and also saving the cost of multiple regenerators.

DWDM transmits multiple signals over a single fiber, using different frequencies (colors/wavelengths/lambdas) for different connections over the same fibre. Full-featured DWDM equipment can comprise the same range of cards as SDH and can support fully configurable cross-connect features. DWDM technology provides very high-bandwidth long-haul interconnect links and is considered one of the best technologies for increasing bandwidth over an existing fiber plant: it enables one to create multiple “virtual fibers” over one physical fiber.
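The “virtual fibers” idea can be shown with a back-of-the-envelope calculation. The 40-channel system and 10G channel rate below are common commercial values, used purely as illustrative assumptions.

```python
# Capacity gain from DWDM "virtual fibers": one fiber carries many lambdas.
# Channel count and rate are illustrative assumptions, not a specific product.
import math

def fibers_needed(total_gbps, rate_per_channel_gbps, channels_per_fiber=1):
    """Fibers required to carry a demand, with or without DWDM."""
    per_fiber = rate_per_channel_gbps * channels_per_fiber
    return math.ceil(total_gbps / per_fiber)

demand = 400  # Gbps of aggregate demand
print(fibers_needed(demand, 10))                          # plain fiber: 40 fibers
print(fibers_needed(demand, 10, channels_per_fiber=40))   # 40-channel DWDM: 1 fiber
```

The 40-to-1 reduction in fiber count is the economic driver behind the long-haul deployments described above.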

The DWDM layer is protocol and bit rate independent, which means that it can carry ATM (Asynchronous Transfer Mode), SONET, and/or IP packets simultaneously. WDM technology may also be used in Passive Optical Networks (PONs), which are access networks in which all transport, switching and routing happen in the optical domain.

The differences between demand types are mostly caused by the design efficiency of the two technologies’ interface cards in terms of density and price.

Cost from IP - Internet Protocol - traffic approach
IP traffic is growing exponentially as customers migrate to IP-based applications. As these networks evolve to include bandwidth-intensive IP-based voice, video and data services, carriers must boost capacity in response to demand, knowing that the collected revenue will not scale at the same rate. Therefore, carriers must find ways to optimize the operating and cost efficiency of service networks and drastically reduce the cost per bit.

Traditionally this was implemented using an Internet Protocol (IP) over SDH approach, which has the inconvenience of optical-to-electrical-to-optical (OEO) conversion at the aggregate interfaces. IP over DWDM is implemented in practice as a connection between DWDM router interfaces over an optically switched DWDM layer.

Don't look for the right product - find it - only here in GBIC-SHOP. Stop by now!

25G SFP28 Cable - Best for TOR Server Connection?

During the past few years, there has been a dramatic increase in the demand for communications bandwidth. Whether in a communication service provider or in a public or private data center, a development in connectivity that can deliver higher speed and bandwidth is needed. That is why, in July 2014, an industry consortium was formed to create a new Ethernet connectivity standard for data centers. This standard, called 25 Gigabit Ethernet (25GbE), was developed by IEEE 802.3 task force P802.3by. It was derived from 100GbE, which operates as four 25 Gbps lanes running on four fibers or copper pairs. In June 2016, this technology was commercially released using new interfaces called SFP28 and QSFP28. This article will discuss the SFP28.

The SFP28 carries a single 25 Gbps data lane; four such lanes in parallel (as in the QSFP28) allow a maximum rate of 100 Gbps. The physical structure of the SFP28 is the same as that of the popular SFP and SFP+. This characteristic provides flexibility, since a 100 Gbps port can also be divided into four individual 25 Gbps connections. SFP28 uses a 28 Gbps lane (25 Gbps plus error correction) specifically intended for top-of-rack (ToR) switch-to-server connectivity. Moreover, SFP28 is available in both copper and fiber optic cables.

The copper cable version is manufactured as a single fixed-configuration module, which means the copper cables are directly attached to an SFP28 module. This version is ideal for short distances ranging from 1 m to 5 m. On the other hand, the optical fiber version operates either at 850 nm, using a pair of multimode fibers up to a maximum distance of 100 m, or at 1310 nm, using a pair of single-mode fibers working up to 20 km.

The development of 25G SFP28 has provided a wide range of benefits, especially in a web-scale data center environment where the trend is toward single-port servers due to cost.

Primarily, it gives a way to efficiently utilize data and switch port density. The reason for this is that an existing 100G port can be used as 4x25G with a QSFP-to-SFP28 breakout cable instead of occupying four different ports. For example, a 25GbE strand can carry 2.5 times more data than the popular 10G solution and can provide greater port density.
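The 2.5x figure and the breakout arithmetic can be sketched as follows; the numbers simply restate the lane rates discussed above.

```python
# Back-of-the-envelope port arithmetic behind the 2.5x claim.
# Port counts are illustrative, not tied to a specific switch product.

lane_10g, lane_25g = 10, 25
speedup = lane_25g / lane_10g      # 2.5x more data per lane at equal port count

# One 100G port broken out with a QSFP-to-4xSFP28 cable:
servers_per_100g_port = 4          # four 25G server links per switch port

print(speedup, servers_per_100g_port * lane_25g)  # 2.5 100
```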

Moreover, it provides an extremely efficient speed increase for server-to-top-of-rack (ToR) links, especially when using Direct Attached Copper assemblies. It also simplifies the development of interoperability specifications and systems thanks to its backward compatibility, and it gives an easier upgrade path from an existing 10G ToR server configuration.

Furthermore, using 25G SFP28 for ToR servers is more economical. Because it provides higher port densities, fewer ToR switches and cables are needed. It allows a more cost-effective alternative for top-of-rack server connections using point-to-point patch cords, and it enables End-of-Row (EoR) or Middle-of-Row (MoR) designs by using 30-meter structured cabling. As a result, it reduces the capital expense of construction compared to other configurations such as 40GbE.

Ultimately, the 25G SFP28 assembly features reduced power and smaller footprint requirements for data centers, because it limits the power per port to under 3 W.

Due to these benefits of the 25G SFP28 assembly, it is forecast to be popular in the years to come. The dominant next-generation server connection is expected to move toward 25 Gbps, and in the near future more equipment will use the 25G SFP28 cable assembly.

How Does Attenuation Affect My Fiber Optic Network?

Fiber optic networks are networks in which data is transmitted with the help of optical transceivers and optical cables. The optical transceivers transmit optical light down an optical cable. As with standard Ethernet copper networks, optical networks are also influenced by exterior stress and interior properties, and as a consequence some power is lost. This optical power loss is called attenuation.

Fiber optic cables consist of a fiber optic glass core and cladding, buffer coating, Kevlar strength components and a protective exterior material called a jacket. Depending on the optical cable type, these components can vary in size and strength. Unlike copper cables, which use electricity to transmit data, fiber optic cables use pulses of optical light for the same function. Their core is made of an ultra-pure glass which is surrounded by a mirror-like cladding. When the light enters the cable it travels down the core, constantly bouncing off the cladding until it reaches the final destination. There are two types of optical cables, multi-mode and single-mode. From the outside they look almost the same; however, their interior plays a huge role in the optical attenuation. Single-mode fibers are used for long-range, high-speed connections because of their tighter core and cladding, which improve light transmission by limiting how much the light bounces off the cladding. Multi-mode fibers have a larger core, so the light bounces more and more power is lost before it reaches the destination.

However, the optical attenuation of optical fibers is not only the power lost due to the core of the cable. High optical attenuation can also be caused by absorption, scattering and physical stress on the cable, such as bending. Signal attenuation is generally defined as the ratio of optical input power to optical output power. As the names suggest, optical input power is the power injected into the optical cable by the optical transceiver, and optical output power is the power received by the transceiver at the other end of the cable. Attenuation is expressed in dB, or in dB/km when normalized to the cable length.
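Expressed as a formula, attenuation in decibels is 10·log10(Pin/Pout). A minimal sketch, where the launch and receive powers are illustrative assumptions:

```python
import math

def attenuation_db(p_in_mw, p_out_mw):
    """Attenuation in dB from input and output optical power (mW)."""
    return 10 * math.log10(p_in_mw / p_out_mw)

def attenuation_db_per_km(p_in_mw, p_out_mw, length_km):
    """Attenuation normalized to span length, in dB/km."""
    return attenuation_db(p_in_mw, p_out_mw) / length_km

# 1 mW launched, 0.25 mW received over an assumed 20 km single-mode span:
print(round(attenuation_db(1.0, 0.25), 2))            # ~6.02 dB total
print(round(attenuation_db_per_km(1.0, 0.25, 20), 3)) # ~0.301 dB/km
```

Note that a ratio of 1 (no loss) gives 0 dB, and every halving of the received power adds about 3 dB.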

Absorption is one of the biggest causes of optical attenuation. It is defined as the optical power lost due to the conversion of optical power into another form. Absorption is typically caused by residual water vapor. Generally, absorption is defined by two factors:

  • Imperfection in the atomic structure of the fiber material
  • The extrinsic and intrinsic fiber-material properties which represent the presence of impurities in the fiber-material
  • The extrinsic absorption is caused by impurities such as trace metals, iron and chromium, introduced into the fiber during the manufacturing process. These trace metals cause a power loss when they transition from one energy level to another.
  • The intrinsic absorption is caused by the basic properties of the fiber material. If the optical fiber material were pure, with no impurities and imperfections, then all absorption would be intrinsic. For example, silica glass is used in fiber optics due to its low intrinsic absorption at wavelengths ranging from 700 nm to 1600 nm.

Scattering losses are caused by the density fluctuations in the fiber itself. These are produced during the manufacturing process. Scattering occurs when the optical light hits various molecules in the cable and bounces around. Scattering is highly dependent on the wavelength of the optical light. There are two types of scattering loss in optical fibers:

  • Rayleigh scattering - this scattering occurs in commercial fibers that operate at 700-1600 nm wavelengths. Rayleigh scattering occurs when the size of the density fluctuation is less than 1/10 of the operating wavelength.
  • Mie scattering- this scattering occurs when the size of the density fluctuation is bigger than 1/10 of the operating wavelength.

Bending the fiber cable also causes attenuation. The bending loss is classified in micro-bends and macro-bends:

  • Micro-bends are small microscopic bends in the fiber which most commonly occur when the fiber is cabled.
  • Macro-bends on the other hand are bends that have a large radius of curvature relative to the cable diameter.

Another type of optical power loss is the optical Dispersion. Optical Dispersion represents the spreading of the light signal over time. There are two types of optical dispersion:

  • Chromatic dispersion which is spreading of the light signal resulting from the different speeds of the light rays
  • Modal dispersion which is spreading of the light signal resulting from the different propagation modes of the fiber

Modal dispersion most commonly limits the maximum bit rate and link length in multi-mode fibers. Chromatic dispersion is the main culprit for signal degradation in single-mode fibers.
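The chromatic pulse spreading over a span can be estimated as Δt = D · L · Δλ, where D is the fiber's dispersion coefficient, L the span length and Δλ the source linewidth. A small sketch; D = 17 ps/(nm·km) is a typical value for standard single-mode fiber at 1550 nm, and all figures here are assumptions for illustration:

```python
# Pulse spreading from chromatic dispersion: delta_t = D * L * delta_lambda.
# All figures are typical/illustrative, not measurements of a specific fiber.

def chromatic_spread_ps(d_ps_nm_km, length_km, linewidth_nm):
    """Pulse spreading in picoseconds over a fiber span."""
    return d_ps_nm_km * length_km * linewidth_nm

# 80 km span, 0.1 nm source linewidth, D = 17 ps/(nm*km):
print(chromatic_spread_ps(17, 80, 0.1))  # 136.0 ps of spreading
```

When the spread becomes comparable to the bit period, adjacent pulses start to overlap, which is why dispersion bounds the bit-rate-times-distance product.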

With this in mind, we should always consider, test and calculate the possible attenuation of the fibers in order to deploy a stable network capable of future upgrades.


What is MPLS?

Since the Internet has grown exponentially over the last decades, a point comes where a given technology can no longer handle the demand in the number of users or the data traversing between networks. Furthermore, there has also been a dramatic increase in connection speeds, and newer applications have been developed, such as multimedia, voice, video and real-time e-commerce applications. Due to these innovations, there is a great need for new technologies that not only provide larger bandwidth capacity but also guarantee reliable Quality of Service and optimal performance while using minimal network resources.

Multi-Protocol Label Switching, popularly known as MPLS, is a technology that enables service providers or enterprises to offer a wide variety of additional services for their clients over a single infrastructure. This technology offers many advantages: primarily, it enables ISPs to have better control over their growing networks because of the traffic engineering, network convergence and failure protection capabilities of MPLS. Moreover, it is an economical approach because it allows different pricing schemes based on QoS. Finally, it can be overlaid with existing technologies like Frame Relay, ATM (Asynchronous Transfer Mode), Ethernet or IP in a seamless manner. Due to the efficiency, scalability and security that MPLS gives to customers as well as ISPs, it is now one of the most widely used protocols on the Internet.


MPLS is a networking technology that operates between Layer 2 (Data Link) and Layer 3 (Network) of the OSI model, which is why it is sometimes called the Layer 2.5 networking protocol.

It is a very popular core networking technology that has been around for several years; it attaches labels to packets and forwards them through the network.

Forwarding packets or frames by using labels is not new; as a matter of fact, this technique has been used by Frame Relay and ATM. Both of these technologies use a “label” in their header and utilize Permanent Virtual Circuits from end users, traversing hub sites and creating a mesh topology; however, the label in the header is changed at every hop.

In the OSI model, Layer 2 uses protocols like Ethernet, VLAN, etc. that carry IP packets only within a simple Local Area Network. Layer 3, on the other hand, uses Internet-wide addressing and routing via IP protocols. MPLS sits between these layers, providing additional features for the delivery of data across a network. In traditional IP routing, routing protocols are used to distribute Layer 3 routing information. Each router executes basic steps in packet delivery: first, it performs an IP look-up in its routing table to determine the next hop, and then it forwards the packet to that next hop. This process is repeated at every router in the network until the data reaches its final destination. Moreover, forwarding is based only on the destination address, and a routing look-up is performed at every hop. Thus, every router also needs full Internet routing information, which consumes a lot of memory in its system.
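The hop-by-hop look-up described above can be sketched with the standard library's `ipaddress` module; the prefixes and next-hop names are invented for illustration.

```python
# Hop-by-hop IP forwarding as described in the text: every router does a
# longest-prefix-match look-up for each packet. Toy routing table only.

import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "hop-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-hop",
}

def next_hop(dst):
    """Return the next hop for a destination via longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))   # hop-B (the /16 beats the /8)
print(next_hop("192.0.2.1"))  # default-hop
```

In a real network this look-up is repeated at every router; MPLS's point, developed next, is to do it only once at the edge.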

In MPLS, “label switching” is used, and the first device is the only one to do a routing look-up. However, instead of finding only its next hop, it finds the destination router and a pre-determined path from its location to that final router.

How does MPLS work?

An MPLS network comprises Label Switch Routers (LSRs); these routers understand how MPLS works, and they are the ones that label and transmit labeled packets into and out of the MPLS network. There are three kinds of LSRs:

  • Ingress LSR - a Provider Edge (PE) or Label Edge Router (LER); the first router in the MPLS network, responsible for inserting the label or “shim” into the header of the packet and forwarding it on the network. It faces the Customer Edge (CE) routers from which the data comes.
  • Egress LSR – a Provider Edge (PE) or Label Edge Router (LER); the last router in the MPLS network. It receives the labeled packet at the far end, removes the label and sends the packet to a Customer Edge (CE) router.
  • Intermediate LSR – a Provider (P) or intermediate router; these are transit routers that receive incoming labeled packets, perform label-switching operations and then forward the packets onto the correct data link inside the MPLS network.

Below is a simple MPLS network showing the basic members:

The Three Major Functions of LSR are listed below:

  • POP – removes the label from the packet before it leaves the MPLS network
  • PUSH – if an LSR receives an unlabeled packet, it creates a label stack and pushes it onto the packet. On the other hand, if an LSR receives a labeled packet, it pushes one or more labels onto the label stack and switches the packet out
  • SWAP – an LSR will swap labels, meaning that when it receives a labeled packet, it replaces, or simply swaps, the top of the label stack with a new label
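The three operations can be modeled on a simple list acting as the label stack (top of stack at the end); the label values are arbitrary.

```python
# The three LSR operations on a label stack, modeled as a Python list whose
# last element is the top of the stack. Label numbers are arbitrary examples.

def push(stack, label):
    return stack + [label]            # add a new top label

def swap(stack, new_label):
    return stack[:-1] + [new_label]   # replace the top label

def pop(stack):
    return stack[:-1]                 # strip the top label

packet = []                  # unlabeled packet arriving from the CE
packet = push(packet, 100)   # ingress LSR pushes a label
packet = swap(packet, 205)   # intermediate LSR swaps it
packet = pop(packet)         # egress LSR pops it
print(packet)                # [] - plain IP again
```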

Another significant concept in how MPLS works is the Label Switched Path, or LSP; technically it is a unidirectional, network-wide tunnel or path between the LSRs inside the MPLS network. This LSP is central to the MPLS forwarding mechanism.

Furthermore, there are two routing protocols in MPLS networks that are popularly used in the industry. First is the Label Distribution Protocol, or LDP, a simple non-constraint protocol by which LSRs obtain label information from their neighbors; this is the process that allows faster look-up and addressing. Second is the Resource Reservation Protocol with Traffic Engineering, or RSVP-TE; this protocol is more complicated because it supports traffic engineering and uses more overhead. However, it allows MPLS to efficiently allocate and maximize the utilization of bandwidth in the network, as well as to prioritize traffic and avoid congestion.

In order for an MPLS network to function, one must also need to know about Forward Equivalence Class (FEC), Label Information Base (LIB) and the Label Forwarding Information Base (LFIB).

A FEC is a group or flow of packets that are forwarded over the same LSP and receive the same treatment in terms of forwarding; packets carrying the same label belong to the same FEC, while packets whose forwarding treatment differs belong to a different FEC. In particular, the Ingress LSR classifies packets and assigns each to the FEC it belongs to. Below are some classifications of FECs:

  • IP prefix/ Host Address
  • Layer 2 circuits (ATM, Frame Relay, PPP, HDLC, Ethernet)
  • Group of addresses/sites-VPN
  • Bridge/switch instances-VSI
  • Tunnel Interface-Traffic Engineering

The LFIB is the table that routers use to forward labeled packets; it consists of incoming and outgoing labels for an LSP. The bindings between these outgoing and incoming labels are provided by the LIB. Incoming labels come from the local binding on a particular LSR, while outgoing labels come from remote bindings chosen by the LSR from all possible remote bindings. However, the LFIB will choose only one of these outgoing labels from all the remote bindings in the LIB. The chosen remote label depends on which path in the routing table is the best.

Therefore, all directly connected LSRs must establish an LDP or peer relationship between them so that they can exchange label mapping messages across these LDP sessions. A label mapping, or binding, is bound to a FEC. The FEC is the set of packets that are mapped to a certain LSP and are forwarded over that LSP through the MPLS network. Several varieties of protocols exist to distribute labels.

To summarize how MPLS works, once all LSRs have the labels for a particular FEC and LDP sessions have been established inside the MPLS network: first, when a customer edge (CE) router forwards a packet into the MPLS network, the ingress LSR performs a routing look-up, is the first to assign a label to the packet, looks into its Label Forwarding Information Base (LFIB) and then forwards the packet according to its bound LSP, service class and destination. Second, inside the MPLS network, when an intermediate P router receives this labeled packet, it performs label look-up, label swapping and forwarding; when a packet transits several P routers, the label is swapped at each hop. Finally, when the packet reaches the egress LSR, it strips off, or “pops”, the label, and a routing look-up is used to forward the packet out of the MPLS network.


  • MPLS labels can be stacked multiple times; the bottom of the stack indicates the last label in the stack, while the topmost label controls the delivery of the packet. When the packet reaches its designated PE, the topmost label is “popped” and the second label takes over, directing the packet to its next destination.
  • In some cases, the ingress LSR of a certain LSP is not the first router to assign a label to a packet. The packet might already be labeled; this case is called a nested LSP, meaning an LSP inside an LSP, and it is also a case of label stacking.


  • Traffic Engineering

One of the benefits of using MPLS is that it allows traffic engineering within the MPLS domain. Traffic engineering is an operation in which data traffic is routed in such a way as to balance the traffic load on the different links, routers and switches within the network. By mapping traffic to predefined LSPs, MPLS can maximize the utilization of available bandwidth, prioritize traffic to avoid congestion or bottlenecks, reroute quickly and support capacity planning.

MPLS Traffic Engineering works by using RSVP-TE to allocate bandwidth in the network. As mentioned before, the LSP is a “tunnel” between networks, and under RSVP every LSP has an associated bandwidth value. If this bandwidth is available, the LSP is signaled across a set of links. This is important for re-routing traffic in a congested network, because when using constrained routing, RSVP-TE utilizes the shortest path with available bandwidth to carry a particular LSP.
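The constrained-routing step can be sketched as a pruned shortest-path search: remove links without enough unreserved bandwidth, then take the shortest remaining path. The topology, costs and bandwidth figures below are invented for illustration; real RSVP-TE/CSPF implementations are considerably more involved.

```python
# Constraint-based path selection in the RSVP-TE spirit: prune links that
# lack the requested bandwidth, then run Dijkstra on what remains.
# Topology and figures are invented for illustration.

import heapq

def constrained_path(links, src, dst, bw_needed):
    """links: {(u, v): (cost, available_bw)}; returns node list or None."""
    graph = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= bw_needed:                      # the CSPF pruning step
            graph.setdefault(u, []).append((v, cost))
            graph.setdefault(v, []).append((u, cost))   # links are bidirectional
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in graph.get(node, []):
            heapq.heappush(heap, (dist + cost, nbr, path + [nbr]))
    return None

links = {("A", "B"): (1, 2), ("B", "C"): (1, 2), ("A", "C"): (5, 10)}
print(constrained_path(links, "A", "C", bw_needed=1))  # ['A', 'B', 'C']
print(constrained_path(links, "A", "C", bw_needed=5))  # ['A', 'C'] (B pruned)
```

A small request takes the cheap two-hop path; a large one is forced onto the direct link that still has bandwidth, which is exactly the congestion-avoidance behavior described above.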

  • Quality of Service (QoS)

QoS is the overall performance of a network; it is a guaranteed performance level backed by a Service Level Agreement (SLA) on several metrics such as throughput, packet loss, jitter and latency. Unlike IP networking, which is connectionless, best-effort delivery and treats all traffic traversing a link the same regardless of importance, MPLS is a connection-oriented protocol. Since MPLS sits between L2 and L3 of the OSI model, it maintains the connection-oriented QoS legacy of L2 technologies. This is a major benefit of MPLS: it can perform traffic engineering with efficient allocation of available bandwidth while at the same time providing QoS by differentiating the importance of the different types of traffic traversing the network.

  • Fast Re-route (FRR)

Another advantage of using MPLS is Fast Reroute, an extremely fast convergence mechanism in case of a link failure. FRR allows traffic to be rerouted to a backup LSP with a rapid failover time of 50 ms or less. Compared to a normal IP network, where the on-demand calculation of the best path happens after a link failure and can take several seconds, MPLS FRR best-path calculation takes place before a failure actually occurs. In MPLS FRR, backup routes are installed in the router's Forwarding Information Base (FIB), waiting to be activated. Moreover, routing loops will not occur during the transition.


  • MPLS VPN

A VPN, or Virtual Private Network, is a technology that enables a private network to be incorporated across a public or shared infrastructure. It interconnects geographically separated sites over the public network so that private users can communicate as if their devices were directly connected to a private network, with the same privacy and security.

MPLS VPN is widely used and offered by ISPs to enterprises to interconnect their remote offices. Due to the self-reliant forwarding that MPLS uses, MPLS VPN can be utilized with different customer edge equipment and can operate at L2 and L3 of the OSI model.

At Layer 2, MPLS VPN uses pseudo-wires, or Virtual Leased Lines (VLL), which emulate a point-to-point circuit; a VLL can be utilized to interconnect different types of media, such as Ethernet to Frame Relay. In addition, MPLS VPN also uses VPLS, or Virtual Private LAN Service, which creates a multi-point switching service used to interconnect a large number of customer endpoints into a single broadcast domain, emulating the function of an L2 switch. This technology is also used to avoid a full-mesh L2 circuit.

At Layer 3, MPLS VPN uses VRFs, or Virtual Routing and Forwarding instances, which are established at the provider edge routers. This technology allows multiple protected routing instances of a routing table to co-exist on the same router at the same time. Customers are placed within a VRF and exchange routes with the provider router in separate routing instances. VRFs are unique to each VPN; thus, other VPNs using the network are transparent to one another, as are other CE devices.


Overview of Networking Protocols of Transceivers

The communication between any two end-points is based on some standard language which can be understood by both. In the information technology world, these standards are called protocols. A protocol is a networking language which is used to transmit and receive messages between two communicating devices. Each protocol has its own way of sending and receiving the signal, which is referred to as encapsulation, but in the end it all comes down to bits: 0s and 1s.

In this article, we will discuss a few of the most widely used networking protocols that can be used to transmit and receive messages on transceivers. The data rates and applications of these protocols will also be discussed, to help us understand the basic working principles of the transceivers that are available on the market.


Ethernet

Ethernet is the most extensively used networking technology. Ethernet is a multi-layer protocol spanning the physical and data link layers in the OSI model. Ethernet finds its application in every network regardless of its size and scale; from small offices to large government enterprises, Ethernet can be seen at work everywhere. Ethernet was first introduced in 1980 and standardized by the IEEE (Institute of Electrical and Electronics Engineers) in 1983. The original version of Ethernet was designed to carry 2.94 Mbps of data; since that initial version, Ethernet has seen rapid evolution and development. These days, Ethernet can be utilized at data rates of 100 Gbps.

Ethernet is widely deployed in Local Area Networks (LAN) and Metropolitan Area Networks (MAN). Unshielded twisted pair (UTP) copper cable is used for shorter distances, and fiber optic cable is used for longer distances. Ethernet is independent of the carrying medium, and the protocol's structure remains the same for both fiber optic and copper cable transmissions.

Transceivers such as SFP, SFP+, GBIC, QSFP, QSFP+, CFP, etc. support Ethernet traffic to be transmitted and received. A wide range of Ethernet-supporting transceivers is available at CBO-IT.

Fibre Channel

Fibre Channel (usually abbreviated as FC) is a technology used for high-speed data transfer. Fibre Channel finds its core use in storage area networks (SAN). FC is used to transfer data between computer storage and computer systems or servers. Introduced in 1997, FC provided a throughput of 200 megabytes per second (MBps); nowadays, Fibre Channel can provide data transfer speeds of up to 25,600 MBps. FC is a widely used technology, and almost all high-end servers and storage systems available today have interfaces to support FC.

Another variant of Fibre Channel is Fibre Channel over Ethernet (FCoE). FCoE uses Ethernet as a transport medium: FC packets are encapsulated over the Ethernet network, thus providing data transfer speeds equivalent to the Ethernet network's speed.

Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH)

SONET and SDH are technically two names for a single technology, the only difference being the data rates; it was developed to replace the older Plesiochronous Digital Hierarchy (PDH). SDH transmits multiple digital bit-streams synchronously over fiber optic cable. SDH finds its primary application in long-distance optical networks which are used to transmit large amounts of diverse data. Telephone calls and digital data can be carried over a single cable without causing interference or synchronization issues by using SDH technology. Due to this feature, SDH is also widely used in telecommunication networks.

SONET was standardized by Telcordia and the American National Standards Institute (ANSI) as the T1.105 standard. SONET provides data rates starting at 51.84 Mbps. SONET is widely used in the United States and Canada.

SDH was developed as a standard by the European Telecommunications Standards Institute (ETSI) and is formalized as International Telecommunication Union (ITU) standards G.707, G.783, G.784 and G.803. SDH is the prime protocol used in Europe and much of the rest of the world. The basic data rate of SDH is 155.52 Mbps.
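The SDH hierarchy scales this basic rate linearly: an STM-N signal runs at N times the 155.52 Mbps STM-1 rate, which is easy to verify.

```python
# SDH rate hierarchy: STM-N carries N times the basic 155.52 Mbps STM-1 rate.

STM1_MBPS = 155.52

def stm_rate_mbps(n):
    """Line rate of an STM-N signal in Mbps."""
    return n * STM1_MBPS

for n in (1, 4, 16, 64):
    print(f"STM-{n}: {stm_rate_mbps(n):.2f} Mbps")
```

This yields the familiar ladder of 155.52, 622.08, 2488.32 and 9953.28 Mbps.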

SONET and SDH are mainly used over fiber optic, but for shorter distances, electrical wire can also be used for signal transmission.


InfiniBand

InfiniBand, usually abbreviated as IB, is a computer-networking communications standard which is principally used in high-performance, high-speed computing. The main features of InfiniBand are very high throughput, very low latency and very high stability and reliability. InfiniBand can be used as either a direct or a switched interconnect between servers and storage systems, as well as to connect different storage systems that require exceptionally high bandwidth. In direct interconnect mode, two end-points are directly connected to each other; in switched interconnect mode, an InfiniBand switch is placed between the two connecting end-points. Data rates achieved by using InfiniBand technology can be as high as 290 Gbps, and research and development on increasing the speeds further is in progress.

Other than the above-mentioned protocols and technologies, there are several other protocols in use. Some are proprietary protocols developed by equipment manufacturers to be used only in specific equipment, while others are standardized protocols available for general usage. The information technology industry is striving hard to continue the evolution of protocols that provide faster data transfer rates and high reliability.


