A modern leaf-spine Clos design avoids bandwidth bottlenecks by using centralized SDN control that manages the network from spine to leaf to vSwitch. A leaf-spine topology can be layer 2 or layer 3, depending on whether the links between the leaf and spine layers are switched or routed. Building a universal cloud using n-way multipathing, via MLAG (multi-chassis link aggregation groups) at L2 or ECMP (equal-cost multipathing) at L3, is a standards-based, scalable approach that delivers consistent performance, oversubscription, and latency between all racks. To start off, each rack gets its own aggregation (leaf) switch. In layer 2 leaf-spine implementations, the spanning tree protocol has been replaced by protocols such as TRILL or SPB. We are excited to work with Broadcom to deliver more agile, performant, and efficient switching solutions to the industry, informed by research on the data-path performance of leaf-spine data center fabrics.
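To illustrate how ECMP spreads traffic across the spine, here is a minimal sketch (the function name and the simple MD5-based hash are illustrative, not any vendor's actual implementation): a flow's 5-tuple is hashed to pick one of the equal-cost uplinks, so every packet of a flow takes the same path while different flows spread across all spines.

```python
import hashlib

def ecmp_uplink(src_ip, dst_ip, src_port, dst_port, proto, uplinks):
    """Pick an uplink for a flow by hashing its 5-tuple.

    All packets of one flow hash identically, so they follow the same
    path and stay in order; distinct flows spread over all uplinks.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

spines = ["spine1", "spine2", "spine3", "spine4"]
# Same flow, queried twice -> same uplink both times
a = ecmp_uplink("10.0.1.5", "10.0.2.9", 49152, 443, "tcp", spines)
b = ecmp_uplink("10.0.1.5", "10.0.2.9", 49152, 443, "tcp", spines)
assert a == b
```

Real switches use hardware hash functions with configurable fields, but the per-flow consistency shown here is the property that makes ECMP safe for TCP.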
This is a modern architectural approach that enables networks to scale to the level of cloud giants such as AWS. In my quest to really understand SDN, I've been reading a number of research papers and watching presentations by industry researchers. I've found that the traditional three-tiered network design (core, distribution, access) has been replaced with designs that can provide full, non-blocking bandwidth. In a layer 2 leaf-spine design, Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB) takes the place of spanning tree. In a leaf-and-spine topology, when servers are connected to different leaf switches, their traffic must cross the spine; to work around the hot spots this can create, a solution such as software-defined networking can help steer traffic. Tree-based topologies have been the mainstay of data center networks, but in networks designed to handle heavy loads in and out of the data center, you can wind up with architectural bottlenecks when traffic patterns change. As shown below, the leaf-spine design consists of only two layers.
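The two-layer structure can be sketched directly: every leaf connects to every spine, so any two leaves are exactly two hops apart through some spine. This is a small illustration (switch names are made up), not production tooling:

```python
from itertools import product

def build_leaf_spine(num_leaves, num_spines):
    """Full bipartite mesh: every leaf links to every spine."""
    leaves = [f"leaf{i}" for i in range(num_leaves)]
    spines = [f"spine{j}" for j in range(num_spines)]
    links = set(product(leaves, spines))
    return leaves, spines, links

leaves, spines, links = build_leaf_spine(6, 4)

# Any two leaves share at least one spine, i.e. are 2 hops apart.
for a in leaves:
    for b in leaves:
        if a != b:
            assert any((a, s) in links and (b, s) in links for s in spines)

print(len(links))  # 6 leaves x 4 spines = 24 links
```

The full mesh is also why cabling grows as leaves times spines, a cost to keep in mind when sizing the spine layer.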
The former tracks the specific demo we are doing for ONS, which includes the leaf-spine topology on Dell switches with automated traffic engineering of elephant flows. Jun 14, 2017: Now, with the Trident 3 generation, we see the opportunity for mass-market adoption of 25/100GbE-based leaf-spine interconnect leveraging a fully programmable switch data plane. Moving to four 10G channels in a leaf-spine architecture introduces a new concern. A pod is a self-contained unit of compute, network, and storage. The spine layer is made up of switches that perform routing, working as the backbone of the fabric. If all leaf-to-leaf traffic can transit the spine, you don't need intra-spine links, at least not for user traffic.
Businesses can design cost-effective, agile networks for the modern era by adhering to these three constructs. Latency, the amount of time it takes a packet of information to travel from point A to point B, increases when a pipe is split into smaller lanes. The first option is to extend the spine, adding more spine nodes for additional layer 2 or layer 3 forwarding to form a switched-fabric data center, which can be viewed as a single expanded FabricPath-based pod. Leaf-spine hardware configurations can be dynamically created, assisted by software. A new data center design, the Clos network-based spine-and-leaf architecture, was developed to overcome these limitations, and its advantages are unique to this type of hardware configuration and setup. The mesh ensures that all leaf switches are separated by at most one intermediate spine switch, minimizing latency; links within a leaf-spine fabric can be either switched (layer 2) or routed (layer 3). Clos architecture offers a non-blocking design. Software-defined networking vs. web-scale networking.
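Whether a fabric is actually non-blocking comes down to the oversubscription ratio at each leaf: southbound (server-facing) bandwidth divided by northbound (spine-facing) bandwidth. A quick sketch of the arithmetic, with port counts chosen only as examples:

```python
def oversubscription(server_ports, server_speed_gbps, uplinks, uplink_speed_gbps):
    """Ratio of server-facing to spine-facing bandwidth at a leaf.

    1.0 means non-blocking; 3.0 means 3:1 oversubscribed.
    """
    down = server_ports * server_speed_gbps   # total southbound Gbps
    up = uplinks * uplink_speed_gbps          # total northbound Gbps
    return down / up

# 48 x 10G server ports with 4 x 40G uplinks: 480/160 = 3:1
assert oversubscription(48, 10, 4, 40) == 3.0

# 32 x 25G server ports with 8 x 100G uplinks: 800/800 = 1:1 (non-blocking)
assert oversubscription(32, 25, 8, 100) == 1.0
```

Many designs accept 2:1 or 3:1 at the leaf to save cost, since not all servers transmit at line rate simultaneously; a strictly non-blocking fabric needs the ratio at or below 1.0.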
Leaf-and-spine and network virtualization architecture. The other model, the traditional three-tiered model, was designed for use in general networks, usually segmented into pods, which constrained the location of devices such as virtual servers. Logical network design flexibility: the pod design accommodates unique NFV workloads with unique logical network requirements that share the same physical leaf-spine fabric. ASE2-based new fabric module for the Nexus 9500: the new fabric module is built with ASE2 ASICs and continues to use an internal Clos architecture, with fabric modules at the spine and line cards at the leaf; each Nexus 9500 switch needs up to four ASE2-based fabric modules, and each ASE2 ASIC on a fabric module provides 32 x 100GE internal ports. What is leaf-spine architecture, and how do you design it? The leaf-and-spine fabric architectures webinar describes the Clos architecture concepts used to build leaf-and-spine architectures, and the single- and multi-stage designs that can be used to build large layer 2 or layer 3 all-points-equidistant data center networks. Device configurations had to be hard-coded into devices. Leaf-spine suits east-west traffic flow: a Clos leaf-spine architecture offers consistent any-to-any latency and throughput, consistent performance for all racks, a fully non-blocking architecture if required, and simple scaling when adding new racks. Distributed core architecture using the Z9000 and S4810 core. Some design considerations for leaf-spine architecture should be kept in mind; these will be introduced in the next part. The latter tracks the generic segment routing application, which will be used for the fabric but can also work for other networks with different topologies. Clos-based networks, including fat-tree and VL2, are being built in data centers, but existing per-flow routing causes low network utilization and a long latency tail.
Well, as the title of this section says, the majority of vendors have modeled their product lines around the concept of a Clos fabric, or what many term the leaf-spine model.
Spine-leaf topology: the last option is to instead build out the spine-leaf fabric with N9Ks configured by ACI, running the ACI OS. This analysis may be more abstract, or related to the principles the solution needs to support. We use the Network Simulation Cradle [16] package to port the actual TCP source code from a Linux 2.x kernel. There are a few limitations of the leaf-spine design. The internal architectural unity classifies outputs (deliverables, artifacts, building blocks) based on roles, architectural design, representations, and relationships. Table 1 shows the server host ports realized at different design points in the MSDC reference architecture. Leaf-spine network architecture is catching on in large data center and cloud networks due to its scalability, reliability, and better performance.
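One practical limitation worth quantifying is scale: in a two-tier fabric, the spine's port count caps the number of leaves, and each leaf's uplink count caps the number of spines. A back-of-the-envelope sketch (port counts are examples, not from any specific reference design):

```python
def fabric_scale(spine_ports, leaf_ports, leaf_uplinks):
    """Maximum server ports in a two-tier leaf-spine fabric.

    - Each spine port terminates one leaf, so spine_ports caps the
      number of leaves.
    - Each leaf spends leaf_uplinks ports on the spine (one per spine)
      and the rest on servers.
    """
    max_leaves = spine_ports
    servers_per_leaf = leaf_ports - leaf_uplinks
    return max_leaves * servers_per_leaf

# 32-port spines and 48-port leaves with 4 uplinks each:
assert fabric_scale(32, 48, 4) == 1408  # 32 leaves x 44 server ports
```

Growing past this ceiling is what pushes large operators to multi-stage (three-tier Clos / fat-tree) designs, where pods of leaf-spine fabrics are themselves interconnected.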
It has been quite educational, and I thought it would be useful to share the references. Sep 24, 2014: Modern data centers primarily consist of thousands of racks of servers. Leaf-spine architecture basics and design guidelines. The BCF architecture consists of a physical switching fabric, which is based on a leaf-spine Clos architecture.
Nov 09, 2016: Platform and product architectural responsibility within IBM Cloud and Cognitive Software. We study the leaf-spine architecture's performance via high-fidelity simulations. As virtualization, cloud computing, and distributed cloud become more pervasive in the data center, a shift away from the traditional three-tier networking model is taking place. Why is Clos spine-leaf the thing to do in the data center? The Juniper Networks Virtual Chassis Fabric (VCF) provides a low-latency, high-performance fabric architecture that can be managed as a single device. Next-generation Nexus 9000 architecture (LinkedIn SlideShare). For these reasons, new architectures are emerging that capitalize on the predictable performance of tried-and-true leaf-spine architectures, also called Clos architectures. Adaptive switching granularity for load balancing. Jul 28, 2017: Compared to the traditional three-tier architecture, the leaf-spine design drastically simplifies cabling needs, especially when looking at fiber-optic connectivity. Charles Clos invented the design to optimize the architecture of telephony network systems at the time. Clos fabric, aka leaf-spine: OK, so we now know the pitfalls of the hierarchical design.
Traditional data center architectures are based on a three-tier design consisting of access, distribution, and core switches. It was just last September that we unveiled Big Cloud Fabric, the industry's only bare-metal SDN fabric, to bring the benefits of hyperscale-style network design to more data centers. The Arista Universal Cloud Network campus design guide is based upon common use cases seen from real customers. The other great thing that leaf-spine networks promote, simply by the nature of the topology, is commoditized infrastructure in terms of both features and hardware. M4300 spine and leaf with more than 8 switches: I'm upgrading our networking infrastructure using M4300 switches, which I was planning on installing in a spine-and-leaf topology. Most traditional network designs have been based on a 20-year-old architectural premise: an n-tier hierarchical topology. Leaf spine: definition of leaf spine by Merriam-Webster.
Leaf-spine design based on Clos architecture. The Clos design was not used in IP-based networks for the last few decades, but it has made a comeback in the data center. Once FabricPath is introduced into a basic topology, additional options can be used to expand the data center. Many data center networks employ a multi-tier architecture based on a layer 3 fat-tree design or a Clos network using ECMP. The spine-leaf architecture provides a strong base for the software-defined data center. Sep 04, 2014: Hierarchical designs consist of three network layers. Dynamic routing allows the best path to be determined and adjusted based on responses to network change. Aug 10, 2017: The leaf-spine topology is a Clos network with wide ECMP. Verizon launches industry-leading large OpenStack NFV deployment. Spine-and-leaf vs. traditional hierarchical architecture.
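The redundancy that dynamic routing exploits follows from the topology itself: if a leaf loses one uplink, every other leaf is still reachable at the same hop count through the remaining spines. A small breadth-first-search sketch (switch names and the tiny 3-leaf, 2-spine fabric are illustrative) makes this concrete:

```python
from collections import deque

def hops(links, src, dst):
    """BFS hop count over an undirected link set; None if unreachable."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# 3 leaves, 2 spines, full mesh
links = {(f"leaf{i}", f"spine{j}") for i in range(3) for j in range(2)}
assert hops(links, "leaf0", "leaf2") == 2
# Fail the leaf0-spine0 link: still 2 hops, now via spine1
assert hops(links - {("leaf0", "spine0")}, "leaf0", "leaf2") == 2
```

Real fabrics get this behavior from ECMP-aware routing protocols (OSPF or BGP at L3), which withdraw the failed path and keep forwarding over the surviving equal-cost paths.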
Tolly evaluated a lab deployment of ten Z9000 switches, with 4 serving as the network spine and the remaining 6 as leaf nodes. So, basically, what I'm saying is that I believe a leaf-spine topology promotes a "god-box-free" data center, and I'm a big fan of that. This topology is an example of a Clos tree, providing maximum non-blocking bandwidth between leaf switches. Distributed core architecture using Dell Z9000 and S4810 switches: conventional data center architecture is a layered approach comprised of core and aggregation tiers. Arista's suite of leaf-spine (or spline) designs for middle-of-row and end-of-row storage and compute clusters delivers an open universal cloud network.
Leaf spine (definition): a spine, as of the barberry, developed from a leaf instead of from a branch. In cacti, spines are wholly transformed leaves that protect the plant from herbivores, radiate heat from the stem during the day, and collect and drip condensed water vapour during the cooler night. Back in networking: there are at least two scenarios where the leaf switches wouldn't have complete visibility into the fabric topology and could send traffic to the wrong spine switch. A leaf-spine fabric provides uniform unicast and multicast reachability, deterministic latency, and high redundancy on node or link failure (Clos, Charles, 1953, "A Study of Non-blocking Switching Networks"). For larger-scale switching, a three-stage Clos network based on crossbar nodes is a viable architecture. The trick is to get a network connection to all of them. Research, development, and design of emerging networking technologies. The mesh in a leaf-spine network ensures that all access-layer switches are separated by at most one intermediate spine switch. The UCN campus design guide shows a set of solutions, features, and applications that are leveraged to meet customers' demands. Introduction to spine-leaf networking designs (Lenovo Press).
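Clos's 1953 result gives the sizing rule for such three-stage networks: with n inputs per ingress-stage switch and m middle-stage switches, the network is strict-sense non-blocking when m >= 2n - 1, and rearrangeably non-blocking when m >= n. A minimal sketch of that classification (the function name is my own):

```python
def clos_nonblocking(n_inputs, m_middle):
    """Classify a symmetric three-stage Clos network.

    n_inputs: ports entering each ingress-stage switch
    m_middle: number of middle-stage switches
    """
    if m_middle >= 2 * n_inputs - 1:
        return "strict-sense non-blocking"   # Clos (1953)
    if m_middle >= n_inputs:
        return "rearrangeably non-blocking"  # may need to reroute existing calls
    return "blocking"

assert clos_nonblocking(4, 7) == "strict-sense non-blocking"
assert clos_nonblocking(4, 4) == "rearrangeably non-blocking"
assert clos_nonblocking(4, 3) == "blocking"
```

Packet-switched leaf-spine fabrics generally aim for the rearrangeable condition (spines >= leaf uplinks worth of capacity), since statistical multiplexing relaxes the strict circuit-switching requirement.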
Mar 23, 2015: Leaf-spine architecture is adaptable to the continuously changing needs of companies in big-data industries with evolving data centers. Understanding web-scale networking (Cumulus Networks). What is a software-defined data center, and what's the way to build one? What is Cisco ACI? This design guide provides information concerning Arista Networks technology solutions. Under a spine-and-leaf design, all leaf switches have connections to all spine switches.