This is a copy of a previous LinkedIn post, dated May 24, 2016, which was not previously on this blog.

https://www.linkedin.com/pulse/nfv-independance-from-hardware-lock-in-lutfullah-kakakhel/

NFV is simple. Its simplest characterization is that it is the Telecom Operators' name for hardware independence and software dependence. Hardware is locked in, while software is more easily changed (relative to hardware, as a project manager would say).

We can try to see what problem NFV seeks to solve.

Telecom operators faced a dilemma about hardware. “To launch a new network service often requires yet another variety (of hardware) and finding the space and power to accommodate these boxes is becoming increasingly difficult; compounded by the increasing costs of energy, capital investment challenges and the rarity of skills necessary to design, integrate and operate increasingly complex hardware-based appliances.”

The sentence starts with "to launch a new network service often requires yet another variety" (of hardware). Remember, operators want to compete with the WhatsApps and Vibers of tomorrow and need agility of deployment.

NFV seeks to provide that 'Agility of Deployment' of new network services to Network Operators by taking away the dependency on proprietary, vendor-locked hardware. That is the high-level purpose.

The rest is architecture. The hardware can be any compute node with associated storage and an accompanying network of such devices. From there it follows to build virtual services: Virtual Network Functions on Virtual Network Infrastructure.

Rolling out software, or new software for a new service, is easier than rolling out hardware.

Another primary benefit is elasticity in energy consumption: consuming energy according to demand. Controlling the hardware, which is the energy-consuming physical device, through software makes this possible.

Enterprises have long required Layer 2 VPN and Layer 3 VPN services from Service Providers. Similarly, Data Center networks need to provide a Layer 2/3 overlay facility to the applications they host.

EVPN is a new control plane protocol to achieve the above. It coordinates the distribution of the IP and MAC addresses of endpoints over another network, using its own protocol messages as the distribution mechanism for endpoint network addresses. In the data plane, traffic is switched via MPLS label next-hop lookups or IP next-hop lookups.

To provide this new control plane, with new protocol messages carrying new features, BGP is used: BGP UPDATE messages act as the carrier for EVPN routes. BGP connectivity is established first, and the messages exchanged over it carry the EVPN-specific information.
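As a rough illustration of what rides inside those BGP UPDATEs, here is a conceptual sketch (not a real BGP implementation) of the fields an EVPN MAC/IP Advertisement route (Route Type 2, per RFC 7432) carries; all values are invented:

```python
# Conceptual sketch of an EVPN Route Type 2 (MAC/IP Advertisement) as a
# plain Python dict; values are illustrative, not from a real network.
evpn_type2_route = {
    "route_type": 2,                          # MAC/IP Advertisement
    "route_distinguisher": "65000:100",       # scopes the route to an EVI
    "esi": "00:11:22:33:44:55:66:77:88:99",   # Ethernet Segment Identifier
    "ethernet_tag_id": 0,
    "mac_address": "00:1c:73:aa:bb:cc",       # the endpoint MAC advertised
    "ip_address": "192.0.2.10",               # optional endpoint IP
    "mpls_label": 100100,                     # data-plane label for the EVI
}
```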

The physical topology can be a leaf-spine DC Clos fabric or a simple Distribution/Core setup. The links between the nodes are Ethernet links.

One aspect of EVPN is that the terms Underlay and Overlay are now used. The underlay comprises the underlying protocols on top of which EVPN runs: the IGP (OSPF or IS-IS, or BGP in the data center) and MPLS (LDP or Segment Routing). The underlay also includes the physical Clos or Core/Distribution topology, which has high redundancy built into it using fabric links and LACP/LAGs. The overlay is the BGP EVPN virtual topology itself, which uses the underlay network to build a virtual network on top. It is the part of the network that provides tenant or VPN endpoint reachability, i.e. MAC address or VPN IP distribution.

EVPN is a new protocol, and if you look at the previous protocols there was little mechanism to provide all-active multihoming capability. This refers to one CE being connected via two links to two PEs, with both links active and providing a traffic path to the far end via ECMP and multipathing. Multi-chassis LAG has been one option for this, but it is proprietary per vendor and imposes particular virtual-chassis link requirements. Per-flow load balancing from an ingress PE to multiple egress PEs using BGP multipathing is also newly enabled by EVPN; a sketch of the idea follows.
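A minimal sketch of per-flow load balancing, assuming an invented 5-tuple and two candidate egress PEs: every packet of a flow hashes to the same next hop, while distinct flows spread across both active paths.

```python
import hashlib

def pick_next_hop(flow, next_hops):
    """Hash a flow's 5-tuple to choose one of several equal-cost next hops.

    Packets of the same flow always hash to the same path (no reordering);
    different flows spread across all active links.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return next_hops[digest % len(next_hops)]

# Hypothetical flow: (src_ip, dst_ip, protocol, src_port, dst_port)
flow = ("10.0.0.1", "10.0.1.9", "tcp", 49152, 443)
print(pick_next_hop(flow, ["PE1", "PE2"]))
```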

There is also little mechanism in previous-generation protocols to provide efficient fabric bandwidth utilization for tenant/private networks over meshed links. Previous protocols provided single-active, single paths and required LDP sessions and tunnels for a full mesh over a fabric. MAC learning in BGP over the underlay provides this in EVPN.

Similarly, there was no mechanism to provide workload (VM) placement flexibility and mobility across a fabric. EVPN provides this via the Distributed Anycast Gateway.


Two edge computing specifications are available:

Facebook OCP’s CG-OpenRack-19 and LinkedIn’s Open19.

They provide for Rack Layouts, Compute, Storage and Networking.

Networking for CG-OpenRack-19 is copied below. The server sleds in the picture appear to be single-homed, going by the colors. It would be interesting to find out which protocol handles the active-active state of the multihomed compute and storage sleds, if that is present at all.

[Figure: CG-OpenRack-19 rack layout and network connectivity]

Open19 provides 100G bandwidth capabilities; some details are on its website.

5G's ultra-low-latency edge requirements could demand such edge solutions, and it will be interesting to see how things play out ahead.

This also brings to mind SD-WAN, because these edge racks will at the very least be connected in a large WAN.

Google's B4 is its software-defined inter-data-center WAN solution. Google's Espresso is its peering edge solution; Espresso links into the B4 domain via B2. This link has the details of Espresso as shared by the Google team.


[Figure: Google Espresso and B4, interconnected via B2]

Google is not employing an army of network engineers to run these networks: they are software defined, and programmed automation probably handles the operational tasks. To operate them, though, there are Site Reliability Engineers.

Here is one public job advertisement that illustrates what an SRE is expected to be like:

We have reliable infrastructure and can spin up new environments in a couple of hours. Automate everything so there is more time for exploring and learning. Foster the DevOps mindset.

What are our goals?
  Internationalisation
  Deploying multiple data centers
  Deploying every 5 minutes
Requirements
  Experience with Java or JavaScript in a Dockerised environment
  Linux Engineering/Administration
  Desire for improving processes
  Have a passion and most importantly, a sense of humour
Tech Stack (you DO NOT need experience in all of these)
  Kubernetes + Docker
  Terraform + Ansible
  Linux
  Kotlin + NodeJS
  ELK stack
  AWS

This is obviously an SRE for the server side and the application-enablement side of things. If there is a large software-defined edge network like Espresso, a large edge-to-DC network like B2, and a large software-defined inter-DC network like B4, you will need a different kind of SRE.

Here is Google’s version of a Site Reliability Engineer Job.

Job description
Minimum Qualifications

BS degree in Computer Science or related technical field involving coding (e.g. physics or mathematics), or equivalent practical experience.
3 years of experience working with algorithms, data structures, complexity analysis and software design.
Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby.

Preferred Qualifications

Systematic problem-solving approach, coupled with effective communication skills and a sense of ownership and drive.
Interest in designing, analyzing and troubleshooting large-scale distributed systems.
Ability to debug and optimize code and automate routine tasks.

About The Job

Hope is not a strategy. Engineering solutions to design, build, and maintain efficient large-scale systems is a true strategy, and a good one.

Site Reliability Engineering (SRE) is an engineering discipline that combines software and systems engineering to build and run large-scale, massively distributed, fault-tolerant systems. SRE ensures that Google’s services—both our internally critical and our externally-visible systems—have reliability and uptime appropriate to users’ needs and a fast rate of improvement while keeping an ever-watchful eye on capacity and performance.

SRE is also a mindset and a set of engineering approaches to running better production systems—we build our own creative engineering solutions to operations problems. Much of our software development focuses on optimizing existing systems, building infrastructure and eliminating work through automation. As SREs are responsible for the big picture of how our systems relate to each other, we use a breadth of tools and approaches to solve a broad spectrum of problems. Practices such as limiting time spent on operational work, blameless postmortems and proactive identification of potential outages factor into iterative improvement that is key to both product quality and interesting and dynamic day-to-day work.

We can see that Google's SRE job ad is all software, along with large-scale distributed systems requirements.

Now if we note this extract from the Wikipedia SD-WAN article:

“With a global view of network status, a controller that manages SD-WAN can perform careful and adaptive traffic engineering by assigning new transfer requests according to current usage of resources (links). For example, this can be achieved by performing central calculation of transmission rates at the controller and rate-limiting at the senders (end-points) according to such rates”

and we also note this extract:

“As there is no standard algorithm for SD-WAN controllers, device manufacturers each use their own proprietary algorithm in the transmission of data. These algorithms determine which traffic to direct over which link and when to switch traffic from one link to another. Given the breadth of options available in relation to both software and hardware SD-WAN control solutions, it’s imperative they be tested and validated under real-world conditions within a lab setting prior to deployment.”

We see Algorithms.

It's clear that there are different algorithms running these software-defined networks (Google's Espresso, B2, B4 and Jupiter). These algorithms automate, kick in and optimize. Google becomes one large-scale distributed system with various algorithms here and there. While Software Architects and Software Engineers will have developed these algorithms and programmed them into network devices/servers, an SRE is the human who will operate the system. A team of SREs.
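As a toy illustration of the 'central calculation of transmission rates' idea from the first extract above, here is a minimal max-min fair allocation sketch; the link capacity, site names and demands are invented, and real SD-WAN controllers use far more elaborate, proprietary algorithms.

```python
def fair_rates(link_capacity_mbps, demands_mbps):
    """Max-min fair split of one bottleneck link among senders."""
    rates = {}
    remaining = link_capacity_mbps
    # Satisfy the smallest demands first; split the remainder equally.
    for sender, demand in sorted(demands_mbps.items(), key=lambda kv: kv[1]):
        share = remaining / (len(demands_mbps) - len(rates))
        rates[sender] = min(demand, share)
        remaining -= rates[sender]
    return rates

# A controller would push these rates to the senders as limits.
print(fair_rates(100, {"site-a": 20, "site-b": 80, "site-c": 80}))
# -> {'site-a': 20, 'site-b': 40, 'site-c': 40}
```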

One aspect of networking protocols is that they are built for multi-vendor, multi-enterprise and multi-domain environments. They provide a simple consensus to connect two or more different network devices.

To take a merchant-silicon network device like OCP's Wedge and OCP-style servers and make one large network like Google's out of them will require software engineering to remake at least the NOS (Network Operating System) part. There will be at least a Meta-NOS, running on top of a typical NOS, to handle the software-defined algorithms, in addition to the SDN controllers talking to this Meta-NOS. Multiple layers of SDN controllers will talk to each other; you can call this a network protocol or an SDN algorithm, but it will be part of a distributed systems software architecture, and it will be programmed in place by software engineers.

Large Scale Distributed System on Merchant Silicon Hardware – Software Defined Meta-NOS – SDN Controllers – Hierarchical SDN Controllers – Algorithms.

It sounds like a Program Management task rather than a PMP-scale Engineering Project Management task. You will need Mathematicians to sit with Network Architects, Distributed Systems Architects and Software Architects. The Mathematicians will provide the algorithms. They will be important too.

Fun times.

Terabit scale networking requires better Consensus.

Autonomous Networks and Autonomic Networking can be recast as solving Consensus Dynamics.

Wikipedia states (Nov 2018):

“Consensus dynamics or agreement dynamics is an area of research lying at the intersection of systems theory and graph theory. A major topic of investigation is the agreement or consensus problem in multi-agent systems that concerns processes by which a collection of interacting agents achieve a common goal.”

To note again it is an ‘intersection of systems theory and graph theory’.

Let's not forget that, mathematically, communications networks are graphs. An OSPF/IS-IS network is a weighted directed graph where the costs and metrics are the weights, the network devices are the vertices, and the Ethernet L2 links are the directed edges.
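To make that concrete, here is a minimal sketch of the shortest-path-first (Dijkstra) computation that OSPF and IS-IS run over exactly such a weighted graph; the four-router topology and its costs are invented.

```python
import heapq

def spf(graph, source):
    """Dijkstra SPF over a graph given as {node: {neighbor: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical four-router topology; edge weights are interface costs.
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(spf(topology, "R1"))  # R4 is reached via R2 at total cost 11
```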

Furthermore, note again that 'a collection of interacting agents achieve a common goal'. In networks, the common goal can be to enable end-to-end, host-to-host connectivity over a vast network: TCP and UDP.

Interesting times ahead for Terabit scale networks. Keeps the fun alive in network engineering.

References:

https://en.wikipedia.org/wiki/Consensus_dynamics

https://cs.fit.edu/~msilaghi/papers/EncyclopAI07.final.pdf


Simple NAPALM use – a Python-based abstraction layer with multi-vendor support; part of a screen-scraping solution.
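A minimal sketch of simple NAPALM use, assuming an Arista EOS device at a made-up address with made-up credentials; the same vendor-neutral calls work with the other supported drivers.

```python
from napalm import get_network_driver

# Pick a driver per platform: "eos", "junos", "ios", ...
driver = get_network_driver("eos")
device = driver(hostname="192.0.2.1", username="admin", password="secret")

device.open()
facts = device.get_facts()  # same normalized dict on every vendor
print(facts["hostname"], facts["os_version"])
device.close()
```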

Ansible config management with Jinja2 templates – part of a CLI automation solution, which could be called sophisticated screen scraping, via SSH. No on-device agent or service.
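Ansible drives this from playbooks, but the underlying mechanics are plain Jinja2: render a config template from variables, then push the result over SSH. A minimal sketch with an invented interface template:

```python
from jinja2 import Template

# Hypothetical interface template; Ansible would load this from a .j2 file.
template = Template(
    "interface {{ name }}\n"
    " description {{ desc }}\n"
    " ip address {{ ip }} {{ mask }}\n"
)

config = template.render(
    name="GigabitEthernet0/1",
    desc="uplink-to-core",
    ip="192.0.2.1",
    mask="255.255.255.252",
)
print(config)  # this rendered text is what gets pushed via SSH
```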

Salt with napalm-logs – event-driven network automation. This is also a CLI automation solution and can be considered screen scraping from the network device's perspective, driven by events from napalm-logs.

NETCONF or RESTCONF with YANG – connectivity is via NETCONF/RESTCONF (XML/JSON), while the configuration is modeled via YANG data models available on the device (as an on-device service). This is not screen scraping or CLI automation: YANG is a data modeling language, and it can be used to extract and push state on the device.
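A minimal NETCONF sketch using the ncclient library, with a made-up device address and credentials: the running configuration comes back as XML structured by the device's YANG models, rather than as screen-scraped CLI text.

```python
from ncclient import manager

# Hypothetical device; NETCONF usually listens on TCP 830 over SSH.
with manager.connect(
    host="192.0.2.1",
    port=830,
    username="admin",
    password="secret",
    hostkey_verify=False,
) as m:
    # Retrieve the running config as YANG-modeled XML.
    running = m.get_config(source="running")
    print(running)
```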

SDN-based, Cisco ACI-like – a northbound REST API on the APIC controller and southbound OpFlex, with an OpFlex agent on the device. An on-device Policy Element provides the abstraction service.
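A minimal sketch against the northbound REST API of an APIC controller, assuming a made-up controller address and credentials: authenticate once, then query the object model (here, the list of tenants).

```python
import requests

APIC = "https://apic.example.com"  # hypothetical controller
session = requests.Session()

# Authenticate: APIC returns a session token as a cookie.
login = session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}},
    verify=False,
)
login.raise_for_status()

# Query all tenant objects from the management information tree.
tenants = session.get(f"{APIC}/api/class/fvTenant.json", verify=False).json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```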

Good reference links for exploring the above:

Ansible + Jinja2 Option:

https://networkotaku.wordpress.com/2017/10/24/network-configuration-templates-with-ansible-and-jinja2/

Salt + NAPALM Abstraction Option:

https://ripe76.ripe.net/presentations/17-RIPE76_-Event-driven-network-automation-and-orchestration.pdf

https://mirceaulinic.net/2017-10-19-event-driven-network-automation/

https://my.ipspace.net/bin/list?id=xNetAut181#SALT

Links for Restconf + YANG Option:

https://networkop.co.uk/blog/2017/02/15/restconf-yang/

https://networkop.co.uk/tags/ansible-yang/

https://packetpushers.net/podcast/pq-show-116-practical-yang-network-automation/

Links for SDN Style Cisco ACI – Like:

https://wiki.opendaylight.org/view/OpFlex:Opflex_Architecture

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731302.html

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide/b_Cisco_APIC_REST_API_Configuration_Guide_chapter_01011.html


This post is not in English. 🙂

L2, ETH, M ADD, SRC/DST, M TABLE, ARP TABLE, GARP, VLAN, BCAST DOM, MTU, PMTUD. AppPMTUD, JUM FRAMES, 1500, 9000.

STP, ROOT Port, BPDUs, block, listen, learn, forward, disable, RSTP, MSTP, TCN BPDUs,

Swi, Hub, Rtr, Gway,

IP, SUBNET, ETH TY, SRC ADD/DST ADD, TTL, IP Add(Net:Host), CIDR

MPLS, LDP, LSP, LBL, EXP, PHP, FRR

TE, RSVP, LBL ST, LSP. Backup LSP via MPLS TE.

L3VPN, ROUTE, PE-CE , LBL Stack, VPNV4, MPBGP, AFI/SAFI L2VPN, RD,RT, VRF

VLL,L2VPN, VPLS, ELAN, TunnMTU,

OSPF, NBMA, MCAST, AREA, DR, BDR, Areas, LSDB, Rte Sum/Rte Fil b/w areas, Stub (No AS Ext LSA) , TotStub(No Sum, No Ext) , NSSA (Type 7 Ext Transit) , LSA, NET LSA, RTR LSA, EXT LSA, ASBR, ABR, SUMM LSA, ASBR LSA, ASBR SUMM LSA, SPT, HELLO, DEAD TIME, RTR ID,

BGP, TCP 179, TELN 179, PING, FW 179, NLRI, ROUTE, COMMUNITY, B PATH, LNG PREFIX M, WEIGHT, LPREF, ASPATH, IGP/EGP/LOCAL, PREPEND, BlaHol COMM, NHOP, PIC, PREFIX FIL, AS PATH FILT, COMMUN RPL LPREF Actions,

RR, CLUSTER, IBGP, B PATH, vRR

RIB,FIB, AD DIST,

ACL, DENY, ALLOW,

ASIC, CEF, RP, MEM, LC, CBAR FABRIC, LINE RATE, CUT THROUGH.

DC, CLOS, LEAF, SPINE, IGP, IP, LOOPBACK, BGP, ROUTE, VXLAN, MACinUDP, VRF,
MPBGP, EVPN, MAC-ROUTE, MAC VRF, EVI, ESI, SDN, BGP LS, PCEP, BDR LEAF, L3OUT, VRF, MPBGP, PE-CE, DC L2OUT, BD, VLAN, MTenant, BRI DOM, END POINT, 1-MAC/+IPSubnets.

TMETRY, PULL, GRPC L4, SNMP PUSH.

ANSIBLE, NETCONF/YANG, APIs, YAML, WSPACE, HUMreadable, XML/HTML, JSON, JINJA2, PYTHON.

TCP, SYN, ACK, SYN ACK, ECN echo bit, FIN, RST, SL WDOW, PORT, SEQ, P BIT/ImmSend, MSS, RELIA, Ordered.

UDP, CLESS, SPORT, DPORT, Length, checksum. DNS, DHCP, SNMP, Jitter, latency VoIP, unordered, mcast, bcast.

DNS LB, GSS, GLB, ANYCAST GWAY.

DHCP DORA, IP, BCast, Unicast.

DCI, L2/L3, EoMPLSoGREoIP, L2VPN, VPLS, EVPN, MP-BGP VRF.

SAN, SYN/ASYN, latency, DWDM/CWDM, iSCSI, FCoIP.

Workload Mobility, MobIP, LISP, ProxyIP.

QoS, DiffServ, MPLS EXP, E-LSP, L-LSP, per Class DiffServ Aware MPLS-TE vi RSVP Sig.

Linux, Expect, BASH, Python, AWK, SED, GREP, CRON, VIM, NANO.

Scri, D-Types:No, Stri,List (mutable),Tuple (immutable), Dict (key,value), Variables, Arrays(C), List-Stack (LIFO), List-Queue (FIFO).

Cond Prg, If, Ifelse, ElseIf, NestIf, While (Condition True) , For (Iterations known), Break, Continue, For Else, built-in functions, User-def-functions. Library, Framework, local vari, global var.
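Decoding a few of the scripting terms above into actual Python, as a minimal sketch (the ARP entry is invented):

```python
from collections import deque

numbers = [10, 20, 30]                     # List: mutable
point = (4, 2)                             # Tuple: immutable
arp = {"192.0.2.1": "00:1c:73:aa:bb:cc"}   # Dict: key/value

stack = []                                 # List as a LIFO stack
stack.append("a")
top = stack.pop()

queue = deque()                            # deque as an idiomatic FIFO queue
queue.append("first")
head = queue.popleft()

for ip, mac in arp.items():                # For: iterations known
    print(ip, mac)
else:                                      # For/Else: runs if no break
    print("walked the whole table")
```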


Layer 1, Layer 2, Layer 3 and Layer 4. Physical, Link Layer/MAC Layer, Network Layer, Transport Layer.

Physical is physics, which keeps improving and raising the bandwidth limits.

Layer 2, sitting atop the physical layer, involves access control mechanisms for the links/mediums between devices. It provides data transfer in bits over the physical connectivity and involves payloads and addresses.

Layer 3 involves connecting multiple networks to form an internetwork, which in turn provides network-level end-to-end connectivity.

Layer 4 involves end-to-end, host-level connectivity.

Within Layer 2 we have software-enabled Virtual LANs, loop avoidance via Spanning Tree, link aggregation via LACP, etc.

Within Layer 3 we have IGP, EGP, VPN, SP, DC, WAN, TE, QoS and what not.

Within Layer 4 we have TCP and UDP; connection-oriented and connectionless transport; flow control and windowing; reliability, acknowledgements and sequencing; error control and checksums; port numbers; etc. A minimal socket-level sketch follows.
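A minimal socket-level sketch of the contrast, with a made-up destination address: TCP performs a handshake and guarantees ordered, reliable delivery, while UDP just fires datagrams.

```python
import socket

# TCP: connection-oriented, reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.50", 80))           # three-way handshake happens here
tcp.sendall(b"HEAD / HTTP/1.0\r\n\r\n")   # delivery and ordering guaranteed
tcp.close()

# UDP: connectionless datagrams; no handshake, no delivery guarantees.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("192.0.2.50", 9))    # each datagram stands alone
udp.close()
```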

Layer 2 and Layer 4 are relatively 'localized': Layer 2 due to its physical/link-level vicinity, and Layer 4 due to its in-host and host-to-host proximity. While there is science in these layers, it is somewhat local.

Layer 3 involves much geography. It is the domain which deals with providing end-to-end connectivity spanning much space and area, and with this comes much management. It entails reachability across multiplexed systems via addressing; reliability via multi-pathing; reachability status communication; path preference among multipath options; path avoidance; virtual privacy and isolation across multiplexed systems; time management for timely fault tolerance and fault bypass; geolocation-based path selection and load balancing; and so on.

Hence the birth of large-scale Internetworking Protocols.

These protocols are engineered to have some mechanisms built in and agreed upon, while other options require configuration.

Autonomous Networks which have all the mechanisms built in and require no configuration do not exist at the moment, except perhaps somewhere inside Google et al.

For now, we have to sift and select among options and configurations to make data flow.