
Cloud Computing

I listened to the recent Day Two Cloud Packet Pushers podcast in which the team discussed AWS Outposts and EC2 at the edge. It was interesting to learn about AWS products where pre-built racks are shipped to locations nearer to the customer. The racks carry equipment used to run some of the regular AWS services and are fully managed by AWS, and these local racks appear as usable resources in the AWS console.

Latency is a significant concern for some applications. Customers that need low-latency services can deploy AWS Outposts and EC2 at the edge to significantly reduce latency.

As usual, interesting times ahead in technology.

Link to the podcast:

Day Two Cloud 180: Understanding AWS EC2 At The Edge

The cost of delivering SaaS and SaaS operating income should always be the main concerns when investing in or buying SaaS. Power, people, vendors and equipment pretty much sum up the game: power as in the electricity cost per unit, people as in the cost of manpower, and vendors and equipment as in the manufacturers and organisations providing you the gear.

Two edge computing specifications are worth noting: the Facebook-founded OCP's CG-OpenRack-19 and LinkedIn's Open19.

They provide for Rack Layouts, Compute, Storage and Networking.

The networking for CG-OpenRack-19 is copied below. The server sleds in the pictures appear to be single-homed, going by the colors. It would be interesting to find out which protocol handles the active-active state of multi-homed compute and storage sleds, if that is present at all.

[Figure: CG-OpenRack-19 networking layout]

Open19 provides 100G bandwidth capabilities; some details are available on its website.

5G's ultra-low-latency edge requirements could require edge solutions, and it will be interesting to see how things play out.

This also brings SD-WAN to mind, because these edge racks will at the very least be connected over a large WAN.

Google's B4 is one of its software-defined inter-data-center WAN solutions. Google's Espresso is its peering edge solution; Espresso links into the B4 domain via B2. This link has the details of Espresso as shared by the Google team.

 

[Figure: Google Espresso and B4]

Google is not employing an army of network engineers to run these networks; they are software defined, and programmed automation will probably be doing the operational tasks. To operate this network there are Site Reliability Engineers, though.

Here is one public job advertisement that shows what an SRE is expected to be like:

We have reliable infrastructure and can spin up new environments in a couple of hours. Automate everything so there is more time for exploring and learning. Foster the DevOps mindset

What are our goals?
  Internationalisation
  Deploying multiple data centers
  Deploying every 5 minutes
Requirements
  Experience with Java or JavaScript in a Dockerised environment
  Linux Engineering/Administration
  Desire for improving processes
  Have a passion and most importantly, a sense of humour
Tech Stack (you DO NOT need experience in all of these)
  Kubernetes + Docker
  Terraform + Ansible
  Linux
  Kotlin + NodeJS
  ELK stack
  AWS

This is obviously an SRE role for the server side and the application enablement side of things. If there is a large software-defined edge network like Espresso, a large edge-to-DC network like B2 and a large software-defined inter-DC network like B4, you will need a different kind of SRE.

Here is Google’s version of a Site Reliability Engineer Job.

Job description
Minimum Qualifications

BS degree in Computer Science or related technical field involving coding (e.g. physics or mathematics), or equivalent practical experience.
3 years of experience working with algorithms, data structures, complexity analysis and software design.
Experience in one or more of the following: C, C++, Java, Python, Go, Perl or Ruby.

Preferred Qualifications

Systematic problem-solving approach, coupled with effective communication skills and a sense of ownership and drive.
Interest in designing, analyzing and troubleshooting large-scale distributed systems.
Ability to debug and optimize code and automate routine tasks.

About The Job

Hope is not a strategy. Engineering solutions to design, build, and maintain efficient large-scale systems is a true strategy, and a good one.

Site Reliability Engineering (SRE) is an engineering discipline that combines software and systems engineering to build and run large-scale, massively distributed, fault-tolerant systems. SRE ensures that Google’s services—both our internally critical and our externally-visible systems—have reliability and uptime appropriate to users’ needs and a fast rate of improvement while keeping an ever-watchful eye on capacity and performance.

SRE is also a mindset and a set of engineering approaches to running better production systems—we build our own creative engineering solutions to operations problems. Much of our software development focuses on optimizing existing systems, building infrastructure and eliminating work through automation. As SREs are responsible for the big picture of how our systems relate to each other, we use a breadth of tools and approaches to solve a broad spectrum of problems. Practices such as limiting time spent on operational work, blameless postmortems and proactive identification of potential outages factor into iterative improvement that is key to both product quality and interesting and dynamic day-to-day work.

We can see that Google's SRE job ad is all about software, together with large-scale distributed systems requirements.

Now if we note this extract from the Wikipedia SD-WAN article:

“With a global view of network status, a controller that manages SD-WAN can perform careful and adaptive traffic engineering by assigning new transfer requests according to current usage of resources (links). For example, this can be achieved by performing central calculation of transmission rates at the controller and rate-limiting at the senders (end-points) according to such rates”

and we also note this extract:

“As there is no standard algorithm for SD-WAN controllers, device manufacturers each use their own proprietary algorithm in the transmission of data. These algorithms determine which traffic to direct over which link and when to switch traffic from one link to another. Given the breadth of options available in relation to both software and hardware SD-WAN control solutions, it’s imperative they be tested and validated under real-world conditions within a lab setting prior to deployment.”

We see Algorithms.

It's clear that there are different algorithms running these software-defined networks (Google's Espresso, B2, B4 and Jupiter). These algorithms automate, kick in and optimize. Google becomes one large-scale distributed system with various algorithms here and there. While software architects and software engineers will have developed these algorithms and programmed them into network devices/servers, an SRE is the human who will operate the system. A team of SREs, in fact.
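As a toy illustration of the kind of algorithm the Wikipedia SD-WAN extract above describes (central calculation of per-sender transmission rates, with rate-limiting applied at the senders), here is a minimal Python sketch. The link capacity, branch names and demands are hypothetical illustration values, not anything from a real controller.

```python
# Toy sketch: a central controller computing per-sender rates for one shared link.
# Link capacity, sender names and demands are hypothetical illustration values.

def compute_rates(link_capacity_mbps, demands_mbps):
    """Split a link's capacity among senders, max-min fair style:
    senders asking for less than an equal share keep their demand,
    and the leftover is re-shared among the remaining senders."""
    rates = {}
    remaining = link_capacity_mbps
    pending = dict(demands_mbps)
    while pending:
        share = remaining / len(pending)
        # Senders whose demand fits under the current equal share are satisfied.
        satisfied = {s: d for s, d in pending.items() if d <= share}
        if not satisfied:
            # Everyone wants more than the share: cap them all at the share.
            for s in pending:
                rates[s] = share
            break
        for s, d in satisfied.items():
            rates[s] = d
            remaining -= d
            del pending[s]
    return rates

if __name__ == "__main__":
    demands = {"branch-a": 40, "branch-b": 100, "branch-c": 10}   # Mbps
    print(compute_rates(100, demands))
    # {'branch-c': 10, 'branch-a': 40, 'branch-b': 50}
```

The senders would then rate-limit themselves to the values the controller hands back, which is the "central calculation plus rate-limiting at the end-points" idea from the extract.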

One aspect of networking protocols is that they are designed for multi-vendor, multi-enterprise and multi-domain environments. They provide a simple consensus to connect two or more different network devices.

To take a merchant silicon network device like OCP's Wedge and OCP-style servers and make one large network like Google's out of them will require software engineering to remake at least the NOS (Network Operating System) part. There will be at least a meta-NOS, running somewhat on top of a typical NOS, which handles the software-defined algorithms, in addition to the SDN controllers talking to this meta-NOS. Multiple layers of SDN controllers will be talking to each other; you can call this a network protocol or an SDN algorithm, but it will be part of a distributed systems software architecture and it will be programmed in place by software engineers.

Large Scale Distributed System on Merchant Silicon Hardware – Software Defined Meta-NOS – SDN Controllers – Hierarchical SDN Controllers – Algorithms.

This sounds like a program management task rather than a PMP-scale engineering project management task. You will need mathematicians to sit with the network architects, distributed systems architects and software architects. The mathematicians will provide the algorithms. They will be important too.

Fun times.

An automated multi-tenant data center network is an increasingly desired end goal for large and small organizations, including providers. Servers that house the CPU, RAM and hard disk resources serve traffic for the applications they host. These servers need connectivity among themselves within the data center and also towards the outside world.

At first, an organized set of CPU/RAM/HD servers is connected to a network device. This happens in a data center rack, and the network device is a ToR, a Top of Rack switch. Another similar set of servers is connected to another Top of Rack network device. Multiple such server/network-device pods are then linked together. The incumbent way to do this is to build a leaf-spine Clos fabric: the layer of network devices connecting the servers is the leaf layer, and the layer of network devices connecting these leaf nodes is the spine layer.

Hardware is thus laid out in a 2-stage or 3-stage Clos fabric, and then we need to lay out a logical control plane to pass traffic. Applications on the server CPU/RAM/HD will talk to each other within the DC, which is east-west traffic, or to the outside world, which can be called north-south traffic.
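As a minimal sketch of that physical layout (device names and counts are hypothetical), a 2-stage leaf-spine fabric is simply every leaf connected to every spine, so any leaf can reach any other leaf in one spine hop:

```python
# Minimal sketch of a 2-stage leaf-spine (Clos) wiring plan.
# Device names and counts are hypothetical.

leaves = [f"leaf{i}" for i in range(1, 5)]    # one ToR per rack of servers
spines = [f"spine{i}" for i in range(1, 3)]

# Every leaf has a link to every spine.
links = [(leaf, spine) for leaf in leaves for spine in spines]

for leaf, spine in links:
    print(f"{leaf} <-> {spine}")
```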

Depending on the type of application, east-west traffic could be higher, but north-south traffic is always present.

Moving bits from a server to any other location is the network's job. These bits could be a compute-hosting virtual machine's bits or a 'serverless' cloud application's bits, but they go somewhere and they are moving. They are moved by the network layer regardless of what resides on the servers.

How many layers of protocols and software are required to provide an automated multi-tenant data center network which can connect servers, host applications and provide east-west/north-south connectivity?

In the Networking Components blog post some basic networking components were listed out in a different construct: Network Device, Protocols, Protocol Messages, Addresses, Lookup tasks, Identity Tags, Filters & Actions, Network Over Network (Overlay), Appended Information, Network + Network, Network Inside Network Device, Control and Data Plane.

In the Event-Driven Network Automation blog automation details were described.

The discussion below makes some use of the Networking Components and Event-Driven Network Automation blog posts.

At first you need addresses appended onto the payload bits to identify endpoints and exchange traffic. How many layers of addresses are required to connect the servers to each other over a fabric? In a full-mesh structure the networking layer is small and direct, and fewer addresses are required. In a Clos leaf-spine-leaf fabric, multiple layers of addresses are required.

A packet or frame, a structured unit of bits, is switched across multiple nodes. In terms of addresses, Ethernet MACs are used for Layer 2 connectivity between server NICs and ToR ports. The server could also have an IP address of its own and be performing Layer 3 communications.

One server connected to one leaf could send an IP packet to another server connected to another leaf (Server<>Leaf<>Spine<>Leaf<>Server). As part of the control plane of laying out the fabric, the leaf and spine network devices will have IP addresses of their own, which they will use to speak to each other and send control plane protocol messages. This implies that there will be two layers of IP communications: one between the network nodes themselves and one between the servers. That in turn implies the requirement to push an IP address onto another IP address in a tunnel-type structure, where from one network device to another (e.g. leaf to leaf via spine) the packet is routed based on the outer IP addresses while the inner addresses are used by the servers. Therefore some packets will require an addressing structure such as IP|Eth|IP|Eth. The IP tunnel will span from one leaf to another leaf via the spine, so the tunnel endpoints are at the leaf switches. (Server-IP <encapsulation> Leaf-IP <> via Spine <> Leaf-IP <decapsulation> Server-IP)
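A minimal sketch of that encapsulation idea, using plain Python dicts (all addresses are hypothetical): the ingress leaf pushes an outer header whose endpoints are the leaf tunnel addresses, the spine routes on the outer header only, and the egress leaf pops it to reveal the server-to-server inner packet.

```python
# Sketch of leaf-to-leaf tunnelling: outer (leaf) addresses wrap inner (server) addresses.
# All addresses are hypothetical.

def encapsulate(inner_packet, ingress_leaf_ip, egress_leaf_ip):
    """Ingress leaf: push an outer IP header pointing at the far-end leaf."""
    return {"outer_src": ingress_leaf_ip,
            "outer_dst": egress_leaf_ip,
            "payload": inner_packet}

def decapsulate(tunnel_packet):
    """Egress leaf: strip the outer header and hand back the server-to-server packet."""
    return tunnel_packet["payload"]

inner = {"src": "10.1.1.10", "dst": "10.1.2.20", "data": b"app bytes"}   # server to server
on_the_wire = encapsulate(inner, ingress_leaf_ip="192.0.2.1", egress_leaf_ip="192.0.2.2")

# The spine only ever looks at outer_src/outer_dst to route the packet.
assert decapsulate(on_the_wire) == inner
```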

We have multiple combinations of communications to deal with, in multiple layers of the networking stack: Leaf-Local L2, Leaf-Local L3, Leaf-Spine, Leaf-Spine-Leaf L2 and Leaf-Spine-Leaf L3. All this calls for multiple domains: a 'local' link layer domain, a local network layer domain, a distant network layer domain and a relatively distant link layer domain. A link layer domain could be an L2 VLAN/broadcast domain or a bridge domain, and a network layer domain could be a local VRF or a wider-spanning IP-in-IP domain-level routing instance.

A routed layer has IP addresses at its two endpoints and an Ethernet link has MAC addresses at its two endpoints. A tenant's virtual machine in a server can have both an IP address and a MAC address. There could also be a single virtual machine with IPs from multiple subnets behind the same MAC address Ethernet link. This virtual machine is an endpoint, and it is what the network layer needs to provide connectivity to. Therefore we could say that an endpoint requires at least two tables at the network device it connects to: an IP routing table and a MAC table. An ARP table is also required for inter-layer discovery. There is also the Leaf-Spine-Leaf IP-in-IP tunnel we spoke about, which adds another overlay routing table. In addition, an outer-IP-to-inner-IP socket-style mapping function will be required, which is another table (an L4-socket-style mapping of outer IP to inner IP).

Identifying where destination-address lookups happen in a network always helps in understanding the kind of networking taking place.

So a Leaf-Local L2 frame (a server sends to another server connected to the same leaf) would be switched locally using the local bridge domain/MAC table. A Leaf-Local L3 packet would be routed by the local VRF. A Leaf-Spine-Leaf packet would be mapped to the relevant far-end leaf tunnel endpoint, and a tunnel endpoint IP would be pushed onto it; it would then be tunneled/IP-routed across the spine to the destination leaf; the destination leaf would then look at the socket-style mapping table for the destination endpoint and pass the packet on to the final destination endpoint.
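Putting the tables together, here is a minimal sketch (hypothetical addresses and table contents) of the lookup order a leaf might apply: the local MAC table for Leaf-Local L2, the local VRF for Leaf-Local L3, and the endpoint-to-far-leaf mapping for anything that has to cross the spine.

```python
# Sketch of a leaf's lookup pipeline. Tables and addresses are hypothetical.

mac_table = {"aa:bb:cc:00:00:01": "port1"}            # Leaf-local L2 endpoints
vrf_routes = {"10.1.1.20": "port2"}                   # Leaf-local L3 endpoints
remote_endpoint_to_leaf = {"10.1.2.20": "192.0.2.2"}  # endpoint IP -> far-end leaf tunnel IP

def forward(dst_mac, dst_ip):
    if dst_mac in mac_table:                          # Leaf-Local L2: switch on the MAC table
        return ("local-l2", mac_table[dst_mac])
    if dst_ip in vrf_routes:                          # Leaf-Local L3: route in the local VRF
        return ("local-l3", vrf_routes[dst_ip])
    if dst_ip in remote_endpoint_to_leaf:             # Leaf-Spine-Leaf: encapsulate towards far leaf
        return ("tunnel-to-leaf", remote_endpoint_to_leaf[dst_ip])
    return ("drop-or-flood", None)

print(forward("aa:bb:cc:00:00:01", "10.1.1.10"))   # ('local-l2', 'port1')
print(forward("de:ad:be:ef:00:00", "10.1.2.20"))   # ('tunnel-to-leaf', '192.0.2.2')
```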

While the Leaf-Local communications can be handled within the network device by tables, mappings and local lookups, it is obvious that when crossing the spines and reaching for a far-end leaf there is a need for a control plane to communicate the far-end addresses and mappings: a protocol to exchange the distant leafs' addresses and mappings, which establishes the control plane for traffic to be switched and routed between leafs across the spines. There is a spine in the middle and the leafs are not directly connected, so a control plane is needed to distribute addresses and mappings.

There is a choice here.

For this Leaf-Spine-Leaf address exchange and inner/outer mappings population we could use a distributed, nuke-tolerant, internet-style packet layer protocol, OR instead use an SDN-style central controller to do the thinking and push/program the network devices with all the addresses and mappings. The devices need to be populated with far-end addresses and mappings, and both approaches achieve this goal.

Our topic is an Automated Multi-Tenant Data Center Network and the automation part of the name is supported by the SDN style.

Why?

The reason is that any distributed, nuke-tolerant, internet-style protocol inherently requires independent configuration on all networking nodes, which then enables the devices to start communicating, while an SDN controller is a single configuration point which pushes the configs onto the devices. This means that from an automation standpoint you will either be automating the configurations of hundreds of devices or automating an SDN controller. Configuring all devices in a large data center fabric independently is difficult to automate, while managing the automation of an SDN controller, or even levels of SDN controllers, is easier.
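A minimal sketch of why the controller style is easier to automate (device names, endpoint addresses and the push mechanism are all hypothetical): one place holds the endpoint-to-leaf mappings and pushes them to every leaf, instead of each device being configured and learning the mappings independently.

```python
# Sketch: an SDN-controller-style single configuration point.
# Device names, endpoint addresses and the push mechanism are hypothetical.

class Leaf:
    def __init__(self, name):
        self.name = name
        self.mappings = {}

    def install_mappings(self, mappings):
        self.mappings = mappings
        print(f"{self.name}: programmed {len(mappings)} mappings")

class Controller:
    def __init__(self, leaves):
        self.leaves = leaves              # devices under management
        self.endpoint_to_leaf = {}        # global view: endpoint IP -> leaf tunnel IP

    def learn_endpoint(self, endpoint_ip, leaf_ip):
        self.endpoint_to_leaf[endpoint_ip] = leaf_ip
        self.push()

    def push(self):
        # One automated action programs every device with the same mappings.
        for leaf in self.leaves:
            leaf.install_mappings(dict(self.endpoint_to_leaf))

fabric = [Leaf("leaf1"), Leaf("leaf2")]
ctrl = Controller(fabric)
ctrl.learn_endpoint("10.1.2.20", "192.0.2.2")
```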

This Data Center will need to speak to the outside world too.

This means that there will be a border function which provides L3 and L2 reachability to the outside world, i.e. Ethernet L2 connectivity: VLANs or bridge domains extended from a server on a leaf to a border-node leaf and onwards to an outside-world L2 construct, say an MPLS L2VPN.

Similarly, an L3 VRF extension, where a set of routes of an endpoint/server/tenant is stretched onto a border-node leaf's VRF via, say, an MP-BGP style RD/RT mechanism, and further extended onto an outside-world MPLS L3VPN via a PE-CE routing protocol (Tenant-Routes|VRF|MP-BGP|VRF|VRF-PE <> CE|Outside World).

Our topic also contains the words multi-tenant, which means that in the case of L3 multi-tenancy a legacy MPLS L3VPN style VRF/MP-BGP mechanism will be needed per tenant per VRF.
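As a rough data-model sketch of that per-tenant separation (tenant names, RD/RT values and prefixes are hypothetical), each tenant gets its own VRF with a route distinguisher and import/export route targets, which is what keeps overlapping tenant routes apart when they are carried in MP-BGP.

```python
# Sketch of per-tenant VRFs with MPLS L3VPN style RD/RT values.
# Tenant names, RDs, RTs and prefixes are hypothetical.

vrfs = {
    "tenant-red": {
        "rd": "65000:100",
        "import_rt": ["65000:100"],
        "export_rt": ["65000:100"],
        "routes": ["10.1.0.0/24"],
    },
    "tenant-blue": {
        "rd": "65000:200",
        "import_rt": ["65000:200"],
        "export_rt": ["65000:200"],
        "routes": ["10.1.0.0/24"],   # same prefix as tenant-red, kept separate by the RD
    },
}

# What would be advertised: the RD makes overlapping tenant prefixes unique.
for name, vrf in vrfs.items():
    for prefix in vrf["routes"]:
        print(f"{vrf['rd']}:{prefix}  export {vrf['export_rt']}  ({name})")
```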

A similar mechanism is required for connecting two or more such large data centers to each other, so that "Endpoint<>Leaf<>Spine<>Border-Leaf<>|Infra-Link|<>Border-Leaf<>Spine<>Leaf<>Endpoint" communications become possible. For L3, VRFs/MP-BGP can provide separation and ensure multi-tenancy for this inter-DC communication.

Where required, some border leafs will obviously connect to routers which speak eBGP to the outside world: other autonomous systems over transit and peering connections. These routers will carry the global routing table and will act as gateways to the rest of the world. The PE-CE communications mentioned above for VRF stretching can be Static/OSPF/EIGRP/BGP.

To run this automated multi-tenant data center, let's not forget an overarching orchestration software residing on top of it, providing a GUI into the wide array of options, tools, cogs and combinations that enable tenants' intra-DC, inter-DC and outside-world communications.

To gain an understanding of components that make up networks we’ll start by stating that a network is a combination of tools working together to provide connectivity to endpoints.

Let’s list the tools.

Network Device (Switch, Router and others) – This is a device which terminates multiple cables into itself, with the other end of each cable being another device. The network device interconnects multiple endpoints via its ports, on which the cables terminate.

Protocols – These are tools which provide for a coordination mechanism. This coordination mechanism is an exchange of information which makes possible the exchange of traffic.

Protocol Messages – These are messages exchanged between Protocols while they coordinate the laying of the network foundations for exchange of traffic.

Addresses – These come in many flavours and are intended to identify the source and destination of a data payload which is traversing the network. They can be layered/structured into aggregation and division pools.

Lookup – This is done on the various addresses to find the next hop: the next point to which to send the data payload so that it reaches its ultimate destination after traversing the network.

Appended Information – This is a general term which encompasses information traversing the network other than the payload and addresses. It is information and tooling put into packets for protocol operations: information inside headers other than the addresses.

Identity Tags – This is a specific class of Appended Information which provides for identity functionality during a lookup and for identification and separation of protocol functions.

Filters & Actions – These are deployed on the network devices to provide intelligent selection and resulting actions over the traversing data payload. They utilize the addresses and appended information inside the data payloads and also the headers.

Network Over Network – This is a general term for a network on top of another network for the provision of separate connectivity. A combination of another layer of protocols and addresses results in a network over a network.

Network + Network – This is a term identifying the interconnection of two or more separate networks resulting in a larger network. Also called an internetwork, it signifies one domain interconnected to another domain.

Control and Data Plane – Control Plane is the network protocols laying the network foundations and data plane is the traffic traversing the network. Control Plane enables Data Plane.

Network Inside Network Device – This is a term signifying the division of a network device to facilitate a software separation in networks. It creates separate networks inside a network device via operating system software constructs.

We can put brands on these:

OSPF/ISIS/BGP are Protocols to lay the Control Plane for IP addresses

LDP is the Protocol to lay the control plane for MPLS addresses (labels)

MAC Address / IP Address / MPLS Labels are addresses and Lookups are done on them during Data Plane operation

MPLS L2 VPN / MPLS L3 VPN are a Network Over Network function based on labels.

MP-BGP is a protocol to lay Control Plane for Network over Network (MPLS L2VPN & EVPN)

AS to AS BGP connectivity is a Network + Network function

Route Maps / Prefix Lists / AS Path Lists are part of Filters and Actions

OSPF Areas and ISIS Levels are a Network domain + Network domain layering type function

QoS Diffserv and CoS are appended information for actions and functionalities

EVPN, OTV & VXLAN are Network over a Network options. These provide a network over a network Control Plane and network over a network Data Plane.

VXLAN VNID / VLAN TAG / Route Target / Route Distinguishers / BGP Communities are Identity tags for protocol operations where they aid the control plane or data plane.

VDC / VRF / EVPN EVI are Network inside Network Device features primarily being operating system software constructs.

This is a rough approach with much simplification, but it is intended to view the various network components as tools providing functionality and working in unison to provide connectivity. This view aids in looking at the components from a design perspective.

Whether it is a Service Provider, Enterprise or Data Center / Cloud IaaS network the components interact and provide functionality.

This post focuses on the trend of Microservices and the various related terminologies and trends. At the end it lists the brands in their categories.

An application is software. It is composed of different components, the application components, which together make up the application. The difference between one application software component and another is one of separation of concerns, which is simply dividing a computer program (the application) into different sections. If the different components are somewhat independent of each other, they are termed loosely coupled.

The different components of an application communicate with each other. When they need to interact with each other they do it via interfaces. A client component does not need to know the inner workings of the other application software component and uses only the interface.

This is where the word service comes into play: what one application software component provides to another software component is called a service.

Now this application may be placed on a distributed system where its different components are located on networked computers. In terms of an application running on a distributed system, SOA or Service Oriented Architecture is where services are provided to other software components over a communications protocol across a network. This is due to the underlying hardware being networked and distributed in nature, and the application software on it being distributed across it.

In the terminology of Distributed Systems, when one of its components communicates with another component, they do this via messages. We can say that in a distributed system an application's software component sends a message to another software component to utilise its service via an interface, and that interface is also utilising a network protocol.

We now know about an Application which is a software program, its components and that services are provided by its components. We now know about Distributed Systems, its components networked together and messages being passed between them over a network. We know about applications running on distributed systems where application software components are running on components of the distributed system. We know the application software components communicate with each other via a network.

In Microservices, a distributed system component runs an application software component and provides a service. It is now a process in execution mode. So one software component is placed and running on one distributed system component, providing a service from there to other similar, independent components.

A normal process is a running software program in execution mode. In terms of processes, communications between them are Inter-Process Communications (IPCs). In Microservices, the IPCs will be network messages.

What we discussed above is the application software architecture and its transition into the distributed systems environment. When each independent software component is running as a process on a distributed system component and the inter-process communications are over a network, you have Microservices. These Microservices form an application.

Furthermore, in Microservices there is a bare minimum of centralized management of the different services, and they may be written in different programming languages and use different data storage technologies. So we can have one software component written in Go and another in NodeJS, and they will provide each other services. These services will also be provided over a network. So a Go software component can be running on one distributed system component and a NodeJS software component on another, and they will interact via the network composing the distributed system. Multiple such distributed software components providing services to each other make up a Microservices application.
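A minimal sketch of two such components talking over a network, written in Python only for brevity (the port, path and message contents are hypothetical): one component exposes a small HTTP service, and the other knows only the interface (the URL) and consumes it, which is exactly the interface-plus-network-message pattern described above.

```python
# Minimal sketch of one software component providing a service to another over HTTP.
# Port, path and message contents are hypothetical; both sides are Python only for brevity.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "service": reply to any GET with a small JSON message.
        body = json.dumps({"message": "hello from the greeting component"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 8080), GreetingService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" component knows only the interface (URL), not the internals.
with urllib.request.urlopen("http://127.0.0.1:8080/greet") as resp:
    print(json.loads(resp.read()))

server.shutdown()
```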

A container provides an environment to run a microservice component. A container is a distributed system object which can loosely be termed a distributed system hardware-plus-software service.

In terms of branding:

Amazon AWS is a Distributed Systems Provider.

EC2 is Amazon AWS’s product to provide a distributed system compute component online.

S3 is Simple Storage Service, a product for simple storage of files by Amazon AWS online.

DynamoDB is Amazon AWS's NoSQL database product, which is available as a product online.

Golang (a programming language) and NodeJS (a JavaScript runtime) are used to write backend server-side software components.

React is a JavaScript library in which frontend user-side application software components are written.

Docker is software which provides for individual container management. One container provides the environment where a software component can be executed on a distributed system.

Kubernetes and Docker Swarm manage multiple (lots of) containers deployed on distributed systems for running a distributed application. They are for container management.

RabbitMQ and Kafka work as message brokers for passing messages between microservices

RESTful HTTP APIs are also a means of inter-microservice communication.

Protocol Buffers and gRPC are means of faster inter-microservice communication and messaging.

MongoDB and Couchbase are NoSQL databases which can be run in containers and be utilised by application software components for Database purposes.

Git is an application software component version control system

Prometheus is an application (software) to be run (possibly in containers), built specifically for monitoring the health (metrics) of microservices software components.

Grafana is an application (software) to be run (possibly in containers) for the purpose of visualizing the metrics/health of microservices.

The ELK stack, which is Elasticsearch, Logstash and Kibana, is software which provides for the logging of events and their search and visualization.

https://en.wikipedia.org/wiki/Component-based_software_engineering

https://en.wikipedia.org/wiki/Event-driven_architecture

https://en.wikipedia.org/wiki/Service-oriented_architecture

http://www.d-net.research-infrastructures.eu/node/34

https://martinfowler.com/articles/microservices.html

https://en.wikipedia.org/wiki/Process_(computing)

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45406.pdf

 

Similar trends in multiple industries are apparent.

  • Telecommunications Provider e.g. AT&T
  • Networking/Internet IP,MPLS Service Providers
  • Cloud Native Iaas, PaaS, SaaS industry

Let’s see what the trend is that they have in common:

  • Telecom – AT&T ONAP’s DCAE – Data Collection, Analytics and Events
  • Networking/Internet Service Provider – Cisco/Juniper Telemetry
  • Cloud Native – Kafka and streaming events data from Microservices architectures

What they have in common is events and data production, followed by streaming of that data, followed by analytics on those events and data, resulting in near-real-time decision making.

The naming is different, the products are different and the industries are different but the production of data, its streaming and analytics is common.

The telecom industry is moving from PNFs (Physical Network Functions) to VNFs (Virtual Network Functions), which is a move from tightly coupled hardware/software devices to a more software-driven architecture.

The ISP industry is still shifting IP packets around, but ISPs are now looking for more streaming-style analytics of their devices and traffic flows, which they are calling telemetry.

The Cloud Native industry is in the pack with its Microservices based software centric application architectures.

They all have event generation in common and want to process the data and then use it in real time: real-time data streaming and processing.

Let's now dig a little deeper and start correlating the terminologies. From the products category we will take Telecom's ONAP, Networking's Cisco DNA, Cloud Native's Prometheus and Kafka, and Information Security's Splunk to analyse.

ONAP's VNF Event Stream, or VES, is the stream event producer. ONAP's logging section utilises the same ELK (Elasticsearch, Logstash and Kibana) dashboarding that is available in the AWS cloud.

Juniper's telemetry streaming utilises Google Protocol Buffers (GPB): structured messages are relayed to a performance management application. Cisco's Model-Driven Telemetry utilises the same Google Protocol Buffers for streaming data from its devices.

Cloud Native applications are Microservices-based, with patterns such as Event Sourcing and CQRS, and they require RabbitMQ/Kafka style message brokers in addition to stream processors and analytics such as the same ELK stack mentioned earlier.
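As an illustration of the common produce-stream-consume pattern, here is a minimal producer/consumer sketch using the kafka-python client. The broker address, topic name, the event fields and the choice of kafka-python itself are assumptions for illustration, not something any of the products above prescribe.

```python
# Sketch of the produce -> stream -> consume pattern with Kafka.
# Assumes a broker at localhost:9092, the kafka-python client, and a hypothetical topic name.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode(),
)

# A producer: a device, a VNF or a microservice emitting a telemetry event.
producer.send("device-telemetry", {"device": "leaf1", "if_octets": 123456})
producer.flush()

# A consumer: the analytics / stream-processing side reading the same topic.
consumer = KafkaConsumer(
    "device-telemetry",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode()),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,       # stop iterating if no new events arrive
)

for message in consumer:
    print("processing event:", message.value)
```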

Large organisations such as LinkedIn faced the problem of data deluge earlier than the rest of the world, in terms of handling, processing and analysing it in real time. This has resulted in products such as Kafka.

 

 

Links:

https://wiki.opnfv.org/display/PROJ/VNF+Event+Stream

https://wiki.onap.org/display/DW/Logging+User+Guide

 

 

What does a Site Reliability Engineer do?

 

Site Reliability Engineer is a term for the operations and administration of complex computer systems involving:

 

  • Networking,
  • Virtualized operating system environments including VM/Containers,
  • The orchestration tools for Networking/Virtualized infrastructure,
  • Applications,
  • The interactions of the above in a multi-site/multi-pop environment,
  • And utilising the above to deliver a service/product and ensuring it is working well.

 

It basically appears to be an operations role, but within a complex environment where multiple technology silos interact heavily to deliver the product to the end user. Google hires for the role of Site Reliability Engineer, and its SREs operate such a complex environment delivering Google.com/Gmail.com/Youtube.com etc. Facebook does the same.

 

Looking at a particular set of Site Reliability Engineer job advertisements, they appear to have one thing in common across diverse roles within the SRE domain:

 

  • ‘You have an ‘infrastructure as code’ approach to managing infrastructure’

 

So from the Site Reliability Engineer title we arrive at the term Infrastructure as Code.

 

Infrastructure as Code 'tools' sort of sit on top of configuration management tools like Puppet, Chef and Ansible and provide increased functionality. Terraform and AWS CloudFormation are two Infrastructure as Code tools, but what the job ads are asking for is the approach.

 

Coding is common:

  • One thing that is apparent is that when you take a look at an Ansible playbook YAML .yml file, a Terraform configuration .tf file, a Chef recipe .rb file, a Puppet manifest .pp file or an AWS CloudFormation template, they all look code-like. In fact, they are all code, but at a plane where the code is not intended to utilize the processor, memory and hard disk of a single machine in a setup.exe-style resulting file to deploy on a single computer. They are coded or code-like data expressions which translate into the deployment, configuration and orchestration of more complicated computing systems. They are code-like expressions which, for example, deploy AWS products, which are themselves 'infrastructure as a service' public cloud systems (see the sketch below).
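As a toy illustration of that idea (not Terraform or CloudFormation, just the same "code that deploys infrastructure" notion expressed with the AWS boto3 SDK; the AMI ID, region, instance type and tag values are hypothetical), the few lines below describe and create a piece of infrastructure rather than a program that runs on one machine:

```python
# Toy illustration of infrastructure expressed as code: a script that deploys
# an AWS EC2 instance. AMI ID, region, instance type and tags are hypothetical values.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-web-1"}],
    }],
)

print("launched:", instances[0].id)
```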

 

From this it appears that Infrastructure as Code is a term signifying another layer of abstraction.

There are levels of abstraction, where the levels can be:

  • solid state physics, silicon/CPU, memory, hard disk hardware.
  • then 0’s and 1’s & bits on top of these at the next level
  • then integers & strings utilizing the above layer
  • then arithmetic operations and string manipulations on top of above
  • then programs and software applications running on computers/devices,
  • then interconnected computers
  • then distributed systems composed of interconnected computers/devices
  • then Public Cloud Infrastructure as a Service
  • then an Application running on this silicon, CPU, memory, hard disk, 0's, 1's, bits, integers, strings, arithmetic operations, string manipulations, individual programs/software, interconnected infrastructure/computers/servers/routers/switches, public cloud.

By inference, the Infrastructure as Code approach presents and preserves information that is relevant to our end application plane and environment, and abstracts away information that is not relevant in our application's environment.

The internet is a good example of multiple systems on top of other systems.

It looks like even Google and Facebook still need humans to operate their systems. They will not be flying through them blind like an aeroplane in clouds. They will know the layers and the systems in place. They will navigate from symptoms to root cause and then codify/rectify and adjust for continual optimal service.

Moving on, three job ads for Site Reliability Engineer are given below.

Infrastructure as Code is common to all of them. Let's see the rest.

Job Ad:

  1. Site Reliability Engineer | Data Stores | Redis & Kafka

 

Our Tech Stack across Site Reliability as a whole:

• Data Analytics software including Kafka and Redis
• Open Source technologies (We constantly look to innovate and adopt)
• Amazon Web Services – AWS, and a load of services
• Coding with React, NodeJS and Python
• Couchbase, Kubernetes, ElasticSearch & Microservices Infrastructure
• Linux Operating systems, we look for passion
• Infrastructure as Code & Automate everything are a couple of our mottos

Job Ad:

  1. Site Reliability Engineers | Multiple Roles | Golang | AWS | React

The Tech Stack you will be getting your hands dirty with:

    • Open Source technologies (We constantly look to innovate and adopt)
    • Amazon Web Services – AWS, and a load of services
    • Coding with React, NodeJS and Python
    • Couchbase, Kubernetes, ElasticSearch & Microservices Infrastructure
    • Linux Operating systems, we look for passion
    • Infrastructure as Code & Automate everything are a couple of our mottos

 

Job Ad:

  1. Site Reliability Engineer | Edge Computing | AWS | Networking

 

Our Tech Stack across Site Reliability as a whole:

• Networking – Load balancers, Proxies, Routing, DC, AWS
• Open Source technologies (We constantly look to innovate and adopt)
• Amazon Web Services – AWS, and a load of services
• Coding with React, NodeJS and Python
• Couchbase, Kubernetes, ElasticSearch & Microservices Infrastructure
• Linux Operating systems, we look for passion
• Infrastructure as Code & Automate everything are a couple of our mottos

 

The three SRE roles are diverse, and they are geared towards different parts of the stack which runs the end application: an SRE tilting towards networking, an SRE tilting towards data/stream processing, and an SRE tilting towards development (front-end/back-end).

 

The below are common to all three:

 

  • React, NodeJS and Python
  • Couchbase, Kubernetes, ElasticSearch & Microservices Infrastructure
  • Linux/AWS

 

The below varies amongst them:

 

  • SRE Networking tilted role – Edge Computing, Load balancers, Proxies, Routing, DC, AWS
  • SRE Data/Stream Processing tilted role – Kafka, Redis
  • SRE Dev tilted role – Golang

 

So what is this system achieving and what is it composed of? How do the pieces interact and what do the multiple SREs do?

 

In terms of programming languages, we have Golang, React, NodeJS and Python.

In terms of hardware, we have AWS and edge computing PoPs/nodes/devices.

In terms of data stores and streaming, we have Kafka and Redis.

In terms of container management, there is Kubernetes.

In terms of data retrieval/search and possibly analytics, there is ElasticSearch.

In terms of databases, there is Couchbase.

 

An SRE is not an end-application software developer, so the tools listed above are part of the system to be run. This will be done with an Infrastructure as Code approach, programming the system for optimal operations.

 

So let's now try to put the clues in the job descriptions together.

 

  • React & NodeJS are Javascript frameworks with React being the User Interface/FrontEnd (used by Facebook UI) and NodeJS being the Server/BackendEnd for Scalable Data I/O. Python can be used as for programming services at various locations. Golang is also used in the the Backend Serverside providing for its concurrency feature for applications/services.
  • Redis can be used to store application state information. In-memory fast, scalable and distributed. It is a key value store provider for application state cache-like.
  • Kafka is a distributed data streaming platform and can be stated to be in the middle. Producers producing data and consumers using data and stream processors processing it are connected to Kafka clusters. It can be used for event streaming/aggregation.
  • With no other stream processing engine present in the Job description Kafka with Kafka Streams can be stated to provide for stream processing as well.
  • ElasticSearch can be used for indexing and search. Data can be copied in via Kafka connector APIs and then indexed. Kibana is not listed but it might have skipped mentioning and can be used for the visualization and dashboarding.
  • Couchbase can be used as a NoSQL JSON-style distributed database as an external store for storage of logs/events (documents). It can take in data and deliver it via its Kafka connector.
  • Kubernetes manages the containers furbishing the application environment.
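For example, the Redis piece of that stack could be used as a small key-value state cache roughly like the sketch below. The redis-py client, a local Redis server, the key names and the session data are all assumptions for illustration, not details from the job ads.

```python
# Sketch of Redis used as a fast key-value store for application state.
# Assumes a local Redis server, the redis-py client, and hypothetical key names.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

# Store a piece of application state (e.g. a user session) with a TTL.
session = {"user_id": 42, "cart_items": 3}
cache.set("session:42", json.dumps(session), ex=1800)   # expire after 30 minutes

# Any other service instance can read the same state back.
raw = cache.get("session:42")
if raw is not None:
    print("session state:", json.loads(raw))
```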

 

It looks to be a full Cloud Native environment which needs to be kept up and running optimally with continued service.

 

Part of this environment is the networking aspect. This includes the listed edge computing component, which means this high-performance cloud native application also has near-user-location edge devices within its architecture.

 

Geolocation routing and CDNs are the tools used to decrease application latency. AWS Availability Zones can be considered multi-site replicated PoPs. Edge networking nodes will also branch off as required and can be mini PoPs. Depending on the size of the user base being serviced, an edge PoP node might scale into being a small DC.

 

Branching within the networking domain is the use of proxies: forward, reverse and sidecar if required.

 

An increase in application demand can result in container scaling, which in turn can require on-demand load balancing and proxying. One such tool is the F5 Application Services Proxy, which from a networking perspective is a proxy, but it integrates with Kubernetes and can be used for an infrastructure-as-code deployment. F5's Application Services Proxy is itself a Node.js application, but it is middleware here.

 

I decided to set up a simple firewall on my Vyatta VM router to block pings from another VM host (Ubuntu 12.04). The network is entirely within the 192.168.0.0/24 range. The Ubuntu host IP is 192.168.0.111.

The Vyatta interface IP is 192.168.0.108.

Ping between them works prior to setting up the firewall, and gives destination port unreachable as soon as the firewall is enabled via commit.

The configuration steps are simple: define a firewall rule set whose rule rejects ICMP, and then apply the firewall to an interface, which in this case is eth0. After the above configuration is in place, enter commit to apply it and ping will stop working.

We can also see the statistics in the firewall section changing, using show firewall statistics, show firewall name and show firewall name *** statistics.

In my setup, two steps were needed to set up DNS forwarding on the Vyatta router. The first is to configure an interface as a listen-on interface; I configured this to be eth0, which is bridged to my Wifi router. The second step is to configure a name server; I set this to the default gateway of the network, i.e. the address of the Wifi router.

Once done, I was able to ping Google from my Vyatta VM.


Before an interface can be configured within the Vyatta VM it needs to be added in the VMware Player settings. If this is not done it will not appear under the Vyatta interfaces. I added my four interfaces via the settings panel.

 

Adding an interface is simple: select Network Adapter and click the +Add sign at the bottom. Bridged interfaces take their IPs directly from your guest's DHCP server, NAT interfaces NAT their addresses from the vmnet interfaces, while a Host Only interface's address can be set manually from within Vyatta.

I have configured all interfaces to take their IPs from DHCP servers instead of using static IPs.

 

There are a number of steps needed to set up your own mini lab with the Brocade Vyatta router. Firstly, instead of re-inventing the wheel on my blog, please follow all the instructions given in the blog post below:

http://vbyron.com/blog/brocade-vyatta-5400-virtual-router-vsphere/

Instead of uploading to a datastore and using vCenter, you can use the free VMware Player that VMware offers.

After completing the installation steps you should be able to SSH into the Vyatta VM and make configuration changes. One additional step I would recommend is to run the command save after running commit at the end of the instruction set.

Cloud Computing Networking is the new (not so new anymore!) buzzword in the industry these days. Having gained some knowledge about the field, I would call it a mix of IT system administration, networking and virtualization. I have compiled a list of links where you can start to get into what it is all about. Here it is:

NIST Definition of Cloud Computing: http://www.nist.gov/itl/csd/cloud-102511.cfm

Ivan Pepelnjak’s Webinars, Screencasts & Blog: http://blog.ioshints.info/ & http://www.ipspace.net/Webinars (They’re definitely worth the investment)

Packet Pushers Podcast: http://packetpushers.net/

CiscoLive365.com: https://www.ciscolive365.com Requires registration. Video and PDF Tutorials from Cisco Experts

http://techfieldday.com: Insightful videos of discussion amongst experts of the field. (See Past events)

Of the above only Ivan Pepelnjak’s Webinars require an investment for yearly or roadmap subscription (Again, they’re worth it). The rest are freely available.

There are Research and Analysis Firms like (Alphabetical): ABI Research, Business Monitor International, Forrester Research, Gartner, Heavy Reading, IDC Research, IHS Publications, Infonetics Research, Informa, Light Reading, Yankee Group etc which are also covering the topic. They provide a wealth of information but cost ‘a lot’.

As a starting point I would recommend:

1. Reading the NIST Definition

2. Listening to some related Packet Pushers Podcasts.

3. Ivan’s Introduction to Virtualized Networking and Cloud Computing related Webinars

Enjoy! & be sure to have fun while Breaking into the field of Cloud Computing Networking.