
Infrastructure as Code has two main parts to it. The first is running the code itself and executing the change on the cloud platform. The second is maintaining version control of the code, i.e. managing multiple changes by multiple people to the code.

For executing the change one could use Ansible or Terraform.

From a high level you simply run an Ansible playbook, having changed the variables files, to make changes to the environment. One could do this without getting too deep into how Ansible works. Inside Ansible there are roles and tasks which divide the execution of the playbooks into a structured format so that writing complex playbooks is easier.
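To make this concrete, here is a minimal, hedged sketch of a variables file and playbook for the firewall-rule use case discussed below. The file layout, resource names and the rule itself are hypothetical, and it assumes the azure.azcollection modules are installed:

```yaml
# vars/firewalls.yml -- the shared variables file the team edits (hypothetical layout)
firewall_rules:
  - name: allow-app-subnet
    priority: 110
    direction: Inbound
    access: Allow
    protocol: Tcp
    destination_port_range: 443
    source_address_prefix: 10.20.30.0/24
```

```yaml
# playbook.yml -- applies whatever the variables file says onto the cloud platform
- hosts: localhost
  connection: local
  vars_files:
    - vars/firewalls.yml
  tasks:
    - name: Apply firewall rules to the network security group
      azure.azcollection.azure_rm_securitygroup:
        resource_group: prod-rg   # hypothetical resource group
        name: prod-nsg            # hypothetical NSG name
        rules: "{{ firewall_rules }}"
```

Running ansible-playbook playbook.yml after editing the variables file is then the whole execution story from the operator's point of view.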

The second part, version control of the code, is required because there are multiple people in a team making multiple changes to the Infra as Code variables. For example, one person could be adding a firewall rule subnet to one firewall while another person is adding a rule to another firewall. If you imagine that the firewall rule subnets for all the firewalls live in one variables file, then you need version control to coordinate these two changes to the file. This version control is mostly done with Git, usually hosted on a platform like Bitbucket; these are among the best-known tools for maintaining software code versioning.

This is much like what any large software system's build and maintenance requires: multiple software developers are writing and changing code in all sorts of files at the same time, so you need a version control system to maintain consistency. These systems have a push and pull mechanism: you make a change locally, push it to the shared repository, and others pull the change from there. There are also peer review mechanisms where team members review your code differences before your change is allowed into the main repository.
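A hypothetical Bitbucket-style flow for the firewall change above might look like this (branch and file names are made up):

```
git checkout -b add-dmz-subnet        # work on your own branch
vi vars/firewalls.yml                 # edit the shared variables file
git add vars/firewalls.yml
git commit -m "Add DMZ subnet to app firewall"
git push origin add-dmz-subnet        # then open a pull request for peer review
```

Once the reviewers approve the diff, the branch is merged and everyone else pulls the updated variables file.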

To conclude, imagine you have 30 Azure VNETs (network VRFs) and 30 Azure Firewalls in your product deployment. As people regularly ask you to make firewall rule changes and to add and delete subnets, it requires either manually going to the Azure web portal and making the changes there, or using Infrastructure as Code and making the changes via Ansible playbooks and git/Bitbucket variables files.

This post will cover multicloud networking integration between multiple public clouds and an on-prem network. Imagine four clouds: three being AWS, Azure and GCP, and the fourth being the on-prem private cloud, which is basically a Data Center network.

All these four clouds will be glued together somehow, and that glueing is the multicloud scenario. The basic requirement would be to have switching, routing, firewalling and load balancing equipment present within the glueing network between the four clouds.

Switching would be present to trunk layer 2 between IP endpoints. Routing, and routing protocols like BGP, would be there to exchange IP endpoint reachability information, populate routing tables and derive the next hops.

IP planning would be involved, in the sense that the on-prem network and the three public clouds don't have duplicate, conflicting IP address spaces, and there aren't two endpoints in the network generating packets with the same source IP address.

In essence, if there is a single routing table present in your environment which has routes for all three public cloud endpoint subnets and also the routes for the on-prem DC network, then you have multicloud established.
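As an illustration, such a glueing routing table might look like the sketch below; all prefixes, next hops and the underlying services are made up:

```
10.10.0.0/16  via 169.254.10.2   # AWS VPC subnets over Direct Connect
10.20.0.0/16  via 169.254.20.2   # Azure VNET subnets over ExpressRoute
10.30.0.0/16  via 169.254.30.2   # GCP VPC subnets over Cloud Interconnect
10.40.0.0/16  via 10.40.0.1      # On-Prem DC subnets
```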

Wherever this routing table exists, from that location there will be layer 2 switching links and trunks into the three clouds and on-prem, until the trunks reach the other routing tables within the clouds, be it the Azure VNET routing table, the AWS/GCP VPC routing tables or the on-prem DC routing tables.

This multi-cloud environment is somewhat similar to the large Service Provider public internet networks we are all familiar with, where each large SP can be considered a cloud in itself, with routes being exchanged with the other large SPs over BGP, i.e. similar to cloud routes.

The SP environments are mostly used for traffic passing through, whereas in the multi-cloud enterprise environment there are data sources and data sinks in either the on-prem network or the public clouds. There is also the difference that the glueing network in the middle will have firewalling too.

Let's say there is a new connection required to a VPC subnet in an AWS region. First, the layer 2 would be provisioned over AWS Direct Connect, either directly with AWS or with partners like Megaport. In the majority of cases the on-prem device which connects to the Direct Connect service will be provisioned with a new VLAN.

Once this is done, this layer 2 will be trunked to the on-prem device where the IP endpoint is provisioned and the routing table exists. This could be a firewall or a router. This is where the packets' next hops will be decided.
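A hedged sketch of that hub-side provisioning in generic IOS-style syntax; the VLAN ID, addresses and ASNs are made up, and in reality they come from the Direct Connect order:

```
interface GigabitEthernet0/1.150
 description New Direct Connect VLAN towards AWS
 encapsulation dot1Q 150
 ip address 169.254.10.1 255.255.255.252
!
router bgp 65000
 neighbor 169.254.10.2 remote-as 64512
 address-family ipv4
  neighbor 169.254.10.2 activate
```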

On-prem firewall filtering is in the path, where the different DMZ regions, different IP subnets and L4 ports are allowed or disallowed to communicate with each other. If the on-prem device holding the routing table with the multicloud routes is a firewall, things are simpler in the sense that the firewall filters are present on the same device and the different clouds are treated as different DMZ zones.

This multicloud networking scenario is a routing environment which has multiple routing domains as spokes linked via a hub site. This hub site is the on-prem glueing routing table. Firewalling capability would be added within this environment so as to be able to govern and allow/disallow traffic between these domains. Another addition could be a load balancer within the glueing on-prem environment.

This load balancer would spray traffic onto either on-prem DC subnet IP endpoint servers or onto the public cloud subnets housing cloud servers. This means there will be public-facing IPs which receive the traffic; the traffic is NATed onto private IPs and then load balanced onto the multiple server endpoints, be they in the public clouds or in the on-prem DC.

So the load balancer would have front-end IP to server IP bindings pointing towards either a public cloud endpoint or an on-prem endpoint. This means the load balancer also connects to the glueing routing table entity to send and receive traffic to the server IPs.
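A minimal sketch of such a binding in HAProxy-style configuration; the VIP and server addresses are made up, with one backend server on-prem and one in a cloud subnet:

```
frontend app_vip
    bind 203.0.113.10:443
    default_backend app_servers

backend app_servers
    server onprem1 10.40.1.10:443 check   # on-prem DC server
    server cloud1  10.10.1.10:443 check   # server in a public cloud subnet
```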

This mix of route, switch, firewall, load balancer is an example of a typical multicloud network connecting multiple public clouds.

Habib

As fresh Pakistani engineers start leaving their country on Washington Accord visas one wonders whether back home Digital Policies are being framed which could be sealing their jobless fates.

Let’s check the numbers. If half of Pakistanis generate only 5 MB of data in one day on a government-run Digital Pakistan, it would amount to 500 million MB in a day. That is half a petabyte per day, and it will only keep growing. All this data, its processing and its related networking will possibly run on equipment which will only add to the import bill if Pakistan doesn’t manufacture its own servers. It would also traverse imported networking routers and switches, which would add to the import bill if Pakistan doesn’t manufacture its own network equipment. All of this would also be put in data centers which could be using racks and cabling that are possibly all imported.

How many jobs will imported servers, imported switches, imported routers, imported racks, imported DC HVAC and imported Data Center cabling produce? And what will the import bill of these Digital Pakistan backend items be?

Another aspect of these imported items is their lack of cybersecurity from a national security perspective. If it’s imported, plug-and-configure only, with unknown hardware and unknown software, it has to be considered a black box and totally insecure in terms of cybersecurity.

A further aspect of these imported items is that each comes with support contracts in case it fails or has a problem. These are very expensive support agreements with the manufacturers and will add to the running costs and yearly import bills.

Now consider that a while back the aeronautical complex in Risalpur launched its own tablet, the PAC PAD Takhti 7 (https://en.m.wikipedia.org/wiki/PAC-PAD_Takhti_7). How did that happen, and why can’t we make our own Digital Pakistan equipment? How is it possible that Pakistan can make parts of the JF-17 Thunder, indigenously manufacture multiple types of missiles and build a nuclear bomb, but not make its own servers, routers, switches, DC HVAC and DC cabling?

Much of this IT equipment is now open sourced: servers, routers and switches under OCP (the Open Compute Project), plus MIPS Open and multiple open source network operating systems. Positive results are really possible if a solid effort is made at local manufacturing. At the very least, cybersecurity mandates that the hardware assembly, the software assembly and their system integration are carried out within Pakistan. This would create jobs and reduce the import bill too.

Let’s hope for the best.

This post seeks to distinguish between the multiple aspects and phases of networking projects. Network Architecture and Network Design are the phases of a networking project carried out first. Then comes the Project Implementation phase, along with configuration by Network Engineers.

Some experts include an Analysis phase as part of, or before, the Network Architecture phase. The concept is that an analysis first needs to be done of the flows expected from the new network.

Before Network Architecture, the Analysis phase consists of gathering the user requirements, application requirements, application types, performance requirements, bandwidth requirements, delay requirements etc. After gathering these requirements a Customer Requirements Document (CRD) can be made, consisting of all the expectations and requirements from the network. This document will assist with project management throughout the network life cycle, and for sufficiently large projects it's a good exercise.

Once the requirements are gathered, a Flow Analysis can be done to identify the flows required from the network. Data sources and data sinks, critical flows and per-application flows etc. are analyzed as part of the Flow Analysis exercise.

Once the requirements and flows are known, this can lead to decisions regarding the Network Architecture. The term Network Architecture is generally used together with Network Design as one, but according to one definition they are distinguished: the architecture consists of the technological architecture, while the design consists of the specific networking devices and vendors selected for the architecture to be implemented on the ground. This means, for example, that the Network Architecture will deal with whether to use OSPF or ISIS and how to use them, while the Network Design will cover which specific vendor router to use. They are closely linked.

Once the flows are known, it can be discussed what the architecture should be. This consists primarily of deciding the protocols, the addressing and the routing architecture which can be used to facilitate the required flows. Once it is decided which network technologies to use for the flows (such as OSPF, ISIS, MPLS, L2VPN, L3VPN, IPSec, BGP, Public Internet, VXLAN, EVPN, Ethernet etc.), a diagram can be made of the architecture.

Multiple iterations and permutations of the various architectures will come forward from the discussions over what the architecture could be to facilitate all the flows and provide a resilient network. For each of the protocols listed above, and any other to be used, the knobs available in each can be discussed in detail. It can be discussed and decided how combinations of multiple protocols will be used to meet all the flows and the requirements from the network. If there are cloud connectivity requirements, it will be discussed how (which protocol) and where to connect to the cloud. Once an architecture is decided, the protocols are selected and the tools within the protocols to be used are listed, they can be summed up in a document and in diagrams.

After this phase comes the Design decisions phase. This is close to the architecture phase but this is where the vendor of that OSPF router is selected. This is where the specific router is selected from the multiple router offerings available from the selected vendor. Device vendor selection and specific device selection is a task of its own and is a separate effort in networking projects.

Also as part of the Design, it will be decided which Service Provider to use for Internet and WAN links, and which service offering will be used from the SP vendor. If the application and system involve public cloud use (including hybrid on-prem), then it will be decided which specific connectivity mechanism and location the cloud will connect at. Will it be IPSec over the Internet or over Direct Connect, and where and how? Will it be the biggest MPLS VPN provider on the market or a smaller one? Will it be the biggest BGP Internet transit provider or a smaller one?

Once the requirements are known; once the flows are known; once the protocols and architecture are known; once the device vendors, device types and SP offerings are known and all of these are selected, then comes the implementation phase.

Engineering is a broad term which can encompass all of the above and more, but as things stand here we can say that a Network Engineer, as part of the engineering phase, will configure and deploy the devices, the WAN links, the Internet links, the cloud connectivity VPNs and the interconnections in the network. This network engineering implementation effort comes after the Requirements/Flows/Architecture/Design phases, as it is an effort on the ground and on site to implement the network and make things run. Up until this phase all the previous phases were on paper; this one is practical work on the ground.

The previous Requirements/Flows/Protocols/Architecture/Design phases, and even the initial aspects of the engineering phase, can be done in the office in meeting rooms. The initial aspects of the engineering phase, consisting of the configurations and parameters to be used, can also be decided before going out into the field. Once on-the-ground, on-site implementation starts, it is an effort of its own and can be considered Project Deployment and Project Implementation. It entails device delivery, WAN link delivery, device power-on, WAN link testing, Internet link testing, cloud VPN delivery, configuration and testing etc. This is a phase of its own, and it is an effort more akin to technical project management, as well as on-the-ground project coordination, because of its physical, geographical and on-site implementation aspects.

Depending on the type of project the implementation phase can consist of outage windows and maintenance windows and a lot of coordination to implement the new devices and new links.

Hence we can say that a networking project consists of separate requirements gathering, flow analysis, architecture, design and implementation phases. This means that a networking project can be divided into multiple smaller projects, each consisting of the above phases. Each phase also requires a skill set of its own. For example, the requirements, flow analysis, architecture and design phases are generally handled by Network Architects, Solution Architects and Network Design Engineers. The configuration and deployment aspects are handled more by Network Engineers, and the project implementation and coordination efforts are handled by Project Managers.

Multiple such large-scale projects, each having all these phases going on at various levels simultaneously, would be run under a Program, given that the organization is sufficiently large and that there are multiple streams of such projects being carried out.

I hope you enjoyed the good read.

Happy networking.

Habib

Information is present in computing platforms in two forms.

– Bits that are stored
– Bits that are traveling and transitioning

Securing both the bits that are stored and the bits that are traveling and transitioning is the task at hand.

These two forms present their own challenges, but the bits that are traveling and transitioning, i.e. changing forms within the computing platforms, have acquired special attention. This is due to the prevalent, pervasive communications using information technology computing platforms within society and businesses. When bits transition and travel they are also stored and retrieved from storage, so securing both is important.

The only mystery surrounding the field of security is the presence of the many interaction surfaces between hardware layers and software layers through which the transitions and traveling of bits occur. By analogy, from seeing text on the screen with one's eyes, to thinking and considering it, to thereafter editing it via one's hands, there is machinery working within the human body which operates without us contemplating it. There are interaction surfaces within the body as well, with the muscular, neural and skeletal systems, to name a few, working together.

Within computing platforms, as the bits transition back and forth within one component, i.e. one isolated CPU, RAM, hard disk, operating system and application software, they present one security challenge. When, instead of isolation, the bits travel between two such computing systems, they present a different set of challenges. When there exists industrial-scale, constant, consistent, ongoing back and forth travel and transitioning, within milliseconds, over large geographies, between hundreds and thousands of components of various types, it presents a completely different set of challenges.

Interaction surfaces are where bits change hands between subsystems. For example, bits changing hands between the operating system and an application running on it, or bits changing hands between one PC and another over a network. An interaction surface is where one subsystem's surface interacts with another subsystem's surface within the larger system and bits flow. As the field of information technology and computing has evolved and progressed, the number and types of subsystems, their surfaces and their interactions have increased greatly, so much so that securing them has become complicated. Wholesome security is therefore achieved when, every time bits change hands, i.e. transition and travel, the interaction is secure: the storage at each end of the change of hands is secure, and the medium of exchange is secure.

Now, it is simple to state in plain English that when one subsystem interacts with another and bits change hands, the storage points at each end and the medium used for the interaction and travel should be secure. But at real time scales and geographical scales, the sheer number and types of subsystems, storage locations and exchange mediums is so large that encompassing all of them becomes difficult.

Another incision into the security domain is cut deep into the system when the human-computer interaction surface appears at various locations and in various forms. This increases the complexity of the whole security domain. The bit-to-human interaction surface also needs to be kept secure at each interaction, at each geographical location, every time.

Furthermore, another aspect is when one secure system under the ownership of one entity interacts with another system owned by another entity. This is a point where bits change hands between different owners. The time and location of such an interaction surface between two separate ownerships also increases complexity. As your bits are stored under the ownership of another entity and accessed and retrieved by other people, a whole system of management is required for such inter-ownership bit storage and bit travel interaction surfaces.

I guess a chart showing the whole variety of interaction surfaces within computing would demystify security. The reason is that each entry in the chart, i.e. each interaction surface, would simply be mapped to the precaution and action required to secure it. Each type of interaction surface would require a security precaution and an actionable item within the security framework. A toy version of such a mapping is sketched after the list below.

Be it an interaction surface where bits are:
– stored in hardware
– being processed by one set of software
– within one computer
– on a server
– in an application
– traveling over a network
– interacting with humans
– being exchanged between different humans
– being exchanged between different entities
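As a toy illustration of that chart idea, here is a sketch in Python; both the surface names and the mapped controls are illustrative assumptions, not an authoritative framework:

```python
# Toy mapping of interaction surfaces to indicative security precautions.
SURFACE_TO_CONTROL = {
    "bits stored in hardware":               "disk and firmware encryption, secure boot",
    "bits processed by one set of software": "process isolation, code signing",
    "bits within one computer":              "OS hardening, access control",
    "bits on a server":                      "patching, least-privilege accounts",
    "bits in an application":                "input validation, secure coding",
    "bits traveling over a network":         "TLS/IPSec encryption in transit",
    "bits interacting with humans":          "authentication, awareness training",
    "bits exchanged between different entities": "contracts, audits, federated identity",
}

for surface, control in SURFACE_TO_CONTROL.items():
    print(f"{surface:45} -> {control}")
```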

Providing Layer 2 VPN and Layer 3 VPN services to enterprises has been a long-standing requirement on Service Providers. Similarly, Data Center networks need to provide a Layer 2/3 overlay facility to the applications being hosted.

EVPN is a new control plane protocol to achieve the above. This means it coordinates the distribution of the IP and MAC addresses of endpoints over another network; it has its own protocol messages to provide an endpoint network address distribution mechanism. In the data plane, traffic will be switched via MPLS label next-hop lookups or IP next-hop lookups.

To provide a new control plane with new protocol messages providing new features, BGP has been used: it is BGP Update messages which act as the carrier for the EVPN messages. BGP connectivity is established first, and the messages then exchanged over it carry the EVPN-specific information.
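As a hedged sketch of what enabling this looks like, in FRR-style configuration (the neighbor address and ASN are made up):

```
router bgp 65001
 neighbor 10.0.0.2 remote-as 65001
 address-family l2vpn evpn
  neighbor 10.0.0.2 activate
  advertise-all-vni
```

The BGP session itself is ordinary; activating the l2vpn evpn address family is what allows the EVPN routes to be carried inside BGP Updates.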

The physical layer topology can be a leaf-spine DC Clos fabric or a simple Distribution/Core setup. The links between the nodes will be Ethernet links.

One aspect of EVPN is that the terms Underlay and Overlay are now used. The underlay represents the underlying protocols on top of which EVPN runs: the IGP (OSPF, ISIS or BGP) and MPLS (LDP/SR). The underlay also includes the physical Clos or Core/Distribution topology, which has high redundancy built into it using fabric links and LACP/LAGs. The overlay is the BGP EVPN virtual topology itself, which uses the underlay network to build a virtual network on top. It is the part of the network which relates to providing tenant or VPN endpoint reachability, i.e. MAC address or VPN IP distribution.

EVPN is a new protocol, and if you look at the previous protocols there is little mechanism to provide all-active multihoming capability. This refers to one CE being connected via two links to two PEs, with both links active and providing a traffic path to the far end via ECMP and multipathing. Multi-chassis LAG has been one option for this, but it is proprietary per vendor and imposes particular inter-chassis link requirements and limits. Ingress PE to multiple egress PE per-flow load balancing using BGP multipathing is also newly enabled by EVPN.

There is also little mechanism in the previous generation of protocols to provide efficient fabric bandwidth utilization for tenant/private networks over meshed-style links. Previous protocols provided single-active, single paths and required LDP sessions and tunnels for a full mesh over a fabric. MAC learning in BGP over the underlay provides this in EVPN.

Similarly, there is no mechanism in them to provide workload (VM) placement flexibility and mobility across a fabric. EVPN provides this via the Distributed Anycast Gateway.

 

I attended the Amazon Network Development Engineer tech talk held in Sydney yesterday. While fishing for future Network Development Engineers, Amazon gave a short presentation on their network from a DC and DCI/WAN perspective.

It was a good talk and the interaction with the Network Development Engineers afterwards was insightful. A lot of their work is circling around Automation and Scripting. This is also obvious from the Job title and the Job Descriptions for the role advertisements.

This post focuses on the trend of Microservices and the various related terminologies and trends. At the end it lists the brands in their categories.

An application is software. It is composed of different components. These are the application components. Together they make up the application. The difference between one application software component and another application software component is one of separation of concerns. This is simply dividing a computer program (the application) into different sections. If the different components are somewhat independent of each other they are termed loosely coupled.

The different components of an application communicate with each other. When they need to interact with each other they do it via interfaces. A client component does not need to know the inner workings of the other application software component and uses only the interface.

This is where the word service comes into play where what one application software component provides to another software component is called a service.

Now this application may be placed on a distributed system where its different components are located on networked computers. In terms of an application running on a distributed system, SOA or Service Oriented Architecture is where services are provided to other software components over a communications protocol, over a network. This is due to the underlying hardware being networked and distributed in nature, and the application software on it being distributed as well.

In the terminology of Distributed Systems, when one of its components communicates with another component, they do this via messages. We can say that in a distributed system, an application's software component sends a message to another software component to utilise its service via an interface, and that interface also utilises a network protocol.

We now know about an Application which is a software program, its components and that services are provided by its components. We now know about Distributed Systems, its components networked together and messages being passed between them over a network. We know about applications running on distributed systems where application software components are running on components of the distributed system. We know the application software components communicate with each other via a network.

In Microservices, a distributed system's component runs an application's software component and provides a service. It is now a process in execution mode. So one software component is placed and running on one distributed system component, providing a service from there to other similar, independent components.

A normal process is a software program in execution mode. Communications between processes are Inter Process Communications (IPCs). In Microservices, the IPCs are network messages.

What we discussed earlier is the application software architecture and its transition into the distributed systems environment. When each independent software component is running as a process on a distributed system's components, and the inter-process communications are over a network, you have Microservices. These Microservices form an application.

Furthermore, in Microservices there is a bare minimum of centralized management of the different services, and they may be written in different programming languages and use different data storage technologies. So we can have one software component written in Go and another in NodeJS, and they will provide each other services over a network. A Go software component can be running on one distributed system component and a NodeJS software component on another, and they will interact via the network composing the distributed system. Multiple such distributed software components providing services to each other make up a Microservices application.
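To make this concrete, here is a minimal sketch of two such components talking over a network, using only the Python standard library; the service name, port and JSON shape are made up:

```python
# service.py -- one microservice component offering a "price" service over HTTP.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceService(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service this component provides: return a price for the requested item.
        payload = json.dumps({"item": self.path.strip("/"), "price": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PriceService).serve_forever()
```

```python
# client.py -- another component consuming that service; the IPC is a network message.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8080/widget") as resp:
    print(json.load(resp))  # e.g. {'item': 'widget', 'price': 42}
```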

A container provides an environment to run a microservice component. A container is a distributed system object which can loosely be termed a service of the distributed system's hardware and software components.

In terms of branding:

Amazon AWS is a Distributed Systems Provider.

EC2 is Amazon AWS’s product to provide a distributed system compute component online.

S3 is Simple Storage Service, a product for simple storage of files by Amazon AWS online.

DynamoDB is Amazon AWS’s NoSQL database, which is available as a product online.

Golang is a programming language and NodeJS is a JavaScript runtime; backend server-side software components are written in them.

React is a JavaScript library with which frontend user-side application software components are written.

Docker is software which provides for individual container management. One container provides the environment where a software component can be executed on a distributed system.
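For instance, a hypothetical Dockerfile packaging the service.py sketch from earlier into a container:

```
FROM python:3.12-slim
COPY service.py /app/service.py
EXPOSE 8080
CMD ["python", "/app/service.py"]
```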

Kubernetes and Docker Swarm manage multiple (lots of) containers deployed on distributed systems for running a distributed application. They are for container management.

RabbitMQ and Kafka work as message brokers for passing messages between microservices.

RESTful HTTP APIs are also a means of inter-microservice communication.

Protocol Buffers and gRPC are means of faster inter-microservice communication messaging.

MongoDB and Couchbase are NoSQL databases which can be run in containers and be utilised by application software components for Database purposes.

Git is a version control system for application software components.

Prometheus is an application (software) to be run (it can be in containers) built specifically for the purpose of monitoring the health (metrics) of microservices software components.

Grafana is an application (software) to be run (it can be in containers) for the purpose of visualizing the metrics/health of microservices.

The ELK stack, which is ElasticSearch, Logstash and Kibana, is software which provides for the logging of events and their search and visualization.

https://en.wikipedia.org/wiki/Component-based_software_engineering

https://en.wikipedia.org/wiki/Event-driven_architecture

https://en.wikipedia.org/wiki/Service-oriented_architecture

http://www.d-net.research-infrastructures.eu/node/34

https://martinfowler.com/articles/microservices.html

https://en.wikipedia.org/wiki/Process_(computing)

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45406.pdf

 

In my setup, two steps needed to be followed to set up DNS forwarding on the Vyatta router. The first is to configure an interface as a listen-on interface; I configured this to be eth0, which is bridged to my WiFi router. The second step is to configure a name server; I set this to the default gateway of the network, i.e. the address of the WiFi router.
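In configuration mode this amounts to something like the following. The syntax is from the classic Vyatta CLI (newer VyOS releases use listen-address instead of listen-on), and the name-server address here is my WiFi router's, so it will differ per setup:

```
set service dns forwarding listen-on eth0
set service dns forwarding name-server 192.168.1.1
commit
```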

Once done, I was able to ping Google from my Vyatta VM.
