
Information is present in computing platforms in two forms.

– Bits that are stored
– Bits that are traveling and transitioning

Securing both the bits that are stored and the bits that are traveling and transitioning is the task at hand.

These two forms present their own challenges, but the bits that are traveling and transitioning, i.e. changing forms within and between computing platforms, have attracted special attention. This is due to the pervasive communications running over information technology platforms in society and business. When bits transition and travel they are also stored in and retrieved from storage along the way, so securing both forms is important.

The only mystery surrounding the field of security is the sheer number of interaction surfaces between hardware layers and software layers through which bits transition and travel. Consider reading text on a screen with one's eyes, thinking it over, and then editing it with one's hands: entire systems within the human body are at work without us contemplating them. There are interaction surfaces within the body as well, with the muscular, neural and skeletal systems, to name a few, working together.

Within computing platforms, as bits transition back and forth within one isolated component (a CPU, RAM, hard disk, operating system or application software), they present one kind of security challenge. When, instead of staying isolated, the bits travel between two such computing systems, they present a different set of challenges. And when there is industrial-scale, constant, consistent, ongoing back-and-forth travel and transition within milliseconds, over large geographies, between hundreds and thousands of components of various types, the challenges are different again.

Interaction surfaces are where bits change hands between subsystems: for example, bits passing between the operating system and an application running on it, or between one PC and another over a network. An interaction surface exists wherever one subsystem's surface meets another subsystem's surface within the larger system and bits flow across it. As the field of information technology and computing has evolved, the number and types of subsystems, their surfaces and their interactions have increased enormously, so much so that securing them has become complicated. Security of the whole is therefore achieved only when every change of hands, every transition and journey of the bits, is secure: secure in the sense that the storage at each end of the exchange is secure and the medium of exchange is secure.

It is simple to state in plain English that when one subsystem interacts with another and bits change hands, the storage points at each end and the medium used for the interaction should be secure. In reality, given the timescales and geographical scale involved, the sheer number and variety of subsystems, storage locations and exchange mediums is so large that covering all of them becomes difficult.

Another deep incision is cut into the security domain when the human-computer interaction surface appears, at various locations and in various forms. This increases the complexity of the whole security domain: the bit-to-human interaction surface also needs to be kept secure at every interaction, at every geographical location, every time.

Furthermore, there is the case where a secure system under the ownership of one entity interacts with a system owned by another entity. Bits are then changing hands between different owners. The time and location of such an interaction surface between two separate ownerships adds further complexity: when your bits are stored under the ownership of another entity, and accessed and retrieved by other people, a whole system of management is required for such inter-ownership bit storage and bit travel interaction surfaces.

I suspect a chart showing the whole variety of interaction surfaces within computing would demystify security. Each entry in the chart, i.e. each interaction surface, could simply be mapped to the precautions and actions required to secure it; each type of interaction surface would have a corresponding security precaution and actionable item within the security framework. A rough sketch of such a mapping follows the list below.

Be it an interaction surface where bits are:
– stored in hardware
– being processed by one set of software
– within one computer
– on a server
– in an application
– traveling over a network
– interacting with humans
– being exchanged between different humans
– being exchanged between different entities
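
As a rough sketch of the chart idea above, written in Go: the surface names and the controls chosen here are illustrative assumptions only, not a complete framework.

    package main

    import "fmt"

    func main() {
    	// Each interaction surface maps to an example precaution; entries are
    	// illustrative and far from exhaustive.
    	controls := map[string]string{
    		"bits stored in hardware":                   "encryption at rest, physical access control",
    		"bits processed by one set of software":     "process isolation, patching",
    		"bits traveling over a network":             "TLS/IPsec encryption in transit",
    		"bits interacting with humans":              "authentication, screen locks, awareness",
    		"bits exchanged between different entities": "contracts, audits, key management",
    	}
    	for surface, precaution := range controls {
    		fmt.Printf("%-45s -> %s\n", surface, precaution)
    	}
    }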

Enterprises have long required Layer 2 VPN and Layer 3 VPN services from Service Providers. Similarly, Data Center networks need to provide a Layer 2/3 overlay facility to the applications hosted on them.

EVPN is a new control plane protocol that achieves the above. It coordinates the distribution of the IP and MAC addresses of endpoints over another network, which means it has its own protocol messages to provide an endpoint network address distribution mechanism. In the data plane, traffic is switched via MPLS label next-hop lookups or IP next-hop lookups.

To provide this new control plane, with new protocol messages carrying new features, BGP is used. BGP Update messages act as the carrier for EVPN messages: BGP connectivity is established first, messages are then exchanged, and the EVPN-specific information is carried inside those BGP messages.
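
As a rough sketch (not vendor code, and with made-up example values), the information carried in one of those EVPN messages, the Type-2 MAC/IP Advertisement route of RFC 7432, can be pictured as a simple Go struct:

    package main

    import "fmt"

    // MACIPAdvertisement mirrors the main fields of an EVPN Type-2 route,
    // which is how a PE tells its BGP peers about a locally learned endpoint.
    type MACIPAdvertisement struct {
    	RouteDistinguisher string // identifies the EVPN instance on the advertising PE
    	ESI                string // Ethernet Segment Identifier, used for multihomed CEs
    	EthernetTagID      uint32
    	MACAddress         string
    	IPAddress          string // optional IP bound to the MAC
    	MPLSLabel          uint32 // MPLS label (or VNI) used by the data plane
    }

    func main() {
    	// Example values are assumptions for illustration only.
    	r := MACIPAdvertisement{
    		RouteDistinguisher: "65000:100",
    		ESI:                "00:11:22:33:44:55:66:77:88:99",
    		EthernetTagID:      0,
    		MACAddress:         "de:ad:be:ef:00:01",
    		IPAddress:          "10.1.1.10",
    		MPLSLabel:          10100,
    	}
    	fmt.Printf("EVPN Type-2 route carried in a BGP Update: %+v\n", r)
    }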

The physical topology can be a leaf-spine DC Clos fabric or a simple Distribution/Core setup. The links between the nodes are Ethernet links.

One aspect of EVPN is that the terms Underlay and Overlay are now used. The underlay comprises the underlying protocols on top of which EVPN runs: the IGP (OSPF, ISIS or BGP) and MPLS (LDP/SR). The underlay also includes the physical Clos or Core/Distribution topology, which has high redundancy built into it using fabric links and LACP/LAGs. The overlay is the BGP EVPN virtual topology itself, which uses the underlay network to build a virtual network on top. It is the part of the network that provides tenant or VPN endpoint reachability, i.e. MAC address or VPN IP distribution.

EVPN is a new protocol, and if you look at previous protocols there is little mechanism to provide all-active multihoming. This refers to one CE being connected via two links to two PEs, with both links active and providing paths to the far end via ECMP and multipathing. Multichassis LAG across two chassis has been one option, but it is proprietary per vendor and imposes particular virtual-chassis link requirements and limits. Per-flow load balancing from an ingress PE to multiple egress PEs using BGP multipathing is also newly enabled by EVPN.

There is also little mechanism in previous-generation protocols to provide efficient fabric bandwidth utilization for tenant/private networks over meshed links. Previous protocols provided single-active, single paths and required a full mesh of LDP sessions and tunnels over a fabric. MAC learning in BGP over the underlay provides this in EVPN.

Similarly, there was no mechanism to provide workload (VM) placement flexibility and mobility across a fabric. EVPN provides this via the Distributed Anycast Gateway.

 

I attended the Amazon Network Development Engineer tech talk held in Sydney yesterday. While fishing for future Network Development Engineers, Amazon gave a short presentation on their network from a DC and DCI/WAN perspective.

It was a good talk and the interaction with the Network Development Engineers afterwards was insightful. A lot of their work revolves around automation and scripting. This is also obvious from the job title and the job descriptions in the role advertisements.

This post focuses on the trend of microservices and the various related terminologies and trends. At the end it lists the relevant brands in their categories.

An application is software. It is composed of different components, the application components, which together make up the application. The difference between one application software component and another is one of separation of concerns, which simply means dividing a computer program (the application) into distinct sections. If the different components are somewhat independent of each other they are termed loosely coupled.

The different components of an application communicate with each other. When they need to interact they do so via interfaces; a client component does not need to know the inner workings of another application software component and uses only its interface.

This is where the word service comes into play: what one application software component provides to another software component is called a service.
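
A minimal sketch of this idea in Go, with made-up component names: the consuming component depends only on the interface (the service), not on the providing component's internals.

    package main

    import "fmt"

    // GreetingService is the interface: the "service" one component offers to others.
    type GreetingService interface {
    	Greet(name string) string
    }

    // englishGreeter is the providing component; its inner workings are hidden.
    type englishGreeter struct{}

    func (englishGreeter) Greet(name string) string {
    	return "Hello, " + name
    }

    // client is the consuming component; it only knows the interface.
    func client(svc GreetingService) {
    	fmt.Println(svc.Greet("world"))
    }

    func main() {
    	client(englishGreeter{})
    }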

Now this application may be placed on a distributed system, where its different components are located on networked computers. In terms of an application running on a distributed system, SOA or Service Oriented Architecture is where services are provided to other software components over a communications protocol across a network. This follows from the underlying hardware being networked and distributed in nature, and the application software on it being distributed as well.

In the terminology of distributed systems, when one component communicates with another, it does so via messages. We can say that in a distributed system an application's software component sends a message to another software component to utilise its service via an interface, and that interface in turn uses a network protocol.

We now know about an application, which is a software program, its components, and the fact that services are provided by those components. We know about distributed systems, their components networked together, and messages being passed between them over a network. We know about applications running on distributed systems, where application software components run on components of the distributed system. And we know the application software components communicate with each other via a network.

In microservices, a distributed system component runs an application software component and provides a service; it is now a process in execution. So one software component is placed and running on one distributed system component and provides a service from there to other, similarly independent components.

A process, in the usual sense, is a software program in execution. Inter Process Communications (IPCs) are how processes talk to each other. In microservices, the IPCs are network messages.

What we discussed above is the application software architecture and its transition into the distributed systems environment. When each independent software component is running as a process on a distributed system component, and the inter-process communications happen over a network, you have microservices. Together, these microservices form an application.

Furthermore, in microservices there is a bare minimum of centralized management of the different services, and they may be written in different programming languages and use different data storage technologies. So we can have one software component written in Go and another in NodeJS, and they will provide services to each other, again over a network. A Go software component can be running on one distributed system component and a NodeJS component on another, and they will interact via the network that composes the distributed system. Multiple such distributed software components providing services to each other make up a microservices application.
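
A minimal sketch in Go of what that looks like in practice, assuming a hypothetical greeting service: one component exposes its service over HTTP and another component consumes it purely as network messages. Both run in one program here only to keep the sketch self-contained; in a real deployment they would be separate processes on separate distributed system components.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Component A: a service exposing a single endpoint over the network.
    	http.HandleFunc("/greet", func(w http.ResponseWriter, r *http.Request) {
    		fmt.Fprintln(w, "hello from the greeting service")
    	})
    	go http.ListenAndServe(":8080", nil)
    	time.Sleep(200 * time.Millisecond) // crude wait for the listener; fine for a sketch

    	// Component B: a client that knows only the interface (URL and verb),
    	// not the other component's internals or implementation language.
    	resp, err := http.Get("http://localhost:8080/greet")
    	if err != nil {
    		fmt.Println("request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Print(string(body))
    }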

A container provides an environment in which to run a microservice component. A container can loosely be thought of as a service that the distributed system's hardware and software components provide for running such a component.

In terms of branding:

Amazon AWS is a Distributed Systems Provider.

EC2 is Amazon AWS’s product to provide a distributed system compute component online.

S3 is Simple Storage Service, a product for simple storage of files by Amazon AWS online.

DynamoDB is Amazon AWS's NoSQL database, which is available as a product online.

Golang and NodeJS are programming languages in which backend server side software components are written.

React is a JavaScript library with which frontend user-side application software components are written.

Docker is software which provides for individual container management. One container provides the environment in which a software component can be executed on a distributed system.

Kubernetes and Docker Swarm manage multiple (lots of) containers deployed on distributed systems for running a distributed application. They are for container management.

RabbitMQ and Kafka work as message brokers for passing messages between microservices.

RESTful HTTP APIs are also a means of inter-microservice communication.

Protocol Buffers and gRPC are means of faster inter-microservice communication and messaging.

MongoDB and Couchbase are NoSQL databases which can be run in containers and be utilised by application software components for Database purposes.

Git is a version control system for application software components.

Prometheus is an application (software) to be run (it can be in containers), built specifically for monitoring the health (metrics) of microservice software components.
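
As a minimal sketch, assuming the commonly used official Go client library (github.com/prometheus/client_golang), a microservice can expose a /metrics endpoint that Prometheus scrapes; the endpoint path, port and metric name here are illustrative choices.

    package main

    import (
    	"net/http"

    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/client_golang/prometheus/promauto"
    	"github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // requestsServed counts requests handled by this demo microservice.
    var requestsServed = promauto.NewCounter(prometheus.CounterOpts{
    	Name: "demo_requests_total",
    	Help: "Total requests served by the demo microservice.",
    })

    func main() {
    	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
    		requestsServed.Inc()
    		w.Write([]byte("done\n"))
    	})
    	// Prometheus scrapes this endpoint to collect the metrics above.
    	http.Handle("/metrics", promhttp.Handler())
    	http.ListenAndServe(":2112", nil)
    }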

Grafana is an application (software) to be run (it can be in containers) for the purpose of visualizing the metrics/health of microservices.

The ELK stack, which is ElasticSearch, Logstash and Kibana, is a set of software providing logging of events along with their search and visualization.

https://en.wikipedia.org/wiki/Component-based_software_engineering

https://en.wikipedia.org/wiki/Event-driven_architecture

https://en.wikipedia.org/wiki/Service-oriented_architecture

http://www.d-net.research-infrastructures.eu/node/34

https://martinfowler.com/articles/microservices.html

https://en.wikipedia.org/wiki/Process_(computing)

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45406.pdf

 

In my setup, two steps were needed to set up DNS forwarding on the Vyatta router. The first is to configure an interface as a listen-on interface; I configured this to be eth0, which is bridged to my Wifi router. The second step is to configure a name server; I set this to the default gateway of the network, i.e. the address of the Wifi router.

Once done, I was able to ping google.com from my Vyatta VM.
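
For reference, a rough sketch of the configuration described above. Exact command names can differ between Vyatta/VyOS versions, and the gateway address 192.168.1.1 is an assumption standing in for my Wifi router's address.

    configure
    set service dns forwarding listen-on eth0
    set service dns forwarding name-server 192.168.1.1
    commit
    save
    exit

    # quick check from the Vyatta VM
    ping google.com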
