I listened to a recent Day Two Cloud episode of the Packet Pushers podcast. The team discussed AWS Outposts and EC2 at the edge. It was interesting to learn about AWS products where AWS ships pre-built racks to locations nearer to the customer. The racks contain equipment used to run some of the regular AWS services and are fully managed by AWS, and these local racks appear as usable capacity in the AWS console.

Latency is a significant concern for some applications. Customers requiring low-latency services can deploy AWS Outposts and EC2 at the edge to significantly reduce it.

As usual, interesting times ahead in technology.

Link to the podcast:

Day Two Cloud 180: Understanding AWS EC2 At The Edge

Today I chatted with ChatGPT and asked it various questions regarding Enterprise Architecture, Network Engineering, Platform Engineering and Site Reliability Engineering.

I thoroughly enjoyed the discussion (chat). The answers looked good. Here are the questions and ChatGPT’s responses:

————————————————————————–

Hi, how are you

Hello! I’m an AI and do not have the ability to experience emotions, but thank you for asking. How can I help you today?

Habib Lutfullah

Can you tell me something useful about Enterprise Architecture ?

Enterprise Architecture (EA) is a method of describing an organization’s overall structure and design, including its business processes, information systems, and technology infrastructure. The goal of EA is to ensure that an organization’s various elements are aligned and working together effectively in order to achieve its goals and objectives. This is often done by creating a comprehensive set of models and diagrams that describe the organization’s current state and desired future state, as well as a plan for transitioning from the current state to the desired state. Some of the key benefits of Enterprise Architecture include:

  • Improved alignment of IT with business goals and objectives
  • Improved communication and collaboration between different parts of the organization
  • Improved ability to plan for and adapt to change
  • Reduced costs and improved efficiency
  • Improved ability to govern and manage enterprise-wide IT
  • Improved security and compliance

EA practice draws on several frameworks and methodologies; TOGAF, The Open Group Architecture Framework, is one of the most widely adopted.

Habib Lutfullah

So, what is business architecture ?

Business architecture is a subset of enterprise architecture that focuses specifically on the business side of an organization. It describes the organization’s overall business structure and design, including its business processes, value streams, products, services, and customers. The goal of business architecture is to align an organization’s business strategy, goals, and objectives with its overall structure and design, so that the organization can operate as efficiently and effectively as possible.

Some of the key elements of business architecture include:

  • Business capabilities: the high-level functions that an organization must perform in order to achieve its goals and objectives
  • Business entities: the key concepts and components that make up an organization’s business, such as products, services, customers, and suppliers
  • Business processes: the steps and activities that an organization performs in order to deliver value to its customers and achieve its goals and objectives
  • Business rules and policies: the constraints and guidelines that govern an organization’s operations and decision-making

Business architecture is often used as a foundation for designing and implementing new business processes and systems, as well as for identifying and addressing any gaps or inconsistencies in an organization’s existing business architecture.

Habib Lutfullah

So what’s the difference between business architecture and enterprise architecture ?

Enterprise architecture (EA) and business architecture (BA) are related but distinct disciplines. Both aim to align an organization’s structure and design with its goals and objectives, but they do so in different ways and at different levels of abstraction.

Enterprise Architecture (EA) is a holistic view of the organization, it covers all aspects of the organization, from the business side to the technology side. Enterprise architects describe the organization’s overall structure and design, including its business processes, information systems, and technology infrastructure, in order to ensure that the organization is operating as efficiently and effectively as possible. They usually work at the enterprise level, looking across all business units and functions, and often use frameworks like TOGAF.

Business Architecture (BA), focuses specifically on the business side of an organization. Business architects describe the organization’s overall business structure and design, including its business processes, value streams, products, services, and customers. They align an organization’s business strategy, goals, and objectives with its overall structure and design, so that the organization can operate as efficiently and effectively as possible. BA is a subset of EA, working at the level of a business unit, function or product.

Both EA and BA can be seen as complementary disciplines, where EA provides the overall context, governance and roadmap and BA the specific one. In most cases they work together in order to guarantee an organization’s goals and objectives are reached and aligned with the overall strategy.

Habib Lutfullah

Can you tell me some details about TOGAF ?

TOGAF (The Open Group Architecture Framework) is a widely-used framework for enterprise architecture. It provides a comprehensive approach for designing, planning, implementing, and governing enterprise information systems. TOGAF is developed and maintained by The Open Group, an international vendor- and technology-neutral consortium.

The TOGAF framework is composed of four main components:

  • The Architecture Development Method (ADM): a step-by-step approach for developing and implementing an enterprise architecture. The ADM is iterative and can be tailored to the specific needs of an organization.
  • The Architecture Content Framework: a set of templates and models that can be used to describe an organization’s enterprise architecture. These include models for the business, data, application, and technology architecture.
  • The TOGAF Reference Models: a set of reference models that provide a common language and framework for describing the various aspects of an enterprise architecture. These include the Technical Reference Model (TRM) and the Integrated Information Infrastructure Reference Model (III-RM).
  • The Architecture Capability Framework: describes the processes, roles, skills, and deliverables required to establish and operate an architecture capability within an organization.

TOGAF’s ADM is organized in 9 phases:

  1. Architecture Vision: Define the goals, objectives, and stakeholders for the enterprise architecture project, and develop an architecture vision that describes the desired end state.
  2. Business Architecture: Identify the business capabilities, business entities, and business processes needed to support the organization’s goals and objectives.
  3. Data Architecture: Describe the data entities and data architecture needed to support the organization’s business processes.
  4. Application Architecture: Describe the applications and application architecture needed to support the organization’s business processes and data architecture.
  5. Technology Architecture: Describe the technology infrastructure and technology architecture needed to support the organization’s application architecture.
  6. Opportunities and Solutions: Identify and prioritize opportunities for improvement, and develop a plan for implementing solutions.
  7. Migration Planning: Develop a plan for transitioning from the current state to the desired state.
  8. Implementation Governance: Establish a governance framework and implementation plan to ensure that the architecture is implemented as intended.
  9. Architecture Change Management: Establish and manage the processes, roles, and standards needed to ensure that changes to the architecture are made in a controlled and consistent manner.

TOGAF is flexible and can be tailored to the specific needs of an organization, and can be used to support a wide range of architecture efforts, including business transformation, IT transformation, and mergers and acquisitions.

Habib Lutfullah

What does a network engineer do ?

A network engineer is a professional who designs, implements, and maintains an organization’s computer network infrastructure. This infrastructure typically includes local area networks (LANs), wide area networks (WANs), and internet connections, as well as the hardware and software needed to support these networks.

Some of the key responsibilities of a network engineer include:

  • Designing, implementing, and maintaining network infrastructure such as routers, switches, and firewalls
  • Configuring and optimizing network devices and protocols
  • Monitoring network performance and troubleshooting any issues that arise
  • Managing network security by implementing firewalls, VPNs, and other security measures
  • Managing and configuring network servers such as DNS and DHCP
  • Managing and configuring network services such as email and web
  • Managing and configuring wireless networks
  • Keeping up-to-date with new technologies and industry developments in order to make recommendations for network upgrades and improvements
  • Collaborating with other teams and stakeholders to ensure that the network meets the needs of the organization.

Network engineers typically have a strong understanding of networking concepts and protocols, including TCP/IP, routing, switching, and subnetting, and they are familiar with various operating systems and network management tools. They usually hold a degree in computer science, computer engineering or other related field, and may hold certifications such as Cisco Certified Network Associate (CCNA) or Cisco Certified Network Professional (CCNP).

Habib Lutfullah

What does a network architect do ?

A network architect is a professional who is responsible for designing and overseeing an organization’s computer network infrastructure. Network architects typically have a broad and deep understanding of networking concepts and technologies and are responsible for creating the overall network strategy and architecture that aligns with the organization’s goals and objectives. They work at a higher level than network engineers and their focus is more on the overall design and direction of the network.

Some of the key responsibilities of a network architect include:

  • Developing the overall network strategy and architecture that aligns with the organization’s goals and objectives.
  • Identifying and assessing new technologies and industry developments to ensure that the organization’s network stays current and competitive.
  • Designing, implementing and maintaining the network infrastructure such as routers, switches, and firewalls.
  • Planning and designing large-scale network solutions that can support the organization’s growing needs and future growth.
  • Collaborating with other teams and stakeholders to ensure that the network infrastructure aligns with the organization’s overall IT strategy and architecture
  • Overseeing the work of network engineers and providing guidance and direction as needed.
  • Managing the budget and resources allocated to the network infrastructure and ensuring that they are used effectively and efficiently.
  • Providing technical leadership to other members of the IT department and the organization as a whole.

Network Architects typically have an advanced degree in computer science, computer engineering or another related field, and may hold certifications such as Cisco Certified Design Professional (CCDP) or Cisco Certified Internetwork Expert (CCIE). They usually have broad and deep knowledge of networking concepts, technologies and industry trends, and experience working with different vendors' solutions.

Habib Lutfullah

What does a platform engineer do ?

A platform engineer is a software engineer who specializes in building and maintaining the underlying infrastructure and platform that enables software applications and services to run effectively. Platform engineers work to create and manage the infrastructure, tools, and services that are needed to deploy, monitor, and scale software applications.

Some of the key responsibilities of a platform engineer include:

  • Designing, building and maintaining the underlying infrastructure that supports the software applications and services.
  • Automating deployment, scaling and monitoring of the platform and applications
  • Building and maintaining tools for developers to easily deploy and manage their applications
  • Ensuring high availability and scalability of the platform
  • Managing and troubleshooting platform-level issues
  • Keeping up-to-date with new technologies and industry developments to make recommendations for platform upgrades and improvements
  • Collaborating with other teams, such as development, security, and operations teams to ensure the platform meets the needs of the organization.

Platform engineers typically have a good understanding of Linux, virtualization, network, storage and cloud technologies. They often hold a degree in computer science, computer engineering or other related field, and may hold certifications such as AWS Certified Solutions Architect or Google Cloud Platform Professional. They also have a good understanding of software development practices and tools, and programming languages such as Python, Go or Ruby.

Habib Lutfullah

What are some of the tech stacks that a Platform Engineer uses daily?

The technology stack that a platform engineer uses can vary depending on the specific organization and the type of platform being built. However, here are some common tools and technologies that platform engineers may use on a daily basis:

  • Operating Systems: Platform engineers often work with Linux-based operating systems, such as Ubuntu, Debian, or Red Hat, to build and maintain the underlying infrastructure.
  • Virtualization and Containerization: Platform engineers may use virtualization technologies, such as VMware or VirtualBox, to create virtual machines that can run multiple operating systems and applications on a single physical machine. They also may use containerization technologies, such as Docker or Kubernetes, to package, distribute and manage application components.
  • Cloud Platforms: Many platform engineers work with cloud-based infrastructure and services, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), to build and manage the platform.
  • Configuration Management: Platform engineers may use tools like Ansible, Chef, or Puppet to automate the deployment and management of infrastructure.
  • Monitoring and Logging: Platform engineers use tools like Prometheus, Grafana, and ELK stack for monitoring, logging and debugging the platform.
  • Automation: Platform engineers use tools like Jenkins, Travis or CircleCI to automate the deployment and testing of applications.
  • Programming languages: Platform engineers often use programming languages such as Python, Go, or Ruby to write scripts, automation, and tools.
  • Networking: Platform engineers work with network protocols and technologies such as TCP/IP, DNS, DHCP, Load balancers and firewall to design, implement and troubleshoot the network.

Note that this is not an exhaustive list, as the technology stack used by platform engineers can vary depending on the organization, the specific platform being built, and the needs of the organization.

Habib Lutfullah

What is the difference between a platform engineer and an SRE ?

A platform engineer and a site reliability engineer (SRE) are similar roles that both focus on building and maintaining the underlying infrastructure and platform that enables software applications and services to run effectively, but they have different emphasis and responsibilities.

A platform engineer is primarily responsible for designing, building, and maintaining the underlying infrastructure that supports the software applications and services. They focus on creating and managing the infrastructure, tools, and services that are needed to deploy, monitor, and scale software applications. They also work closely with development teams to ensure that the platform meets the needs of the organization.

On the other hand, an SRE (Site Reliability Engineer) is a role that emerged from the intersection of software development and operations, it’s a discipline that emphasizes the robustness, availability and scalability of software systems. SREs strive to improve the reliability, scalability, security, and performance of the software systems they operate, and they are responsible for the ongoing operation and improvement of systems, as well as the overall reliability and availability of services. They also have a strong focus on automation, monitoring and incident management.

In summary, a platform engineer is more focused on the underlying infrastructure, tools, and services needed to deploy, monitor, and scale software applications. While an SRE focuses on the reliability, scalability, and performance of the software systems they operate, and they are responsible for the ongoing operation and improvement of systems, as well as the overall reliability and availability of services.


End of Chat.

It’s very good. Interesting times ahead. I think it’s a better and more convenient way of gaining insight into a topic than doing Google searches. One can use ChatGPT to gain knowledge first and then dig deeper via Google search results.

Does the tech industry need cadet programs similar to those in the army and the bureaucracy? Is this a possible solution to the Great Resignation and the tech industry's skilled-worker shortage?
In many countries, cadets enter the army at a young age with a view to a long-term engagement. They are sponsored initially, trained for a while, and then deliver later on. The important thing to note is that, after all this, they are committed in writing to serve for multiple years.
Would it be fair to say that, with the skilled-worker shortage in the tech industry, there is a need for cadet programs in tech? Cadet programs where intake is made with a long-term commitment: fresh people are taken in, skilled up, and then serve for a lengthy period. This addresses both the skills shortage and the employee churn of the current Great Resignation environment.
The relationship can be mutually beneficial for both sides, just as it is in the army. A non-tech worker might be looking to enter the tech industry while continuing to earn, and the tech industry is looking for people who can deliver. Picture a six-month or year-long on-the-job training program with a view to start delivering on some tasks quickly and then develop further later. Importantly, the technology cadet program would have a proper written commitment to remain in the organisation for at least a couple of years. I think people looking to enter the tech industry will not mind such a commitment because of the attraction of remote work in the tech industry. It's mutually beneficial.
This would turn every local McDonald's worker and every call-centre agent into a potential technology cadet available for intake. Anyone and everyone willing to enter the tech industry becomes an easy candidate.
I think it's an option that should be available in the market.
What do you think?

Analyst firms like Gartner have indicated that soon there will be digital assistants, digital employees, digital co-workers and/or advanced virtual assistants in the workplace. All these terms signify a similar trend: a bot-like technological tool acting as a full co-worker. I will use the term Advanced Virtual Assistant.


In this write-up I intend to connect two phenomena and give an outline of what an advanced virtual assistant for IP networks might look like. The first phenomenon is Hey Google, Apple Siri and Amazon Alexa: personal virtual assistants which listen and respond to inputs from people. The second phenomenon is the open space available for a virtual assistant helping to run IP networks and technology systems.


The basic examples of a virtual assistant are Siri, Hey Google and Alexa. These are devices which listen and respond like a human. They require computation and processing to listen and work out what is being said, and thereafter computation and processing to formulate a response and send it through. The I/Os are a mic to listen, a speaker to speak, and communications over the Internet.


The way I see it is that every large enterprise and large network which is run by multiple network engineers could have a trained virtual assistant. Some of the basic repeated tasks of a Network Engineer are:

  • Run a ping or a traceroute on an IP
  • Log in to a switch, router or firewall and run some commands
  • Look up an ISP circuit's details from the ticketing system
  • Look up routes in a router for a subnet
  • Look up empty IP address space in the IPAM (IP Address Management) system
  • etc. etc. etc.

These are repeated actions performed again and again by network engineers throughout the world.

The question is: How could an Advanced Virtual Assistant help in these repeated actions?


From here on, imagination starts. Imagine being able to speak to a bot on your work laptop. Let's call the virtual assistant Netty. Imagine being able to say: Hey Netty, run a traceroute for 12.2.2.2, or: Hey Netty, please log in to the router isbpk56. Imagine thereafter getting Netty to do this on the router: Hey Netty, run the command 'show ip route 2.2.2.2', or something like: Hey Netty, run 'show inventory'.


So basically, you have a Hey Google, Siri or Alexa type assistant for your network. It has been fed the network's IP addresses, limited-rights login credentials, router names and other details as required. Now you are talking to it and it is doing some work. This is my understanding of 'digital co-worker' and 'advanced virtual assistant': an assisting device which you can talk to and which is able to execute the actions.

I feel that instead of large virtual assistants covering big domains there will probably be industry-specific and audience-specific virtual assistants. They will probably even be company-specific, trained and tailored per organisation.


Let’s keep imagining. Imagine having a search option as well. For example: Hey Netty, search the network for any firewall rules on 10.62.45.3. This would be very handy when looking for firewall rules. Or: Hey Netty, search the network for the MAC address abcd.efds…


Similarly, imagine getting a complaint for a circuit and saying: Hey Netty, please bring up circuit TX459's details from the ticketing system. Or: Hey Netty, search the network for interface descriptions containing the circuit TX459.


I think this has a great potential to improve efficiency and productivity. If Network Engineers get a Hey Netty for their networks, they would probably be able to handle a lot of work very easily.


As expected, this will require work to build a conversational bot that listens and speaks well. There will also be search and response algorithms, database queries and responses, and so on. That is the work needed to make Hey Netty possible. But it is an engineering exercise which would give a large productivity increase, especially for large networks run by large teams.
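
To make this a little more concrete, here is a minimal sketch in Python of the execution side only, assuming the speech-to-text front end has already turned the spoken request into plain text. The router name isbpk56 comes from the examples above; the address, credentials, intent keywords and the choice of the netmiko library are illustrative assumptions, not a real design.

import re
import subprocess

from netmiko import ConnectHandler  # one possible library for CLI logins; an assumption here

# Hypothetical inventory: the router name from the examples above, with a
# placeholder address and a limited-rights, read-only account.
DEVICES = {
    "isbpk56": {
        "device_type": "cisco_ios",
        "host": "192.0.2.56",
        "username": "netty-readonly",
        "password": "example-only",
    },
}

def handle(command_text):
    """Map an already-transcribed request to a read-only network action."""
    text = command_text.lower()
    ip = re.search(r"\d{1,3}(?:\.\d{1,3}){3}", text)
    if "traceroute" in text and ip:
        return subprocess.run(["traceroute", ip.group()],
                              capture_output=True, text=True).stdout
    if "ping" in text and ip:
        return subprocess.run(["ping", "-c", "4", ip.group()],
                              capture_output=True, text=True).stdout
    quoted = re.search(r"'([^']+)'", command_text)   # e.g. 'show ip route 2.2.2.2'
    for name, params in DEVICES.items():
        if name in text and quoted:
            session = ConnectHandler(**params)        # log in with limited rights
            output = session.send_command(quoted.group(1))
            session.disconnect()
            return output
    return "Sorry, I did not understand that request."

print(handle("Hey Netty, run a traceroute for 12.2.2.2"))

The hard part, as noted above, is the listening and the speaking; the command execution itself is already well within reach of today's network automation libraries.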


Similarly, the same is the case for the storage and server technology domains, or even telecom networks. Storage engineers have their own regular instruction sets, servers and systems theirs, and telecom networks theirs. They could all get bot-like virtual assistants trained to carry out actions.


Imagine saying this and getting the work done:
Hey Techy, could you show the details of the VM sydau78
Hey Telly, could you bring up the RF stats for the site melau86
Hey Netty, could you ping the IP 3.3.3.3


I feel the industry leaders should step in to bring this into open standardisation efforts. So, for example, the storage, networking, systems and telecom industry leaders should each form teams to define standard Hey Netty-like query-response lists. Thereafter, bots could be trained for Hey Netty v1 and Hey Netty v2 as the query-response details come in from the standardisation body.
It is time for Hey Techy and Hey Netty to hit the technology space. It is time for the post-pandemic knowledge workers to also get advanced virtual assistants and digital co-workers. I feel this is a good time for standardisation efforts for these as well.


Thank you,
Syed Habib Kakakhel

A very important question is what will happen in the world when bandwidths become very high. Initially there were GSM calls and DSL; then came 3G and fibre; and now there are higher-bandwidth fibre and 4G. 3G/4G and Wi-Fi-based fibre broadband are available in most places now.

To track the effects of bandwidth we can take a back-to-basics approach: humans have eyes and ears, so it boils down to sight and sound. Therefore we can say that very high bandwidths, like 100 Mbps in telecom and 1 Gbps on Wi-Fi broadband, will result in better things we can see, hear and talk to.

I feel 3D VR, AR and 4K/8K video cover sight. In terms of speech and sound, I feel voice-operated devices and devices that speak interactively will become possible, like Alexa, Siri and Hey Google.

Humans are limited in terms of seeing and hearing. Perhaps touch and neurological signals can be included, but for now the eyes and ears are the primary interfaces that humans use to interact with technology.

Let's now take it further. With greater bandwidth and greater processing come greater life-like responses from computers. So there could be voice assistants assisting humans with all sorts of productive capabilities. Imagine you have a secretary and you can ask them to do things. Now imagine a computer, supported by enough processing and bandwidth, that is equivalent to a secretary and even more efficient. If you want to schedule a taxi, ask the device secretary; if you want to book a flight, ask the device secretary; if you want to buy groceries or order something, ask the device secretary. These are being called virtual assistants (Gartner predicts that, by 2025, 50% of knowledge workers will use a virtual assistant on a daily basis, up from 2% in 2019). So everyone will have a mini secretary in their smartphone, and people who want to pay more will perhaps have more sophisticated virtual assistants.

That’s ears, voice and speech.

Let's discuss eyes and sight. With greater bandwidth and greater processing comes the ability to show life-like 3D views to people (if they wear a headset and enter that domain). So perhaps a 3D camera will take in a 3D view of a meeting room and, given sufficient bandwidth, transfer that 3D view to another place. Someone wearing a headset can see a 3D view of any place and will really feel like they are there. Voice can be part of the transfer, so people could wear a headset and be seeing and hearing a 3D view of a faraway place. Instead of a 2D video call it will be a 3D view call. This makes way for new avenues in human interaction. Imagine an MIT classroom having a 3D camera and anyone in the world attending the class in 3D view via a headset. It will feel very much like real class attendance and will definitely help learning. I think it will make for a much better learning experience than the 2D video one because of its feel and presence effect. It's easy to get distracted by something other than what you are seeing on your 2D screen, but a 3D view would be immersive. This is being called immersive learning and experiential training because it is not just sight but a full experience.

Enter The Matrix. You wear a headset, someone else wears a headset, and you could be in a 3D virtual world together. Let's not make them fight here on the blog, but two people being together in a 3D view is a very real possibility given sufficient bandwidth and processing.

So now let's break this down. Consider a 3D live transmission and people attending it in 3D view, as in an MIT class which people across the world attend live in 3D view. Now separately consider a non-live transmission: a virtual 3D world which two or more people are present in together and which is a recorded experience, for example a recorded safari trip which is not live but which people jointly attend. Now separately consider a computer-generated 3D world in which a virtual assistant, like a device secretary, is present as well. Someone wearing a headset is taken to a computer-generated 3D virtual world where they are with a virtual assistant, talking to it and learning from it. Now further consider two or more people being in a computer-generated 3D world together, with a virtual assistant present in that world as well which they can speak to. Interesting possibilities.

These are real possibilities given sufficient bandwidth and processing. I feel the bandwidth increases from 5G and Wi-Fi 6 can enable these, while the increased processing from edge computing and cloud computing can bring them to life. These seem like real possibilities and things are progressing in this direction.

I came across a job ad for a Network Consulting Engineer. It's worth a blog post because it shows what's happening in the market. The ad states:


Alpha (renamed) is looking for an experienced Network Consulting Engineer to join during this time of growth.

Key responsibilities are set out below:

  • Consult with clients to design and implement network security technologies for example next-gen firewall, remote access, network access control, SaaS and public cloud networking and security services.
  • Work with a keen eye for detail within a network and security context across public, hybrid and private cloud environments
  • Proven network engineering project background with knowledge and skills in analysis, design and implementation for some or all of the following areas: LAN, SD-WAN, WLAN, Cloud and Network Security Technologies
  • Technical oversight for complex projects incorporating multiple technology streams.

Comprehensive subject matter expertise with at least 3 of the following technologies:

  • Routing and Switching solutions
  • Firewall and Network Security technology
  • Wireless LAN infrastructure and RF Design
  • SD-WAN and SASE Solutions
  • Network Access Control and Authentication systems
  • Cloud Networking technologies – AWS and Azure

The additions to the networking landscape are obvious, and the old is still present: routing and switching covering IP routes, LAN for L2 switching, and firewalls. These are stable technologies covering Layer 2, 3 and 4 communications and network security. They include MAC address tables, VLANs, IP addressing, subnets, route tables, LAN, WAN, SP VPNs, Core/Agg Data Center, TCP/UDP port filters and access-lists, amongst other things. This is basic networking.

Next is Wireless LAN infrastructure and RF design. This covers Wi-Fi access points and WLCs. For example, an enterprise could be upgrading its wireless coverage. To cover an area like an office, RF design planning is required, which includes RF signal-strength considerations for AP placement in floor plans.

After that come SD-WAN and SASE. These are the new blokes in town due to changes in the networking landscape. Two things are driving SD-WAN:

  • Private MPLS WAN links being expensive
  • Public internet connections having become faster

Due to this, the WAN is shifting from SP MPLS VPN links to internet-backhauled links. This requires:

  • new branch edge devices
  • new branch edge design to include internet back-haul
  • new HQ design to include internet and SD-WAN HQ networking

That’s SD-WAN.

SASE is a Gartner term driven by new traffic sources and new traffic sinks. Enterprise traffic patterns have changed a bit. Traditionally, all enterprise traffic would come to the HQ via the WAN and then go out to the internet via a firewall there, and enterprise applications would also be hosted in the HQ. Now, cloud-based applications are increasingly accessed directly from the branch via internet connections and connections to the public cloud. This means new traffic patterns are present where the branch edge talks to cloud applications directly, without the HQ in between. Enterprise applications are now also in the cloud instead of an HQ PoP, and the edge is the branch. Gartner saw this new traffic pattern and suggested that there needs to be secure access at the service edge in the branch (SASE).

For any enterprise which wants to move towards an internet-based WAN and has also moved its applications to the cloud, the traffic patterns have shifted away from the HQ PoP. This means these enterprises will go for SD-WAN and SASE solutions to use internet backhaul and to secure the service edge. This is all new, but it makes sense and with sufficient push it can be implemented. Most of the work required will be:

  • new branch designs for SD-WAN internet back-hauled branches and
  • new HQ POP design to include internet-based SD-WAN HQ.

There are SD-WAN vendors which offer these MPLS WAN replacement solutions, either to augment the MPLS WAN or to replace it.

Most of the SASE work will be solution evaluation and vendor-assisted implementation via GUIs, as I see it.

Next on the list is Network Access Control and Authentication Systems. As I see it, this is 802.1X and RADIUS authentication, and it could also include Active Directory-based authentication/IAM. From a networking perspective, 802.1X will need to be enabled on the LAN ports. Perhaps RADIUS-based authentication will need to be integrated into systems, and perhaps firewall rules will be required for Microsoft AD access.

The final item on the list is Cloud Networking, which includes AWS and Azure connectivity. From my experience in hybrid and multicloud deployments, much of the work is integrating new links into existing networks and configuring routes to establish IP connectivity between new endpoints. Some IP subnets and endpoints are on-prem and some are in the cloud; routes are required and links are required. It still needs Layer 1 networking, with new links and SFPs for, say, Megaport or Direct Connect. It still needs a semblance of Layer 2, where Layer 2 is reachable across the Megaport or Direct Connect link. On top of these, routing reachability is established, with routes added in the relevant locations. For example, an on-prem firewall could be an L3 routing SVI location, and routes pointing towards the cloud would be added there. Similarly, routes would be added to routers and propagated so as to provide reachability to the cloud subnets inside the on-prem network. On the public cloud side, the VPC/VNet would have routes pointing this way, and there would be configuration items establishing the link between the relevant AWS Direct Connect or Azure ExpressRoute and the VPC/VNet. If it is multicloud, there could be multiple ASes involved, and BGP would be used to establish route exchange.
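
To make the cloud-side routing step concrete, here is a minimal sketch using boto3 against AWS. The route table ID, on-prem CIDR and virtual private gateway ID are placeholders, and the Azure equivalent would be a route table entry associated with the VNet; this is an illustration of the idea, not a tested deployment script.

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Point the on-prem address space back towards the Direct Connect / Megaport path
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",     # VPC route table (placeholder)
    DestinationCidrBlock="10.50.0.0/16",      # on-prem subnets (placeholder)
    GatewayId="vgw-0123456789abcdef0",        # virtual private gateway terminating the connection
)

# Alternatively, enable propagation so BGP-learned on-prem routes appear automatically
ec2.enable_vgw_route_propagation(
    RouteTableId="rtb-0123456789abcdef0",
    GatewayId="vgw-0123456789abcdef0",
)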

Most work on the AWS/Azure side is GUI-based unless Infrastructure as Code is used for large deployments. If Infrastructure as Code is used, then Ansible, Terraform, Git and Bitbucket will be used instead of the GUIs to configure the routes and the firewalls in the public clouds. Infrastructure as Code has two main parts: version control of the config code, and deployment of the updated code to the public cloud.

The other, non-technical requirement in the job ad is:

  • Development of high quality technical documentation including HLDs, LLDs, implementation plans, templates, standards, and knowledge-base articles to assist with knowledge sharing for the networking solution.

This is standard MSP or enterprise project work, where design documents, implementation plans, standard configs and wiki articles are part of the project.

Together, all this makes a Network Consulting Engineer.

Recently I was investigating systems modeling and stumbled across the ArchiMate language for enterprise architecture. ArchiMate is a classic case of defining a lexicon (language) for modeling a System of Systems.

An enterprise architecture is a System of Systems: it has business, application, technology and physical aspects, amongst many others. These are all full systems in themselves, integrated together to provide one larger system.

    Wikipedia relates the following on System of Systems:

    Current research into effective approaches to system of systems problems includes:

    • Establishment of an effective frame of reference
    • Crafting of a unifying lexicon (language)
    • Developing effective methodologies to visualize and communicate complex systems 
    • Formal modelling language with integrated tools platform

    https://en.wikipedia.org/wiki/System_of_systems

ArchiMate does all of the above. It provides an effective frame of reference, calling it views and viewpoints. It provides a unifying lexicon with all of the terminology listed in its specification. It is a formal modeling language and provides effective methods to visualize and communicate complex systems. There are also tools and software based on the ArchiMate language to define and display architectures. Therefore ArchiMate is effectively a System of Systems modeling language for the domain of enterprise IT systems.

Its specification is linked below and is very intuitive. It makes a good read for someone experienced.

    https://pubs.opengroup.org/architecture/archimate3-doc/toc.html

In computer science the notion of abstraction layers is very common. A simple explanation: electronics is an abstraction layer below RAM, and above RAM is the layer of a C++ memory read. Between the C++ memory read and the RAM there are many layers upon layers of software and hardware structures.
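
A quick way to see the same idea without leaving a Python prompt: a plain file read already sits on a visible stack of wrapper objects, each one an abstraction layer over the one below. The file path here is arbitrary; any readable file will do.

# Each object wraps the layer below it, down to the OS file descriptor.
f = open("/etc/hostname", "r")

print(type(f))             # <class '_io.TextIOWrapper'>  - text decoding layer
print(type(f.buffer))      # <class '_io.BufferedReader'> - buffering layer
print(type(f.buffer.raw))  # <class '_io.FileIO'>         - raw OS-call layer
print(f.fileno())          # the integer file descriptor handed to the kernel

f.close()

# Below the file descriptor sit further layers the program never sees directly:
# the system call interface, the filesystem, the block layer, the device driver
# and finally the storage hardware.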

It is observed that these abstraction layers are not tracked wholesomely. This lack of tracking is a shame: we have at our fingertips maps of the world down to the streets and houses, but no map of abstraction layers.

    There are terms in different fields such as:

    OSAL: Operating Systems Abstraction Layer.
    HAL (software): Hardware Abstraction Layer in Unix.
    DBAL: Database Abstraction Layer
    NAL: Network Abstraction Layer

    It’s clear that there are abstraction layers everywhere in computer science. In hardware, in software, in networks, in databases etc.

In the interest of the advancement of computer science, I believe an Abstraction Layers Foundation should be established: a body which would analyse the breadth and depth of abstraction layers, and the interactions amongst them.

I think that with the advancement of computing systems there exists space for a new discipline, Complex Systems Abstraction Layer Engineering, which would cover the tracking of abstraction layers and their interactions.

I justify this concept based on the increase in interdisciplinary skill-set requirements. For example, whereas earlier there were Front-End Engineers and Back-End Engineers, there now exists an entity called the Full Stack Engineer, capable of both front end and back end. Similarly, whereas earlier there were System Administrators and Network Administrators, there is a push in the industry to break down these isolated roles too. Similar things must be going on with databases and storage.

It is observed very clearly that one of the distinguishing factors between an experienced engineer and a new one is knowledge of the abstraction layers of the existing system. Experience in IT appears to go hand in hand with identifying how the abstraction layer you are working in interacts with other layers. For example, when a software engineer does software development, he writes a piece of software for one component in a larger picture. Over time this software engineer becomes an enterprise architect based on a thing called experience. I feel that much of this thing called experience is abstraction-layer knowledge. With time, this software engineer identified the relationship between his code, his application and the larger system. Over time he gained knowledge of the multiple layers in the system and how they interact. This knowledge of multiple layers and their interactions and dependencies is then used by the later enterprise architect to formulate the architecture of a different system. This is experience, but it is also abstraction-layers-scale knowledge.

    I feel that one big distinguishing factor between an Architect and an Engineer is abstraction layers knowledge. The architect has knowledge of the different layers in the system and the interactions amongst them. The engineer doesn’t.

    I feel that an Abstraction Layer Foundation would help to identify and track the multiple abstraction layers that exist. It would then help link systems scale knowledge.

    We are heading towards extremely complex systems and systems upon systems in IT. I feel tracking the breadth of the multiple abstraction layers in the entire computing domain under one body with the goal of identifying relationships will help the advancement of computer science.

One thing that is extremely important is that this foundation's work should be model-based and not document-based. There is a new field called Model-Based Systems Engineering.

"Model-based systems engineering (MBSE), according to INCOSE, is the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases.

It is a systems engineering methodology that focuses on creating and exploiting domain models as the primary means of information exchange, rather than on document-based information exchange.

MBSE methodology is commonly used in industries such as aerospace, defense, rail, automotive, industrial, etc." (Wikipedia)

To track something complex, plain documents are not enough; text documents have their limits. Software-structure-based models are being used to track complex systems under the field of Model-Based Systems Engineering (MBSE). The same should be done by the Abstraction Layers Foundation for complex IT systems: develop correlated, linked models of IT systems.

    First some definitions and news relating to the subject:

    Process Mining:

    Process mining is a family of techniques relating the fields of data science and process management to support the analysis of operational processes based on event logs. The goal of process mining is to turn event data into insights and actions.

    Digital Twin:

A digital twin is a virtual representation that serves as the real-time digital counterpart of an organization or process.

    News:

    https://newsroom.ibm.com/2021-04-15-IBM-to-Acquire-myInvenio-to-help-Organizations-Use-AI-powered-Automation-to-Better-Streamline-Business-Processes

    Background:

    myInvenio is a disruptive solution of Process Mining and Digital Twin of an Organization to automatically analyze business processes, monitor and predict the future trends of processes.

    Details:

    “Addition of myInvenio provides customers with a comprehensive AI-powered automation portfolio for business automation – including process mining, robotic process automation (RPA), document processing, workflow, and decisions”

    Notes:

Given the above details regarding process mining within an organization, creating a digital twin of an organization, and then moving towards greater efficiency, it is obvious that network automation is one piece of the puzzle. The efficiency of the whole organization depends on analyzing the processes and then putting the right kind of automation in place. Reading another article, I saw that one method of process mining is to conduct interviews across the organization to analyze what the staff do and to separate the human part from the machine-doable part. RPA, network automation and systems automation are one part of the puzzle. Gartner is calling this Hyperautomation.

    Details:

    Hyperautomation is the application of advanced technologies like RPA, …. and Process Mining to augment workers and automate processes in ways that are significantly more impactful than traditional automation capabilities. Hyperautomation is the combination of automation tools to deliver work.

Gartner's report notes that this trend was kicked off with robotic process automation (RPA). The report notes that RPA alone is not hyperautomation; hyperautomation requires a combination of tools to help replicate the pieces of work in which a human is involved.
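
To tie this back to the process mining definition above (turning event logs into insight), here is a toy sketch of the core idea: from a log of case / activity / timestamp rows, count which activity directly follows which. The miniature ticket log is invented purely for illustration; real tools such as the pm4py library, or products like myInvenio, build full process models and digital twins on top of exactly this kind of event data.

import pandas as pd

# An invented miniature event log: two support tickets and their activities.
log = pd.DataFrame({
    "case_id":  ["T1", "T1", "T1", "T2", "T2", "T2", "T2"],
    "activity": ["Ticket opened", "Assigned", "Resolved",
                 "Ticket opened", "Assigned", "Escalated", "Resolved"],
    "timestamp": pd.to_datetime([
        "2021-05-01 09:00", "2021-05-01 09:30", "2021-05-01 11:00",
        "2021-05-02 10:00", "2021-05-02 10:20", "2021-05-02 12:00",
        "2021-05-02 15:00"]),
})

# Order events within each case, then look at which activity comes next.
log = log.sort_values(["case_id", "timestamp"])
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)

directly_follows = (log.dropna(subset=["next_activity"])
                       .groupby(["activity", "next_activity"])
                       .size()
                       .sort_values(ascending=False))
print(directly_follows)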

Network transformation from a legacy HQ PoP and WAN architecture to a more modern public cloud and secure access architecture is an interesting project. It entails migrating connectivity and security to new solutions. I recently came across a job advertisement for such a transformation project. The job advertisement had the following list:

    • Build a resilient WAN environment in Megaport with Megaport Cloud Routers,
    • Connect on-prem data centres in Equinix
    • Megaport Cross Connects, migrate Azure ExpressRoute and AWS Direct Connect to Megaport.
    • Migrate on-prem data centre Internet to an ISP peering with Megaport with DDoS protection and migrate zScaler GRE tunnels to new Internet link.

    The above is the connectivity addition and migration towards the public clouds.

    The job ad continues:

    • Build physical Palo Alto firewalls for office locations and on-prem data centres (all in Sydney),
    • Build Panorama management solution and integrate with all Palo Altos including the ones currently hosted in Azure regions.
    • Redesign and build firewall policies with user-based firewall rules and Zero Trust Model.
    • Build Prisma Cloud and Prisma Access environment and migrate forward proxy from zScaler to Prisma for users and servers.
    • Build new RADIUS/NPS servers with Azure MFA and configure all network appliances to use these servers with RBAC policies and MFA prompt for admin privileges.

    The above is the new secure access (SASE) solution work to be done – Secure Access Service Edge.  

    And then the job ad lists:

    • Decommission Imperva DDoS, zScaler, legacy Cisco and Juniper firewalls and routers.

    There goes the old stuff. After the migration to public cloud and the SASE solution the previous network devices which aren’t needed are being decommissioned.

    The skills required are listed as:

    Primary Skills Required = Palo Alto (firewalls, Panorama, Prisma and Global Protect), Juniper SRX, Cisco Switching and Routing, Azure Networks, Citrix Netscalers (for load balancing and failover), Python scripting

Ancillary Skills Required = Splunk (syslog integration and queries), Azure automation, CI/CD with Azure DevOps

Additional Skills Preferred = Megaport, RADIUS/NPS

These are the skills the network engineer needs to carry out the transformation. Heaven knows how many already have them, as these transformations are a bit new.

Networking is transforming. This takes the enterprise's network to the next generation of solutions available. The two key items are public cloud and SD-WAN-based secure access. The enterprise transforms to adapt to new traffic flows and traffic patterns, which now differ in terms of sources and sinks: they include sources and sinks hosted in Azure or AWS (in addition to on-prem) and a remote workforce. These new dynamics also require a new security solution, which is very different from the simple firewall perimeter at the HQ PoP earlier. Therefore an SD-WAN-based security layer is added.

    Copied:

    Secure access service edge (SASE) is a network architecture that combines VPN and SD-WAN capabilities with cloud-native security functions such as secure web gateways, cloud access security brokers, firewalls, and zero-trust network access. These functions are delivered from the cloud and provided as a service by the SASE vendor.

    Let’s now dig a bit deeper.

The first part is L3 routing, BGP-style, and all the related work. The connectivity-related items are given again here so that we can divide and conquer:

    • Build a resilient WAN environment in Megaport with Megaport Cloud Routers,
    • Connect on-prem data centres in Equinix
    • Megaport Cross Connects, migrate Azure ExpressRoute and AWS Direct Connect to Megaport.
    • Migrate on-prem data centre Internet to an ISP peering with Megaport with DDoS protection and migrate zScaler GRE tunnels to new Internet link.

    Here is a good picture from Megaport documentation:

    Source: https://docs.megaport.com/mcr/route-advertisement/

One thing to note is the BGP ASNs. The red boxes are the Megaport Cloud Routers (MCRs), so an MCR is a BGP hop with its own ASN. We are looking at BGP peering configurations between on-prem and the MCR, and between the MCR and the public cloud. This means four BGP configurations: one on the on-prem device facing the MCR (a firewall or router), two on the Megaport MCR (one facing on-prem and one facing the public cloud), and one on the public cloud side facing the MCR.

This is the simplest form. The figure shows more because there could be private subnetworks hosted within the public cloud, and these might have separate BGP neighbourships, for example multiple Azure VNets. If there are multiple public clouds, like both Azure and AWS, then each will have separate routing configured at the MCR as well.

    The job ad again:

    • Megaport Cross Connects, migrate Azure ExpressRoute and AWS Direct Connect to Megaport.
    • Migrate on-prem data centre Internet to an ISP peering with Megaport with DDoS protection and migrate zScaler GRE tunnels to new Internet link.

So: new cross connects, new SFPs, new fibre optic cables and new port configs. Then a migration cutover of traffic from ExpressRoute and Direct Connect to the MCR, which amounts to route changes. With the Megaport MCR all set up, the next hops will be cut over so that traffic travels via Megaport instead of the old ExpressRoute and Direct Connect. Any GRE tunnels will be reconfigured in a cutover change window too.
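
A small post-cutover sanity check is the kind of thing the Python scripting requirement in the job ad points at. Here is a minimal sketch using the NAPALM library (one possible choice); the hostname, platform, credentials and the test prefix are all placeholders.

from napalm import get_network_driver

driver = get_network_driver("ios")            # platform of the on-prem edge device (assumed)
device = driver(hostname="onprem-edge-1",
                username="readonly",
                password="example-only")
device.open()

# Are the BGP sessions towards the Megaport MCR established?
for peer, info in device.get_bgp_neighbors()["global"]["peers"].items():
    state = "up" if info["is_up"] else "DOWN"
    print(f"{peer}  AS {info['remote_as']}  {state}")

# Does a cloud-hosted subnet now resolve via the new path?
for prefix, entries in device.get_route_to(destination="10.100.0.0/24").items():
    for entry in entries:
        print(f"{prefix} via {entry['next_hop']} ({entry['protocol']})")

device.close()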

    Now let’s move on to the newer security phenomenon which is securing the remote edge and devices.

    For this part the job ad states:

    • Build physical Palo Alto firewalls for office locations and on-prem data centres

This is simply a firewall rollout: installation and integration into the network. The Palo Alto documentation is your best friend, and some vendor help might be required, which comes when you buy the kit.

Job ad again:

    • Build Panorama management solution and integrate with all Palo Altos including the ones currently hosted in Azure regions.

    What is Panorama? Here it is:

    Panorama is a centralized management system that provides global visibility and control over multiple Palo Alto Networks next generation firewalls through an easy to use web-based interface. Panorama enables administrators to view aggregate or device-specific application, user, and content data and manage multiple Palo Alto Networks firewalls—all from a central location.

So it's a GUI, a centralized management server. It is a software application which will need to be installed on a VM or a server and configured to add all the Palo Alto firewalls rolled out above. The GUI/server will need to be added to the network, assigned an IP, and given IP-level connectivity to each firewall in order to administer it.

    Moving on another part of the job ad:

    • Redesign and build firewall policies with user-based firewall rules and Zero Trust Model.
    • Build Prisma Cloud and Prisma Access environment and migrate forward proxy from zScaler to Prisma for users and servers.

    What is Prisma Access?

    Prisma Access provides a network of cloud-based next-generation security gateways that secures traffic. Mobile workforces are distributed around the world, and Prisma Access for mobile users establishes points of presence for them to use.

    Prisma Access works together with the GlobalProtect agent/app on laptops and mobile devices. When a remote user has internet connectivity, the GlobalProtect app locates the best gateway available for the user’s location and sets up an IPsec/SSL VPN tunnel. All traffic passes through Prisma Access.

Therefore we can say that it is a secure gateway solution for the remote and mobile workforce; the zScaler equivalent for each remote worker. In essence, it appears to be an enforcement point: a client on the device sends traffic to Prisma Access, and Prisma Access secures it. It will use SD-WAN and application-level access policies under the hood.

    Prisma Access brings protection closer to users so traffic doesn’t have to back-haul to headquarters to reach the cloud. Prisma Access ensures Zero Trust Network Access (ZTNA) with service and application-specific access controls.

I think that practically this will require proxy connectivity configurations to be put in place, and policies to be added and removed via the Palo Alto management portal. From the Palo Alto documentation on configuring Prisma Access, it appears to involve mobile-device onboarding and Panorama connectivity onboarding, amongst other things.

    What is Prisma Cloud ?

    Prisma Cloud is the industry’s only comprehensive Cloud Native Security Platform (CNSP) that delivers full lifecycle security and full stack protection for multi- and hybrid-cloud environments.

Cloud Security Posture Management (CSPM) – Prisma Cloud provides:
• Visibility, compliance, and governance
  » Cloud asset inventory
  » Configuration assessment (runtime)
  » Compliance monitoring and reporting
  » Infrastructure-as-code (IaC) configuration scans (IDE, SCM, and CI/CD)
• Threat detection
  » User and entity behavior analytics (UEBA)
  » API-based network traffic visibility, analytics, and anomaly detection
  » Automated investigation and response

Cloud Infrastructure Entitlement Management – Prisma Cloud provides:
• Permissions visibility
• IAM governance
• Automated response
• User and entity behavior analytics (UEBA)

Prisma Cloud seems to be a cloud access security point: a checking point for developers, development environments and application security. So whereas Prisma Access secures the remote workforce, Prisma Cloud secures the cloud deployment by checking the users and entities accessing the servers in the public cloud. The Entitlement Management part is a high-priority item for the public cloud, where it functions as an identity and access management permissions checking point. From the above, it also does IaC config scans.

I think that practically this is also GUI-based user, account and permission addition and removal. There are accounts, groups, roles and policies to create. It is like the Active Directory or Cisco ISE of public cloud security.

    Job ad again:

    • Build new RADIUS/NPS servers with Azure MFA and configure all network appliances to use these servers with RBAC policies and MFA prompt for admin privileges.

    What is this about:

    The Network Policy Server (NPS) extension for Azure AD Multi-Factor Authentication adds cloud-based MFA capabilities to your authentication infrastructure using your existing servers. With the NPS extension, you can add phone call, text message, or phone app verification to your existing authentication flow without having to install, configure, and maintain new servers.

    The NPS extension acts as an adapter between RADIUS and cloud-based Azure AD Multi-Factor Authentication to provide a second factor of authentication for federated or synced users.

    So this is a two-factor authentication setup for the network nodes. It uses the RADIUS protocol for the user/permissions exchange. An extension appears to require installation, and here again I think users, roles, permissions and accounts will need to be set up, or perhaps they will be integrated from AD.
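    To make the RADIUS flow concrete, here is a minimal test-client sketch using the pyrad Python library against a hypothetical NPS server; the IP address, shared secret and credentials are placeholders. With the Azure MFA NPS extension in place, the Access-Accept only comes back after the second factor (phone call, text or app prompt) completes, so the timeout is raised accordingly.

```python
# Minimal RADIUS test client using pyrad (pip install pyrad).
# The server IP, shared secret and credentials below are placeholder assumptions.
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

# "dictionary" is the standard RADIUS attribute dictionary file shipped with pyrad's examples.
srv = Client(server="10.0.0.10", secret=b"sharedsecret", dict=Dictionary("dictionary"))
srv.timeout = 60  # the MFA prompt can take longer than the default RADIUS timeout

req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name="alice")
req["User-Password"] = req.PwCrypt("P@ssw0rd")

reply = srv.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("Access-Accept: primary auth and the MFA prompt both succeeded")
else:
    print("Access-Reject (or the MFA prompt was not completed)")
```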

    That's it. Two pieces of work are required: connecting to the public cloud and putting in a new security solution. One part needs BGP routing and network configs. The other needs firewall deployment, policies and user administration. Enterprise Transformation done right.

    Eight years ago, in March 2013, I covered NIST's definition of Cloud Computing:

    https://networkfuel.blog/2013/03/12/breaking-into-the-field-of-cloud-computing-networking/

    Now, after a while, it's time for NIST's Cybersecurity Framework. They have done a very good job again with this one. The NIST framework is definitely one of the best documents out there for managing your organization's cybersecurity. It covers the breadth and depth of what is required from cybersecurity teams.

    This is the Excel link: https://www.nist.gov/document/2018-04-16frameworkv11core1xlsx

    With 108 subcategories it seems daunting, but trust me, it's very intuitive. It has been divided into simple, intuitive sections that are easily understood by anyone who has worked in networks and systems at medium- to large-scale organizations that practice these things.

    Identify – Protect – Detect – Respond – Recover

    The details are below:

    Once you grasp this, it is easier to understand all the sections of the framework. To implement it in any organization, it is necessary to spend time and effort on each function and carry out the exercise required for each category and subcategory.

    This is important: not practicing the correct methodology could cause an incident in the organization that either leaks data or does other damage.
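    To make the structure concrete, here is a small Python sketch of how the five functions break down into categories (category names paraphrased from the CSF 1.1 core; the 108 subcategories would hang off these). It could grow into a simple gap-assessment checklist for your organization.

```python
# NIST Cybersecurity Framework v1.1 -- the five functions and their categories
# (names paraphrased). Subcategories would hang off each category in a fuller model.
CSF_CORE = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy",
                 "Supply Chain Risk Management"],
    "Protect":  ["Identity Management and Access Control", "Awareness and Training",
                 "Data Security", "Information Protection Processes and Procedures",
                 "Maintenance", "Protective Technology"],
    "Detect":   ["Anomalies and Events", "Security Continuous Monitoring",
                 "Detection Processes"],
    "Respond":  ["Response Planning", "Communications", "Analysis",
                 "Mitigation", "Improvements"],
    "Recover":  ["Recovery Planning", "Improvements", "Communications"],
}

# A trivial gap-assessment pass: mark each (function, category) pair as addressed or not.
assessment = {(fn, cat): False for fn, cats in CSF_CORE.items() for cat in cats}
assessment[("Identify", "Asset Management")] = True  # example: an asset inventory exists

for function, categories in CSF_CORE.items():
    done = sum(assessment[(function, c)] for c in categories)
    print(f"{function}: {done}/{len(categories)} categories addressed")
```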

    For this blog I have chosen to describe four areas of activity that exist around the globe. There is research and development going on in academic knowledge hubs and some large organizations. There are products being made and sold by organizations. There are services being provided by organizations. There is knowledge being sold by organizations.

    Research is mostly being carried out by academia and universities.

    Products are being made and sold by organizations based on market demands.

    Services are provided by organizations based on market needs.

    Knowledge is sold by organizations and individuals.

    To do research, people dive deep into a problem to solve it and invent a solution. To make a product, an exercise in engineering and manufacturing is carried out to build something. To provide a service, you have something you own or can do, and other people use it and pay you for the use. For knowledge, people or organizations know something and sell that knowledge and their analysis of it.

    What is more important: Invent, Make, Serve or Know?

    As things stand:

    • Some people are inventing as they have the budget to invest people into researching and inventing.
    • Some people are making because they have the source materials, the manufacturing capability and the people to make that thing.
    • Some people are providing a service because they have the people and equipment available to provide that service.
    • Some people are selling knowledge and analysis because they have that knowledge and are capable of providing an analysis on it.

    As IT evolves it appears feasible that pretty soon the majority of people on the planet will have a very capable smart device with them. One thing still lacking is a consensus global identification app which could be used to identify a person across countries. This only requires coordination and enablement. A global ID system already exists in the form of passports. Another unique global system is the phone number system, with country codes and numbers that are unique worldwide.

    Will it ever happen that the smartphone a person carries holds an e-passport and a phone number as an ID which can be used at airports? Here is my smartphone: it has my passport, my home country ID, my home country driving license, my phone number, and here is me. In addition to that, here is my LinkedIn, my Facebook, my Twitter, and here is me.

    The international credit card system is a very good example of coordination. It has a number, a name, an expiry and a CVC number. It is unique and can be used globally online.

    There could be a similar global ID card and a global eID which works. If anyone opts in, they will be a known Citizen of the World. If anyone opts out, they will be a disconnected individual who prefers some privacy.

    Cost of SaaS and SaaS operating income should always be the main concern when making an investment in SaaS or buying SaaS. Power, People, Vendors, Equipment: that pretty much sums up the game. Power as in electricity cost per unit, people as in cost of manpower, and vendors and equipment as in the manufacturers and organisations providing you the equipment.

    Cloud computing initially served Infrastructure as a Service (IaaS) purposes, where you get access to virtual machines running in an offsite cloud provider's data center. This gave organizations the advantage of not having to manage hardware and infrastructure like routers, switches, servers, storage boxes etc. Clouds then evolved to include Platform as a Service, where you get container and Kubernetes services: a platform more lightweight than virtual machines and hence cheaper. Clouds then further evolved to add FaaS – Functions as a Service – where very light units of code can be sent, processed and served, minimizing cost further by reserving an even lighter slice of infrastructure from the pool.

    In all cases – IaaS, PaaS or FaaS – one thing is common: you will still need to develop 'application software'. If you want to develop and deploy your application on VMs like Windows Server or CentOS/Red Hat, you can go for Infrastructure as a Service. If you want to develop and deploy on containers, you can go for Platform as a Service (PaaS). If you want to use a public cloud's Functions as a Service offering, then you develop and deploy custom code using the APIs exposed by the cloud and make an application. In any case you need to develop a software stack, which will be the Application.
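    As a tiny illustration of the FaaS path, here is a minimal Python handler in the AWS Lambda style; the event fields and response shape are illustrative placeholders. The point is that even on FaaS you are still writing application code, just without owning the server underneath.

```python
# Minimal Function-as-a-Service handler sketch (AWS Lambda style, Python runtime).
# The event fields and response shape are illustrative placeholders.
import json

def lambda_handler(event, context):
    # The cloud invokes this function per request/event; there is no server to manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```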

    The benefit of SaaS is that you get an end product: an already-developed software 'Application', ready to use.

    This is the best the cloud can get: click and start running a software application as per your need, and work and pay as you grow. The software is online, managed by a vendor, hosted on a cloud somewhere, and you simply use it. What could be easier? Your main business aim is to have an Application; that is why you buy IaaS, PaaS or FaaS, and on those three you still need to develop and deploy an application yourself. But if you opt for SaaS, you get a service immediately.

    To that end it is important to get the cost model right with your SaaS vendor, making sure you pay them a reasonable profit and get a reasonably priced service. The SaaS vendor should preferably have deployed their service on a FaaS or PaaS from one of the main clouds, or they should have a stable infrastructure of their own. Either way, their Power Usage Effectiveness (PUE) should be optimised so that their infrastructure's power bill is low, not high. Their ops costs should be optimised and their Total Cost of Ownership should be optimized.
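    For reference, PUE is simply total facility power divided by the power drawn by the IT equipment itself; the figures in the sketch below are made up.

```python
# PUE = total facility power / IT equipment power. A value near 1.0 means little
# overhead (cooling, UPS losses, lighting) per watt delivered to the IT load.
total_facility_kw = 1500.0   # illustrative: IT load + cooling + UPS losses + lighting
it_equipment_kw = 1000.0     # illustrative: servers, storage, network gear

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")    # 1.50 in this example
```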

    SaaS is the best the cloud can offer, and an organization must select a good SaaS provider. I am not aware of a SaaS inter-vendor protocol where you could migrate from one SaaS to another easily; it would be a strange thing in any case if it existed.

    Let's imagine your SaaS provider's costs:

    Data Center Power cost (Electricity), UPSs etc, Routers, Servers (CPUs, RAM, HD), Real Estate cost, Network Engineers, Systems Administrators, Operating System (Linux/Redhat etc), Virtualization (VMware/KVM etc), Fibre Optics SFPs, Fibre Optic Cables, CPUs, Data Center Operations Staff, Software Stack Operations Staff, Platform Operations Staff, Network and OS Operations Staff.

    List of sample SaaS cost factors above.

    On top of all this they are offering you a service online for a cost; at the back end they are managing all of this for you. Now imagine that this SaaS offering is being served from a location where the power bill is high and the operations staff bill is high. Imagine they are using a costly virtualization platform, or costly routers and switches, or costly servers. Or imagine they are using costly SFPs and cables, costly UPS battery backups, costly generator backups or other costly things. This would all add to the CoSaaS – the Cost of the Software as a Service offering. To minimize CoSaaS, the SaaS vendor needs to be using a highly optimized structure for all of the sample cost factors listed above. If they aren't, it will all boil down to a costlier SaaS offering for you.

    Therefore it is important to weigh your options. Do you have time to develop application software? Do you want to buy an IaaS, PaaS or FaaS offering and have the time to build a software application on top of it? Do you instead want to just get a service and pay for it via the SaaS model? Do you have the resources to build a data center and house the OS, virtualisation etc. on it?

    If you want to just get a service and pay for it via SaaS, then it is important to ask pointed questions of your SaaS vendor. What is their power bill like? What vendor's switches/routers are they using and where are they sourced from? What vendor's servers are they using and where are they sourced from? Are their routers and servers cheap or costly? Where are their operations teams located, and is their ops teams' bill costly? Ask them clearly where their data centers are located and whether power there is very expensive, because these things run on power, you know. Which virtualisation platform is the SaaS provider using, and does it have a costly license?

    Now it is simply time to compare TCO – Total Cost of Ownership. Make a list of your own cost and time factors. Do you have the time, cost and resources to make and build the equivalent of this Software as a Service in-house? Integration is also an engineering effort, so do you want to buy the above list of items from multiple different vendors, integrate them yourself in-house, and then run an application on top? How much would all of the above cost you? How much is the power bill in your area, the real estate bill in your area, and the license, router and server costs you would pay? In the end you can do a cost comparison: if you were to build, make, integrate, source and deploy in-house, roughly how much would it cost, and in comparison how much is the SaaS vendor asking for?

    An organization may simply do a project-planning cost-finding exercise to work out how much engineering and resource cost is needed to do something in-house, and how much, in comparison, the SaaS vendor is charging.
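    A rough, back-of-the-envelope version of that exercise might look like the sketch below; every figure is a placeholder assumption to be replaced with your own quotes and bills.

```python
# Hypothetical 5-year TCO comparison: build in-house vs. buy SaaS.
# All numbers are placeholder assumptions, not real quotes.
YEARS = 5

inhouse = {
    "hardware_capex": 120_000,             # routers, switches, servers, storage
    "build_and_integration_once": 60_000,  # design, install, integrate
    "licenses_per_year": 25_000,           # OS, virtualisation, backup
    "power_and_space_per_year": 18_000,    # electricity, real estate
    "ops_staff_per_year": 150_000,         # network / sysadmin / platform ops
}

saas_per_year = 95_000                     # vendor subscription quote

inhouse_tco = (inhouse["hardware_capex"]
               + inhouse["build_and_integration_once"]
               + YEARS * (inhouse["licenses_per_year"]
                          + inhouse["power_and_space_per_year"]
                          + inhouse["ops_staff_per_year"]))
saas_tco = YEARS * saas_per_year

print(f"{YEARS}-year in-house TCO: {inhouse_tco:,}")
print(f"{YEARS}-year SaaS TCO:     {saas_tco:,}")
print("SaaS looks cheaper" if saas_tco < inhouse_tco else "In-house looks cheaper")
```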

    I feel organizations small and large will slowly move to SaaS offerings from major providers. Major providers source cheaper equipment, they get better deals from their vendors, they get better power-bill locations and power deals, and they run 24/7 operations globally from various locations to get cheaper operations staff costs. In the end they will offer a SaaS stack for a cheaper price.

    To that end I think privacy dies here, and you are out of luck if privacy is a concern. The SaaS vendor's data center may be far off in another country where the power is cheaper, and your data is therefore at that location. The SaaS vendor's teams may be located across the globe; they run your application and, well, your privacy is dead.

    If privacy is indeed a concern and you want to seriously do things in-house, then one good option is to try Nextcloud. It is an open source stack you can host yourself; it provides a good service and can help with privacy and data-locality requirements.

    I feel SaaS is important because, in the end, you need a software application and you want to use it.

    I hope you find this blog post helpful.

    Thanks

    Habib

    In IT there are different types of work. Not everyone realises this, and not everyone knows which field of work they are in and which they are not in. For example, an operations engineer who has worked in operations for a considerable part of his career might find it difficult to adjust to a deployment role or an architecture position. If you are in one line, generally the hiring party knows your line and tries to select a person from within that line.

    There is a difference between running an already built system and setting up a system. There is a difference between setting up a system already designed and designing a new system. The design of the system depends on the requirements placed on that system and the tools and protocols you have at hand; the requirements of the application dictate the design. This work of requirement analysis and design is different from installing the system. Furthermore, design and installation are both different from running a system continuously.

    These are all different skills. Some people know all three and settle in one, while some people know only one and work within that one. Generally these are titled Operations, which is running a system; Deployment, which is installing a system; and Architecture, which is designing a system.

    Installation and Deployment fall under project execution work, and Architecture and Design fall under project planning. Running a system, i.e. operations, is generally considered non-project work. It is good for an engineer to know the line of business he is in and choose either to intelligently acquire further skills within his line or to acquire the skills of another line and move into that one. An operations engineer might work in ops for a few years, learn design skills and try to move into deployment project work. He may then move on to design and architecture work.

    Operations work normally runs 24/7, all week, 365 days of the year, and this requires weekend shifts, night shifts and on-call work. Project and deployment work is generally daytime office-hours work, but site installation is sometimes done after hours in a planned maintenance window. Architecture and design is mostly 9-5 business-hours work.

    Some operations roles are now done internationally in a follow-the-sun manner across countries and timezones. This means that in one country, when it is daytime, its engineers are on call and running the system; after sunset another country wakes up, and engineers in that timezone handle operations in their daytime. This is called follow-the-sun operations, and in organisations running like this, ops work is also daytime-only. Normally these are large organisations spanning the globe with a presence in multiple countries.

    Habib

    I came across a job ad titled Systems Reliability Engineer, which turns out to be a sort of hybrid engineering skill set. Its details are copied below. Bear with me while I break things down.

    The hybrid part in this is that it requires a combination of:

       – Linux/Unix/Virtualisation which falls generally under SysAdmin roles.

       – Networking which falls under Network Engineer roles

       – Storage and Server which generally falls under Storage/Backup Engineer

       – Kubernetes which is a container orchestrator and will provide a platform for a distributed application. This is a new field, but I think it's safe to say that Devops Engineer or Platform Engineer role titles handle this responsibility.

       – AWS/Azure/GCP Cloud which are Public cloud IaaS, PaaS or FaaS services.  This falls under Cloud Engineer or Devops Engineer.

    A combination of the above knowledge bank is required to function as a Systems Reliability Engineer here.

    And so we can say that a Systems Reliability Engineer is composed of a SysAdmin, Network Engineer, Storage Engineer, Devops Engineer, Platform Engineer and Cloud Engineer.

    Can we break this down a bit more?

    Starting with the application workload, suffice it to assume that the heavyweight applications this engineer will support are cloud-native, microservices-based applications requiring a networked distributed system to run. The application needs CPU cores, RAM, storage, IOPS and bandwidth at scale.

    Digging in further, it can be observed that the individual components require an OS and virtualisation (Linux/Unix/KVM etc. – SysAdmin). Networking these individual components requires L2/L3 networks (routers, switches – NetEng), and further on, what can be called a Distributed System OS is required, which presents to the application not individual components but the combination of servers/OS/routers/switches/vswitches/storage. Kubernetes can be said to be this Distributed System OS, providing orchestration and management of namespaces/containers. A distributed file system and storage servers will also be present. Certain parts of the application may interact with public clouds (AWS/Azure/GCP) to run some workloads on public cloud instead of on the local infrastructure.

    Oh dear, what a combination of knowledge bank and skillset this person needs!

    In Computer Science we work on the principle of abstraction layers, where each layer has science and phenomena within itself and then provides a function or service to another layer, so the whole system is composed of multiple abstraction layers interacting with each other. In this case the Systems Reliability Engineer requires knowledge spanning multiple abstraction layers. Traditional engineers have functioned within their own abstraction layer; their specific jobs have been complicated enough to require the tips and tricks of that same layer to make things work. An engineer working in the networking abstraction layer knows how to troubleshoot links, routing, SFPs etc., and an engineer working in the SysAdmin layer knows what to do with the Linux OS, KVM and so on. Similarly, an engineer working on public cloud may actually know the tips and tricks of one or two public clouds and not all three. Kubernetes and container management is itself now an abstraction layer.

    This job advertisement not only lists multiple abstraction layers but even within them lists multiple tools. For example, within virtualisation it lists KVM, ESXi and Hyper-V, all three famous hypervisors, and within public cloud it lists GCP, Azure and AWS, all three. So not only does it span abstraction layers, but even within abstraction layers it asks for familiarity with multiple pieces of software.

    In IT operations, knowing the right command or the right place to click sometimes matters a lot. Things don't proceed if you don't know the command, or don't know where to click, or what parameter to enter. Spanning abstraction layers and multiple tools within abstraction layers is a tricky job for IT operations. I am guessing they will have a team and will manage the skill set of the team rather than of individual engineers: multiple engineers with basic knowledge of the system and specific knowledge of one or two abstraction layers and two or three tools. Team-level skill-set management would be an important aspect here.

    The rest of the job description suggests this is an operations job, as it requires full work-week availability and troubleshooting skills as well. So this new hybrid engineer will be tasked with on-shift troubleshooting work, supporting customers, speaking to vendors and so on. It is important to note that this is not a project deployment or professional services job where you are reviewing designs, testing solutions, submitting BOMs, reviewing equipment lists, counting items, installing systems and configuring systems from scratch. This is an ops troubleshooting break-fix role. As such it requires a troubleshooting mindset and sufficient knowledge of the system's functions and its individual components to identify which part of the system is causing a bug or service impact. Once you identify which part is broken (e.g. networking or virtualisation), you might need to dig a bit deeper and review some logs within that component to a certain level. Thereafter the engineer makes an intelligent decision on either the actions to fix the component or whom to contact next to fix the problem. Each individual component will have its own level 3 support structure and vendor, and this Systems Reliability Engineer will identify whether networking is broken, or virtualisation is broken, or storage is broken, and so on. He will then attempt a certain level of fix and, if that is not enough, consult the right team or vendor.
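    As a small taste of what the Kubernetes slice of this ops work looks like, here is a minimal sketch using the official Kubernetes Python client (it assumes a working kubeconfig). It just lists pods that are not healthy, so the on-shift engineer can decide which abstraction layer to dig into next.

```python
# First-pass triage sketch with the official Kubernetes Python client
# (pip install kubernetes). Assumes a valid kubeconfig on the ops workstation.
from kubernetes import client, config

config.load_kube_config()   # inside a cluster, use config.load_incluster_config() instead
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        # Anything else (Pending, Failed, Unknown) is a candidate for deeper digging:
        # is it networking, storage, the node, or the application itself?
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```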

    As such, when we look at the multiple skill sets required, it looks very complicated for one person to know all of this. From my 13 years in IT (still ongoing), we are still in a siloed world where a network engineer with a CCNP is typically progressing towards senior network engineer and CCIE, or maybe only diversifying with an AWS or Azure skill. A comprehensive, non-siloed, cross-abstraction-layer engineer with Kubernetes, storage, public cloud, networking, virtualization and Linux knowledge will probably be difficult to find, because from what I see a lot of people are comfortable within their abstraction layer; such diversity is not necessary for them and is a big headache. Within networking, which is my field, I feel that network engineers are mostly proceeding with deeper design knowledge, or AWS/Azure diversification, or Python network automation knowledge as a career path. The same might be true for engineers within the virtualization/sysadmin layer, who might be developing inside that abstraction layer. Trickier still is that you need this cross-abstraction-layer engineer to have an ops and troubleshooting mindset and to be willing to do shifts on weekends. There will be few such people out there. Perhaps some incentives might be required to find the right diverse engineer willing to work weekends: incentives like permanent work from home, or accepting candidates from any nearby country in the right timezone.

    These are the new Hybrid Engineers.

    Update: I later came to know that they have mentioned they require two or three out of the listed skills. So it appears they are dividing the skill set at a team level.

    Habib

    Privacy Engineering has become a new subject. The recent implementation of the GDPR law by Europe, and similar laws following elsewhere, are affecting the internet. The internet is a beast that is difficult to control. There are forums and platforms on it where collaboration is fast and speedy, which is difficult to dictate and control. A person's otherwise private information is stored on servers in distant places.

    Recent concerns include children and their online exposure. Children and youth become members of various platforms on the internet, and their online activity is stored on the servers of the service they use. This means that their messages, writings, exchanges, search queries and any and all activity are present in some form on the internet. Whichever service they use will have networks spanning the globe, and this data might well be present in a different country. This means that the children's and youths' data of one country is present and stored (and possibly analyzed) in another country. This will be objectionable to the elders of that country, who might want to retain the data of their citizens within their country. To this end it appears that data centers might need to be established in the various countries which enact laws to keep their citizens' data in-country.

    One more concern is that large corporations of another country might have the data of your citizens. This means that they will have to comply with the laws of the country they belong to, which in turn means that any legal request for data will have to be complied with, and so your country's citizens' data could well be legally shared with another country's state agencies. It's a mixed bag: you have access to a free online service which shows you ads and provides free connectivity and free collaboration services, but as a result it has your data.

    One thought process is 'Who cares?'. As long as you are a normal, peace-loving citizen of any country who goes online to express ordinary, acceptable views according to acceptable standards, then who cares if your data is here, there or elsewhere? Another thought process is 'I do care': I don't want my data to be present anywhere at all and don't want my data to be accessible by any entity of any state.

    The fundamental question is whether one considers the state a friend or a foe, and whether one trusts the state or doesn't. Equally fundamental is whether one considers their own state, or a foreign state, trust-able with their data. These are all valid questions, and one person's view will differ from another's on this subject.

    Do you want to be blackmailed or maligned at any point in your life based on any of your past internet activity?
    Do you want your state to be able to blackmail or malign you based on your internet activity?
    Do you want another country to be able to blackmail or malign you based on your internet activity?
    Is your online data from your past online activity such that, if it went public, it could create problems for you? Or are you afraid that if it fell into any enemy's hands they could blackmail you?
    Do you want your children to be unknowingly doing things online at a young age for which they could be maligned or blackmailed at a later date?
    These are also valid questions, and one person's view on them may vary from another's.

    From these questions arises the field of Privacy Engineering where the technical, societal and legal aspects of privacy are raised. These are indeed difficult but valid questions. If a person with ill intent gains access to your online data this could definitely affect your life.

    The internet is a battleground for control and a battleground for competing civilizations as well. One person's acceptable content is another person's evil and unacceptable content. What is evil content and what is good content is not agreed upon between different groups of humans. Therefore they compete over the content on the internet.

    The internet also poses one of the most difficult situations on the planet for parents. Parents across the globe, who knowingly or unknowingly are defenders of the human spirit in their children, find it difficult to put their children on the internet. It is a challenging exercise to preserve the human spirit of your child if their heart and mind get exposed to evil inputs from the internet. Parents knowingly or unknowingly care for their child's human spirit and want to raise a good human being. This means ensuring that the eyes and ears of their children do not get exposed to evil content and evil inputs. The eyes and ears are direct paths to the mind and the heart; they affect the hearts of children, their spirit and their character. Access to the internet poses a grave risk and allows possible access to otherwise dangerous content.

    This poses a difficult situation where an attractive instrument exists in everyone's pocket which exposes their child's eyes, ears, heart and mind to content which they consider evil, inappropriate and unacceptable. At a young age, if the child starts browsing content on the internet and their immature mind gets exposed to information or content which they are not yet fit to absorb, they then begin to think about things which perhaps they cannot handle at that age. This is one of the most difficult situations on the planet for parents.

    This presents a field which could be named Internet Morality Engineering. Parents, I would guess, would be interested in Morality Engineering. (Morality: principles concerning the distinction between right and wrong or good and bad behavior.)

    This is again a mixed bag, where one man's morality is another man's immorality. Again the internet here becomes a battleground for a clash of civilizations, a clash of values and a clash of cultures.

    It is indeed a sad situation at the moment in the worldwide community that the internet is affecting the children and the youth of the planet in negative ways. It comes as a mixed bag: one can use the internet to gain insight into any subject, technical, scientific or social, but it can negatively affect children's thoughts, mindset and character as well.

    The internet and technology are possibly or definitely a cause of child and youth mental health problems. Exposure to mature content at a young, immature age will most definitely lead to mental health problems. On this issue again the states of the world are inactive, and the children and youth are getting exposed to everything on the net.

    At what age should a child have a smartphone with internet access? This is one of the most difficult questions for parents. The morality and privacy concerns around the internet's content put a great deal of stress on the parents of the world.

    It is an unfortunate state of affairs in the internet industry today that there aren't many good controls in place to manage children's and youths' internet exposure. States and governments don't appear to be doing much to protect their children's, or even adults', minds, hearts, eyes and ears from evil, unacceptable content on the internet.


    There seems to be a total lack of understanding of 'evil and unacceptable' and its definition, and there seem to be no state or government efforts underway to prevent exposure to evil things on the internet. This again becomes a case for Internet Morality Engineering, which would debate evil and evil content, goodness and good content. It is a contentious issue which would definitely be debated heavily, but do you want to avoid that debate today and have the children of the world, the children of your country, or even your own kids grow up with mental health problems?

    The hearts, minds and spirits of the children of the world are at stake, and Internet Morality should be debated heavily. The internet's governing laws around the globe should address goodness, evil, morality, good content, bad content, children's exposure, child mental health and youth mental health problems. This should not be left just to private individuals and small groups; it must be addressed in government forums at a large scale. It is important for the children of the world, the parents of the world, the future of the internet and the future of humanity that some control is exercised over the internet's content.

    The ears and the eyes are means of reaching the heart and the mind. What the ears and the eyes of children, youth and even adults get exposed to on the internet should be debated. What is acceptable exposure and what is unacceptable exposure? It is a difficult debate, but an important one.

    I would advocate that all content on the internet be 'tagged' like content on TV. Content on TV is tagged G (General), PG (Parental Guidance), M (Mature) and so on, and there are similar tags on content on streaming services, movies and games. I believe the same should be done for everything on the internet. There should be a G switch for the internet which can be enabled on a smartphone or a browser so that access becomes restricted to G-rated internet, much like the parental guidance mechanisms of streaming services. This will require at-source, i.e. server-side, programmatic enablement.

    In addition, a home and its community are an exposure controlled by parents for their kids. I would advocate that, in much the same way, there should be a service available on smartphones and all internet platforms where a parent restricts their kids' exposure to only allowed online acquaintances.

    I would say that, just as there is the IETF – Internet Engineering Task Force – there should be an IMETF – Internet Morality Engineering Task Force.

    This is very important. Keep in mind that not every parent is fully educated, fully literate or tech-savvy enough to control content at the user end. There should be at-source support for Internet Morality decisions.

    Habib

    Shuffling large amounts of data around is the new thing. A cloud there, a cloud here, a third cloud over there; all connected, giving you Multi-Cloud. But will you ever move a large amount of data off a cloud? Or onto a cloud? Or between two clouds?

    What is the Data Shuffling Cost between clouds? Is there Data Shuffle Lock-in?
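    As a toy illustration of data-shuffle cost, the sketch below multiplies a dataset size by a per-GB internet-egress rate; both numbers are placeholders, and real pricing varies by provider, tier and destination.

```python
# Back-of-the-envelope egress cost for moving a dataset off one cloud.
# Both figures are placeholder assumptions -- check your provider's pricing.
DATA_TB = 500
EGRESS_USD_PER_GB = 0.09        # illustrative internet-egress rate

cost = DATA_TB * 1024 * EGRESS_USD_PER_GB
print(f"Moving {DATA_TB} TB out would cost roughly ${cost:,.0f} in egress charges alone")
```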

    Imagine Serverless Lock-in and Data Shuffle Lock-in. Consider this with Data Gravity and Data Sovereignty.

    Consider Data creation location, Data transport mediums, Data handover location, Data processing location and results publishing location.

    Consider the stream-processing nature of data results: systems handling data at high velocity.