
I recently came across a system being designed to perform Objective-Driven AI, and I had a fun time exploring the topic on the internet.

Objective-Driven AI is being defined as AI systems that “can learn, reason, plan”.

Yann LeCun from Meta’s AI team has published these works on Objective-Driven AI:

I found out that current Generative AI does little causal modelling and mostly performs pattern matching. Causality refers to a cause-and-effect relationship, and two terms are important here: causal reasoning and causal inference. Causal ‘reasoning’ is a term from psychology for the process of working out that a cause produces a certain effect. Causal ‘inference’ is a mathematical and statistical methodology for establishing a cause-and-effect relationship from data.

And sure enough, putting the question to GPT-4 points the same way:

Question: “Can behavioral psychology be used as a foundation to provide the causal knowledge required here (to improve AI systems)?”

Answer by GPT-4: “Yes, behavioral psychology can indeed serve as a valuable foundation for providing the causal knowledge necessary for enhancing AI systems, particularly when it comes to understanding human behaviors and decisions. Integrating insights from behavioral psychology into AI involves translating patterns of human behavior into models that AI systems can learn from and predict.”

What does all this mean?

It means that, theoretically, Generative AI can be improved and turned into Objective-Driven AI (“can learn, reason, plan”) by making more use of:

  • Statistics, specifically causal inference
  • Behavioural psychology and its large library of existing research
  • Layering the above into Generative AI’s responses to give causal reasoning-based output (see the sketch after this list)
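To make the causal-inference bullet concrete, here is a minimal sketch, entirely my own illustration with made-up numbers, of why pure pattern matching and causal inference can disagree. A hidden confounder drives both the apparent cause and the effect, so the raw correlation overstates the effect; adjusting for the confounder (the classic backdoor adjustment) recovers the true one.

import numpy as np

# Simulated data: a confounder z drives both the "cause" t and the "effect" y,
# so naive pattern matching (raw difference in means) overstates t's effect.
rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)                        # confounder, e.g. lifestyle
t = (rng.random(n) < 0.2 + 0.6 * z).astype(int)  # "cause" depends on z
y = 2.0 * t + 3.0 * z + rng.normal(0, 1, n)      # true causal effect of t is 2.0

# Naive estimate: compare treated vs untreated directly (pattern matching).
naive = y[t == 1].mean() - y[t == 0].mean()

# Adjusted estimate: compare within each stratum of z, then weight by how
# common each stratum is (backdoor adjustment via stratification).
adjusted = sum(
    (z == s).mean()
    * (y[(t == 1) & (z == s)].mean() - y[(t == 0) & (z == s)].mean())
    for s in (0, 1)
)

print(f"naive estimate:    {naive:.2f}")    # around 3.8, inflated by z
print(f"adjusted estimate: {adjusted:.2f}") # close to the true 2.0

The gap between the two estimates is the gap this research wants to close: the naive number is what correlation-driven pattern matching sees, while the adjusted number is what causal knowledge recovers.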

This is an open area of research. It stems from frustration with Generative AI: it hallucinates, gives wrong outputs, and cannot plan. Generative AI is based on probability, statistics, and mathematical pattern matching (note: this computation runs mostly on NVIDIA GPUs, which is one reason NVIDIA’s share price is skyrocketing).

Generative AI always comes with a note saying it ‘can make mistakes’. These tools are simply not made for people who might put too much faith in their answers and trust the outputs blindly. Similarly, Generative AI cannot reason, learn, and plan the way humans can. For now, Generative AI is in no competition with human intelligence. But academia is working on closing the gaps, and Objective-Driven AI is a step in that direction. The details of Objective-Driven AI systems are given in the three links above.

Now let’s bring Qualcomm’s perspective into the picture:

Qualcomm is in the smartphone hardware business, and its CEO highlights:

  • Qualcomm CEO Envisions Smartphones as ‘Pervasive’ AI-Powered Personal Assistants

Qualcomm says that AI will one day run on smartphones and will free up human time for creative tasks, with the mundane tasks taken over by AI virtual assistants on the phone.

Given this vision of pervasive AI, and given Objective-Driven AI geared toward reasoning and planning, it is easy to conclude that AI will improve human lives in the years to come.

Interesting times ahead.

Thank you for reading, Habib

Reference link regarding Qualcomm: https://www.iqmetrix.com/blog/ces-2024-qualcomm-ceo-envisions-smartphones-as-pervasive-ai-powered-personal-assistants

A Digital Companion and Digital Twin for the Human Body to Improve Healthcare

In today’s world, computing power has increased enormously. Because of this, very large and deep simulation systems are now possible. One such concept is the digital twin. Here is how ChatGPT describes a Digital Twin:

“A Digital Twin is a virtual model designed to accurately reflect a physical object, system, or process. It is used to simulate, analyze, and predict real-world behaviors without the risks or costs associated with physical testing. By integrating data from various sensors and other sources, a Digital Twin can provide valuable insights into performance, potential problems, or maintenance needs. This technology is widely used across various industries, such as manufacturing, healthcare, and urban planning, to optimize processes, improve products, and drive innovation.”

Problem Statement: 

Why is a large and complete digital twin of the human body needed?

It has been said time and time again that food is medicine. But which food is good for you, and when? If someone has worked on a laptop for two hours, what food is better for them afterwards? In the opposite scenario, if someone has worked out in the gym for two hours, which food is good for them? Add to this that these people have various different habits too. Which food is good for which lifestyle, and which food is good for which person at what time?

I propose a Digital Twin of the Human Body so deep in its simulation that it can suggest, as a Digital Companion, the right food at the right time for a person. Or it can suggest the right exercise. Or it can suggest herbs, meditation, or simply Panadol.

All this is possible only if a good digital twin-level simulation structure is built for the human body: a complete simulation environment that can look at blood pressure, blood group, sugar levels, skeletal structure, exercise habits, hormone levels, existing medicines, pain levels, mental health conditions, eyesight, and so on ‘together’, and then suggest therapy and healthcare options.
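As a thought experiment, a query against such a digital companion might look like the sketch below. Every name, signal, and threshold here is hypothetical, and the hand-written rules merely stand in for what would really be the output of a deep simulation.

from dataclasses import dataclass

@dataclass
class BodyState:
    """A tiny slice of the signals a full digital twin would track."""
    systolic_bp: int        # mmHg
    blood_glucose: float    # mmol/L
    recent_activity: str    # e.g. "desk_work" or "gym"
    activity_hours: float

def suggest_food(state: BodyState) -> str:
    """Toy rules standing in for the twin's simulation output."""
    if state.recent_activity == "gym" and state.activity_hours >= 1:
        return "protein-rich meal with complex carbohydrates"
    if state.recent_activity == "desk_work" and state.blood_glucose > 7.0:
        return "light, low-sugar snack and a short walk"
    return "balanced meal; no special adjustment suggested"

print(suggest_food(BodyState(120, 5.2, "gym", 2.0)))        # post-gym case
print(suggest_food(BodyState(130, 7.8, "desk_work", 2.0)))  # desk-work case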

A master plan for this is already proposed in a research paper: 

Human Body Digital Twin: A Master Plan

By: 

Chenyu Tang, Wentian Yi, Edoardo Occhipinti, Yanning Dai, Shuo Gao, and Luigi G. Occhipinti

Link to the research paper: https://arxiv.org/pdf/2307.09225.pdf

This work will require a large-scale project covering physics, chemistry, biology, medicine, physiology, exercise therapy, nutrition, pharmacy, ophthalmology, and more. These silos need to be broken, and the various types of curing techniques also need to be brought together for better healthcare.

The path to a complete digital-companion healthcare service lies through a digital twin-level simulation of the human body. Thereafter, complex problems such as food-as-medicine can be solved: which food, for which person, at what time, is the right food? Which stretching exercise, out of two dozen stretches, is best for them? Which herbal therapy or which tea combination is good for them, and when?

A step in this direction is the making of a Digital Twin of the Human Body: a complete and deep simulation. Its prework is theoretical but deep, and the simulation environment can live in the cloud. The prework can include deep knowledge graphs connecting how the various Systems of Systems in the human body are linked.
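To illustrate the knowledge-graph prework, here is a tiny, hypothetical sketch: body systems as nodes, influences as labelled edges, and a traversal that asks what a change in one system might ripple into. The systems and edges are deliberately simplified illustrations, not medical facts.

# Hypothetical, simplified influence graph between body systems.
influences = {
    "digestive":   [("circulatory", "delivers glucose and nutrients")],
    "circulatory": [("muscular", "supplies oxygen and fuel"),
                    ("nervous", "supplies oxygen and glucose")],
    "nervous":     [("endocrine", "triggers hormone release"),
                    ("muscular", "controls contraction")],
    "endocrine":   [("circulatory", "hormones modulate heart rate")],
    "muscular":    [],
}

def downstream(system, seen=None):
    """Every system reachable from `system`, i.e. what a change may affect."""
    if seen is None:
        seen = set()
    for target, _label in influences.get(system, []):
        if target not in seen:
            seen.add(target)
            downstream(target, seen)
    return seen

# What could a dietary change (digestive system) ultimately influence?
print(downstream("digestive"))  # {'circulatory', 'muscular', 'nervous', 'endocrine'}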

Thank you,

Habib

In Computer Science the notion of Abstraction Layers is very common. A simple illustration: electronics is an abstraction layer below RAM, and above RAM sits the layer of a C++ memory read. Between the C++ memory read and RAM there are layers upon layers of software and hardware structures.

It is observed that these abstraction layers are not wholesomely tracked. This lack of tracking is very sad because, while we have at our fingertips maps of the world down to streets and houses, we have no map of abstraction layers.

There are terms in different fields such as:

  • OSAL: Operating Systems Abstraction Layer
  • HAL (software): Hardware Abstraction Layer in Unix
  • DBAL: Database Abstraction Layer
  • NAL: Network Abstraction Layer

It’s clear that there are abstraction layers everywhere in computer science: in hardware, in software, in networks, in databases, and so on.

In the interest of the advancement of computer science, I believe an Abstraction Layers Foundation should be established: a body which would analyse the breadth and depth of abstraction layers, and the interactions amongst them.

I think that with the advancement of computing systems there is room for a new discipline, Complex Systems Abstraction Layer Engineering, which should cover the tracking of abstraction layers and their interactions.
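As a first approximation of what such tracking could look like, here is a small, hypothetical sketch: each layer records the layer directly below it, and a query walks the stack between any two layers. The layer names echo the C++-read-to-RAM example above and are heavily simplified.

# Hypothetical, heavily simplified layer registry: each entry names the layer
# directly below it (None marks the bottom of this particular stack).
LAYER_BELOW = {
    "C++ memory read":   "language runtime",
    "language runtime":  "OS virtual memory",
    "OS virtual memory": "memory controller",
    "memory controller": "RAM",
    "RAM":               "electronics",
    "electronics":       None,
}

def layers_between(top, bottom):
    """Walk down from `top` to `bottom`, listing every layer crossed."""
    path, current = [top], top
    while current != bottom:
        current = LAYER_BELOW[current]
        if current is None:
            raise ValueError(f"{bottom!r} is not below {top!r} in this stack")
        path.append(current)
    return path

print(layers_between("C++ memory read", "electronics"))

Even this toy version makes the point: the moment layers are data rather than prose, questions like “what sits between X and Y?” become queryable.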

I justify this concept based on the increasing demand for interdisciplinary skill sets. For example, where earlier there were Front-End Engineers and Back-End Engineers, there now exists an entity called the Full-Stack Engineer, capable of both front end and back end. Similarly, where earlier there were System Administrators and Network Administrators, there is a push in the industry to break down these isolated roles too. Something similar must be going on with databases and storage.

It is observed very clearly that one of the distinguishing factors between an experienced engineer and a new one is knowledge of the abstraction layers of the existing system. Experience in IT appears to go hand in hand with identifying how the abstraction layer you are working in interacts with the other layers. For example, when a software engineer does software development, he writes a piece of software for one component in a larger picture. Over time this software engineer becomes an enterprise architect, on the strength of a thing called experience. I feel that much of this thing called experience is abstraction-layer knowledge. Over time, the engineer identified the relationship between his code, his application, and the larger system; over time, he gained knowledge of the multiple layers in the system and how they interact. This knowledge of multiple layers and their interactions and dependencies is then used by the later enterprise architect to formulate the architecture of a different system. This is experience, but it is also abstraction-layers-scale knowledge.

I feel that one big distinguishing factor between an Architect and an Engineer is abstraction-layers knowledge. The architect has knowledge of the different layers in the system and the interactions amongst them; the engineer doesn’t.

I feel that an Abstraction Layers Foundation would help to identify and track the multiple abstraction layers that exist. It would then help link knowledge at the scale of whole systems.

We are heading towards extremely complex systems, and systems upon systems, in IT. I feel that tracking the breadth of the multiple abstraction layers across the entire computing domain, under one body whose goal is identifying relationships, will help the advancement of computer science.

One thing that is extremely important is that this foundation’s work should be model-based and not document-based. There is a field called Model-Based Systems Engineering (MBSE):

“Model-based systems engineering (MBSE), according to INCOSE, is the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases.

It is a systems engineering methodology that focuses on creating and exploiting domain models as the primary means of information exchange, rather than on document-based information exchange.

MBSE is commonly used in industries such as aerospace, defense, rail, automotive, and industrial.” (Wikipedia)

Something complex is not tracked with documents; text documents have their limits. Software-structure-based models are used to track complex systems under the field of Model-Based Systems Engineering. The Abstraction Layers Foundation should do the same for complex IT systems: develop correlated, linked models of them.
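To close with a concrete illustration of the model-versus-document point: once a system description is a data model, its consistency can be machine-checked, something a text document cannot offer. The components below are invented, and the model deliberately contains one gap of the kind a text document would silently hide.

# Hypothetical component model with one deliberate error.
system_model = {
    "web_app":      {"depends_on": ["api_gateway"]},
    "api_gateway":  {"depends_on": ["auth_service", "orders_db"]},
    "auth_service": {"depends_on": ["users_db"]},  # "users_db" is never modeled
    "orders_db":    {"depends_on": []},
}

def undeclared_dependencies(model):
    """Return (component, missing dependency) pairs the model cannot resolve."""
    return [(name, dep)
            for name, spec in model.items()
            for dep in spec["depends_on"]
            if dep not in model]

print(undeclared_dependencies(system_model))  # [('auth_service', 'users_db')]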