Correct sorting with Frama-C and some thoughts on Formal Methods

12 02 2011

A couple of years ago, during my master's in Formal Methods, I worked with automatic provers and also used Frama-C, a tool that lets the user prove C code directly in the source, using a special notation in the comments called ACSL.

Frama-C allows you to make two kinds of proofs: security and safety ones. The safety ones are related to things like out-of-bounds array accesses; this kind of proof concerns the language itself and is easy to do if you use loop invariants and pre- and post-conditions.
If you use a higher-level language like Java, you will have almost no safety problems.
Because C is so close to machine-level code, we can do things that we do not intend (or maybe we do, and we use C exactly because it allows this kind of thing). For example:

// foo.c file
#include <stdio.h>

int main() {
    char *a = "I like you";
    char *b = "I hate you";
    
    // peek at the char* stored next to 'a' on the stack: undefined behavior,
    // but in practice it reads the neighbouring variable 'b'
    if(&a < &b) a = *(&a + 1);
    else        a = *(&a - 1);

    printf("%s\n", a); 
}

As you can see, I never used the b variable at all, I just declared it. And the result is:

[ulissesaraujocosta@maclisses:c]-$ gcc -o foo foo.c 
[ulissesaraujocosta@maclisses:c]-$ ./foo
I hate you

This lack of safety in the C language is one of the reasons we need to write safety annotations. Of course, this kind of thing is also why C is so fast and powerful: the person in charge is always the programmer. If you are interested in these tricks and want to understand more about them, and about smashing the stack and so on, feel free to read the other posts on my blog about this subject.

The other kind of statements (the security ones) is related to the functionality of the program, and that is basically where the problem, or the effort, lies; I will talk about this later on. First, let's see the algorithm and its implementation in C.

Code

The algorithm I use here is just a simple example. I chose bubble sort, a sorting algorithm that is not very efficient, but it uses no more memory than what is needed to store the structure you want to sort.
To get a visual understanding of the algorithm (and to see its inefficiency), check out this YouTube video.

This is the implementation of the algorithm:

void swap(int *i, int *j) {
    int tmp = *i;
    *i = *j;
    *j = tmp;
}

void bubbleSort(int *vector, int tam) {
    int i, j;
    for(i=0; i<tam; i++) {
        for(j=0; j<tam-i-1; j++) {
            if (vector[j] > vector[j+1]) {
                swap(&vector[j],&vector[j+1]);
            }
        }
    }
}

Pre- and post-conditions and thinking formally

So, as you can see in the video (or in the code), the algorithm is pretty simple: we traverse the array n times, and on each pass we compare each pair of adjacent elements and swap them if they are out of order.

We have as pre-conditions: the size of the vector must be greater than zero, and all the positions of the vector must exist; in Frama-C we use \valid_range(vector, i, j), where i and j are indexes of the vector, to say that all the elements between them exist.

tam > 0

\valid_range(vector, 0, tam-1)

And as post-conditions we must ensure that the array is sorted (I will talk about this predicate later).
You may think that this by itself is enough for a complete proof, but you would be wrong. Imagine that my function cleared all the elements of the array and filled it with \{1,2,...,tam\}: our code would be proved, and it would be wrong!

So, we need to say more... The first thing that may pop into your head is: OK, we will say that we have the same numbers at the beginning and at the end, and you write this:
\forall a : 0 \leq a < tam : (\exists b : 0 \leq b < tam : old(vector[b]) == vector[a])

In fact this is closer (but still not right): imagine that you give as input
\{4,7,9,1,0,3,4\}. If your code returns \{0,1,3,4,7,9\} (we lose the repeated 4), the code will still be proved.
So the solution is to write a Permut predicate and prove the property over the multiset.
So, these are the post-conditions:

Sorted{Here}(vector, 0, tam-1)

Permut{Old,Here}(vector, 0, tam-1)

Frama-C is really nice here because, in a post-condition, if we want to refer to the state at the beginning (before the function is called) we use Old, and if we want to refer to the moment after the call we have the Here keyword. Remember that we are in the post-condition, so it is evaluated at the end (so Here means the end of the function call).

Predicates

So, here is the Sorted predicate. Predicates receive a state label L and their parameters (just like a function) and return a boolean value (true or false). Inside, we use regular ACSL notation. Here I define that, for an array to be sorted, each element must be less than or equal to the next one.

/*@ predicate Sorted{L}(int a[], integer l, integer h) =
  @   \forall integer i; l <= i < h ==> a[i] <= a[i+1];
  @*/

Permut is defined inductively, so we receive two state labels L1 and L2, the array a, and the range over which we want the permutation to hold.
We write multiple rules for the permutation: reflexivity, symmetry, transitivity and, finally, the most important one, the swap case. So basically we say here that a permutation is a sequence of successive swaps.

/*@ inductive Permut{L1,L2}(int a[], integer l, integer h) {
  @  case Permut_refl{L}:
  @   \forall int a[], integer l, h; Permut{L,L}(a, l, h) ;
  @  case Permut_sym{L1,L2}:
  @    \forall int a[], integer l, h;
  @      Permut{L1,L2}(a, l, h) ==> Permut{L2,L1}(a, l, h) ;
  @  case Permut_trans{L1,L2,L3}:
  @    \forall int a[], integer l, h;
  @      Permut{L1,L2}(a, l, h) && Permut{L2,L3}(a, l, h) ==>
  @        Permut{L1,L3}(a, l, h) ;
  @  case Permut_swap{L1,L2}:
  @    \forall int a[], integer l, h, i, j;
  @       l <= i <= h && l <= j <= h && Swap{L1,L2}(a, i, j) ==>
  @     Permut{L1,L2}(a, l, h) ;
  @ }
  @
  @ predicate Swap{L1,L2}(int a[], integer i, integer j) =
  @      \at(a[i],L1) == \at(a[j],L2)
  @   && \at(a[j],L1) == \at(a[i],L2)
  @   && \forall integer k; k != i && k != j ==> \at(a[k],L1) == \at(a[k],L2);
  @*/

So, as you can see, the bubble sort function itself has 18 lines of code, and in the end, with the annotations for the proof, we end up with about 90 lines; but we proved it!

Thoughts

My main point here is to show the kind of thinking we need if we want to prove code in general. Whatever language you pick, this is about the easiest way you will find to prove software written in C. Sometimes, if your functions are too complex, you may need to do parts of the proof manually. The problem is not on the Frama-C side: Frama-C only generates the proof obligations to feed to automatic provers, such as Yices, CVC3, Simplify, Z3, Alt-Ergo and so on.

My point here is to show the cost of proving software. Proving software, especially when the language is very low level (like C, where you need to care about a lot more things), is hard work and is not easy for a programmer without theoretical background.
On the other hand, you end up with a piece of software that is proved. Of course, this proof is always requirements-oriented; by that I mean: if the requirements are wrong and the program is not doing what you expect, the proof goes along with that.
I do not advocate proving all the code on the planet, but rather the proper use of FM (formal methods) tools for critical software.

I have kept using Frama-C since I learned it in 2009; nowadays I use it for small critical functions (because I want to, not because I am encouraged to do so), and I have to say that widespread use of FM in industry is still far away. As I told you, Frama-C is the easiest automatic proof tool you will find, at least among those I know.

Talking with Marcelo Sousa about the use of FM in industry, we came to the conclusion that the people who build this kind of tool and have the FM knowledge do not start companies. I think that if more brilliant people like John Launchbury started companies, FM would definitely be used more.

Source code

Here is all the code together if you want to test it:

// #include <stdio.h>

/*@ predicate Sorted{L}(int a[], integer l, integer h) =
  @   \forall integer i; l <= i < h ==> a[i] <= a[i+1];
  @
  @ predicate Swap{L1,L2}(int a[], integer i, integer j) =
  @      \at(a[i],L1) == \at(a[j],L2)
  @   && \at(a[j],L1) == \at(a[i],L2)
  @   && \forall integer k; k != i && k != j ==> \at(a[k],L1) == \at(a[k],L2);
  @*/

/*@ inductive Permut{L1,L2}(int a[], integer l, integer h) {
  @  case Permut_refl{L}:
  @   \forall int a[], integer l, h; Permut{L,L}(a, l, h) ;
  @  case Permut_sym{L1,L2}:
  @    \forall int a[], integer l, h;
  @      Permut{L1,L2}(a, l, h) ==> Permut{L2,L1}(a, l, h) ;
  @  case Permut_trans{L1,L2,L3}:
  @    \forall int a[], integer l, h;
  @      Permut{L1,L2}(a, l, h) && Permut{L2,L3}(a, l, h) ==>
  @        Permut{L1,L3}(a, l, h) ;
  @  case Permut_swap{L1,L2}:
  @    \forall int a[], integer l, h, i, j;
  @       l <= i <= h && l <= j <= h && Swap{L1,L2}(a, i, j) ==>
  @     Permut{L1,L2}(a, l, h) ;
  @ }
  @*/

/*@ requires \valid(i) && \valid(j);
  @ //assigns *i, *j; //BUG 0000080: Assertion failed in jc_interp_misc.ml
  @ ensures \at(*i,Old) == \at(*j,Here) && \at(*j,Old) == \at(*i,Here);
  @*/
void swap(int *i, int *j) {
    int tmp = *i;
    *i = *j;
    *j = tmp;
}

/*@ requires tam > 0;
  @ requires \valid_range(vector,0,tam-1);
  @ ensures Sorted{Here}(vector, 0, tam-1);
  @ ensures Permut{Old,Here}(vector,0,tam-1);
  @*/
void bubbleSort(int *vector, int tam) {
    int j, i;
    j = i = 0;
    //@ ghost int g_swap = 0;

  /*@ loop invariant 0 <= i < tam;
    @ loop invariant 0 <= g_swap <= 1;
    //last i+1 elements of sequence are sorted
    @ loop invariant Sorted{Here}(vector,tam-i-1,tam-1);
    //and are all greater or equal to the other elements of the sequence.
    @ loop invariant 0 < i < tam ==> \forall int a, b; 0 <= b <= tam-i-1 <= a < tam ==> vector[a] >= vector[b];
    @ loop invariant 0 < i < tam ==> Permut{Pre,Here}(vector,0,tam-1);
    @ loop variant tam-i;
    @*/
    for(i=0; i<tam; i++) {
      //@ ghost g_swap = 0;
      /*@ loop invariant 0 <= j < tam-i;
        @ loop invariant 0 <= g_swap <= 1;
        //The jth+1 element of sequence is greater or equal to the first j+1 elements of sequence.
        @ loop invariant 0 < j < tam-i ==> \forall int a; 0 <= a <= j ==> vector[a] <= vector[j+1];
        @ loop invariant 0 < j < tam-i ==> (g_swap == 1) ==> Permut{Pre,Here}(vector,0,tam-1);
        @ loop variant tam-i-j-1;
        @*/
		for(j=0; j<tam-i-1; j++) {
            //@ ghost g_swap = 0;
            if (vector[j] > vector[j+1]) {
                //@ ghost g_swap = 1;
                swap(&vector[j],&vector[j+1]);
            }
        }
    }
}

/*@ requires \true;
  @ ensures \result == 0;
  @*/
int main(int argc, char *argv[]) {
    int i;
    int v[9] = {8,5,2,6,9,3,0,4,1};

    bubbleSort(v,9);

//     for(i=0; i<9; i++)
//         printf("v[%d]=%d\n",i,v[i]);

    return 0;
}

If you are interested in the presentation Pedro and I gave at our university, here it is:





Hylomorphisms in Haskell

9 04 2009

If you feel something is missing in this post, I suggest you start with Catamorphisms and Anamorphisms.

A hylomorphism is just the composition of a catamorphism after an anamorphism:
hylo~f~h~=~cata~f~\circ~ana~h; replacing that with the proper notation we have: [|f,h|]~=~(|f|)~\circ~[(h)]
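As a tiny warm-up (my own example, not from the original post), the same idea can be seen over ordinary Haskell lists: Data.List.unfoldr plays the role of the anamorphism, product plays the role of the catamorphism, and their composition is a hylomorphism computing the factorial.

import Data.List (unfoldr)

-- anamorphism: unfold n into the intermediate list [n, n-1, ..., 1]
countdown :: Integer -> [Integer]
countdown = unfoldr (\k -> if k == 0 then Nothing else Just (k, k - 1))

-- hylomorphism: catamorphism (product) after anamorphism (countdown)
factorial :: Integer -> Integer
factorial = product . countdown

-- factorial 5 == 120

The intermediate list never matters by itself; it only serves as the communication structure between the two morphisms, exactly as BTree will below.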

In this post I will use the structure of a binary tree:

data BTree a = Empty | Node(a, (BTree a, BTree a))

I use tuples so that I don't have to write uncurry's. As I will show you, when we say that we are making a hylomorphism over a particular data type T, what we mean is that the intermediate structure of our combination of catamorphism and anamorphism is that data type T. This is the structure through which our two morphisms communicate with each other.

Anamorphism

So, here I will solve the Quicksort algorithm with a hylomorphism over BTree.

The intermediate structure being a BTree doesn't mean that my function will receive a BTree. My qSort function works over lists. So the first thing to do is to draw the respective anamorphism from [a] to BTree~a:

My strategy here is to do all the work in the anamorphism, so I need a function h with type:
h : [a] \rightarrow 1 + a \times ([a] \times [a]), or in Haskell, h :: [a] \rightarrow Either () (a, ([a], [a]))

That function is qsep:

qsep :: [a] -> Either () (a, ([a], [a]))
qsep []    = Left ()
qsep (h:t) = Right (h,(s,l))
    where (s,l) = part (<h) t

part:: (a -> Bool) -> [a] -> ([a], [a])
part p []                = ([],[])
part p (h:t) | p h       = let (s,l) = part p t in (h:s,l)
             | otherwise = let (s,l) = part p t in (s,h:l)

This code is very simple: in qsep I choose a pivot element (the first one) and filter the smaller elements to one side and the others to the other side, just like in the algorithm. The function that does all that work is part: it processes the whole list, putting the elements that satisfy the condition p on the left side of the tuple and the others on the right side.

This function by itself does almost nothing; it is only one small part of the algorithm.
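For example, here is a small check of the splitting step (a usage sketch, assuming the qsep and part definitions above):

-- splitting around the pivot 4
checkSplit :: Bool
checkSplit = qsep [4,7,9,1,0,3,4] == Right (4, ([1,0,3], [7,9,4]))
          && part (< 4) [7,9,1,0,3,4] == ([1,0,3], [7,9,4])
-- checkSplit == True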

Catamorphism

The next step is to see the diagram for the catamorphism from BTree~a to [a]:

As I said before, the heavy lifting is on the anamorphism side, so here the catamorphism will be very, very simple. In fact it is:

inord :: Either () (a, ([a], [a])) -> [a]
inord = either (const []) join
    where join (x,(l,r)) = l ++ [x] ++ r

That's right! The only thing the catamorphism does is an in-order traversal of the structure 1 + a \times ([a] \times [a]), which is very simple, as shown by the code.

Hylomorphism

The first thing is to draw the diagram, now for the hylomorphism, the composition of the cata with the ana:

Having made the two most important parts of the function (the ana and the cata), the hylo is very simple to define. You just have to write a function hyloBTree:

hyloBTree h g = cataBTree h . anaBTree g

And our qSort function becomes:

qSort :: Ord a => [a] -> [a]
qSort = hyloBTree inord qsep
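As a quick sanity check (a usage sketch assuming the definitions above), note that duplicated elements survive the sort:

-- prints [0,1,3,4,4,7,9]
main :: IO ()
main = print (qSort [4,7,9,1,0,3,4])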

And that's it. Now I am going to show you all the code you need to put everything together and working.

-- the intermediate type and the sum/product combinators used below
data BTree a = Empty | Node (a, (BTree a, BTree a))

f -|- g = either (Left . f) (Right . g)

(f >< g) (x,y) = (f x, g y)

inBTree :: Either () (b,(BTree b,BTree b)) -> BTree b
inBTree = either (const Empty) Node

outBTree :: BTree a -> Either () (a,(BTree a,BTree a))
outBTree Empty              = Left ()
outBTree (Node (a,(t1,t2))) = Right (a,(t1,t2))

baseBTree f g = id -|- (f >< (g >< g))

cataBTree g = g . recBTree (cataBTree g) . outBTree

anaBTree g = inBTree . recBTree (anaBTree g) . g

hyloBTree h g = cataBTree h . anaBTree g

recBTree f = baseBTree id f

Outroduction

If you need more explanations feel free to contact me.





Anamorphisms in Haskell

8 04 2009

First I would like to introduce the notation I use here. Point-free notation is good for seeing a program's (a function's) data flow as a composition of functions, a combination of functions if you prefer. This style is characterized by not using variables in the definitions of functions. Haskell allows us to use this notation natively. The dual of the point-free notation is the pointwise one.

A simple example of a function in pointwise style:

f n = (n+2)*10 -- pointwise

The dual in pointfree would be:

f = (*10) . (+2) -- pointfree

Clarifications

First of all, to define a function, for example f, I can give its type and its defining equation, in either pointwise or point-free form.

I will assume that you are familiar with infix notation, either, and the composition (\circ) of functions.

Types

For this post I need to explain the data type we are going to use. In Haskell we define it by:

data Tree a = Node a [Tree a]

Let's create the same thing, but in a more convenient form. Consider the following isomorphic type for Tree:

data Tree a = Node (a, [Tree a])

We can see Node as the following function:

Node :: (a, [Tree a]) -> Tree a

So, in terms of types, we have (a, [Tree~a]). We use (\times) to say that two things occur in parallel, like tuples do, so we can rewrite it as: (a \times~[Tree~a])

Now we can say that (Tree~a) is isomorphic to (a \times~[Tree~a]).
This is just to say that (Tree~a) and (a \times~[Tree~a]) hold the same information without any change or loss. We represent that formally as: (Tree~a) \cong~(a \times~[Tree~a]).
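To make the isomorphism concrete, here is a small sketch of the two conversion functions that witness it (inTree reappears later in the post; outTree is a name of my own, added only for illustration):

data Tree a = Node (a, [Tree a])

-- one direction: from the pair into the tree
inTree :: (a, [Tree a]) -> Tree a
inTree = Node

-- the other direction: expose the root as a pair (hypothetical helper)
outTree :: Tree a -> (a, [Tree a])
outTree (Node p) = p

-- inTree . outTree == id  and  outTree . inTree == id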

Anamorphisms

Let A, B, C, D be inductive data types (sets) and in, ana, rec be functions.

ana(h)_{Tree} is the anamorphism of h if the diagram commutes.

We use the notation rec_{Tree} to say that the function rec is not generic, but only works for the Tree data type. The same happens with in and ana. We will write ana(h)_{Tree} using the composition of the in, rec and ana functions. That way we break our problem into smaller ones. So, in the end, we will have the following definition for ana(h)_{Tree}:

ana(h)_{Tree} = in_{Tree} \circ rec_{Tree} \circ h

The function that we want is ana(h), and that function is over (Tree~a) so we have:

ana :: (A -> B) -> A -> Tree c

Type C is (Tree~c). Maybe this isn't clear yet; let's start with the function in.

function in

The function in_{Tree} is responsible for establishing the isomorphism between (a \times~[Tree~a]) and (Tree~a), so the code could be something like this:

inTree :: (a, [Tree a]) -> Tree a
inTree = Node

In Haskell we represent the type (\times) as (,). So type D is (a \times~[Tree~a]). By now, we already know the following unifications: C \sim Tree~c and D \sim c \times~[Tree~c]. So now our diagram is:

function h

The function h is also known as the *gen*; this is where we state the step the recursion pattern performs. This is the only function we need to take care of: if this function is right, our problem is solved. Now imagine that our problem is:

Suppose that a pair of non-negative integers (v, p) denotes the number of red balls (v) and black balls (p) inside a bag, from which balls are taken randomly, one after another, until the bag is empty.

This is the pointwise version of the function we want to convert to point-free using anamorphisms. This function represents, as a tree, all the possible states of the bag over these experiments.

state :: (Int,Int) -> Tree (Int,Int)
state(0,0) = Node (0,0) []
state(v,0) = Node (v,0) [state(v-1,0)]
state(0,p) = Node (0,p) [state(0,p-1)]
state(v,p) = Node (v,p) [state(v-1,p),state(v,p-1)]

If we want state to become an anamorphism, we have to say that our type A unifies (\sim) with Int \times~Int, and Tree~c becomes more restricted and unifies with Tree (Int \times~Int). A consequence of changing the co-domain of in_{Tree} is that its domain changes to (Int \times~Int) \times~[Tree (Int \times~Int)]. We represent ana(h) as [( h )]. Now we can be more specific with our diagram:

function rec

Here we have to find a function rec whose co-domain is (Int \times~Int) \times~[Tree~(Int \times~Int)]. Probably the best choice is to pass the first part of the tuple (the part with type (Int \times~Int)) through unchanged, while the rest (the part with type [Tree~(Int \times~Int)]) is just a map of the function [(h)]_{Tree}. So now our diagram is:

As you can see, the second component of the co-domain of h is exactly the domain of the function map~[(h)]_{Tree}:

map~[(h)]_{Tree}~:~[(Int \times~Int)] \rightarrow~[Tree (Int \times~Int)]

So our final diagram becomes:

Now we just have to define the function h and feed it to our anamorphism over Tree.

h :: (Int, Int) -> ( (Int, Int), [ (Int, Int) ] )
h(0,0) = ( (0,0), [] )
h(v,0) = ( (v,0), [ (v-1,0) ] )
h(0,p) = ( (0,p), [ (0,p-1) ] )
h(v,p) = ( (v,p), [ (v-1,p), (v,p-1) ] )

And that is it! Now we can say that:
state \equiv~ana(h)_{Tree} where ana(h)_{Tree} = in_{Tree} \circ~(id~><~map~ana(h)_{Tree}) \circ~h

Outroduction

Here is all the code you need to run this example in Haskell:

module AnamorphismExample where

infix 5 ><

i1 = Left
i2 = Right
p1 = fst
p2 = snd

data Tree a = Node (a, [Tree a]) deriving Show

split :: (a -> b) -> (a -> c) -> a -> (b,c)
split f g x = (f x, g x)

(><) :: (a -> b) -> (c -> d) -> (a,c) -> (b,d)
f >< g = split (f . p1) (g . p2)

inTree :: (a, [Tree a]) -> Tree a
inTree = Node

anaTree h = inTree . (id >< map (anaTree h)) . h

-- our function
h_gen :: (Int, Int) -> ( (Int, Int), [ (Int, Int) ] )
h_gen(0,0) = ( (0,0), [] )
h_gen(v,0) = ( (v,0), [ (v-1,0) ] )
h_gen(0,p) = ( (0,p) , [ (0,p-1) ] )
h_gen(v,p) = ( (v,p), [ (v-1,p), (v,p-1) ] )

state = anaTree h_gen
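As a quick check (a usage sketch assuming the AnamorphismExample module above), a bag with one red and one black ball unfolds into the expected tree of states:

-- prints:
-- Node ((1,1),[Node ((0,1),[Node ((0,0),[])]),Node ((1,0),[Node ((0,0),[])])])
main :: IO ()
main = print (state (1,1))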

A year has passed since I promised this post. The next one will be about hylomorphisms; I promise it will not take that long.





Catamorphisms in Haskell

19 12 2007

Lately I have had a lot of work to do at university, so I am sorry for the irregularity of my posts.
By the way, I would also like to announce that this is my first deliberate post on Planet Haskell.

Why Catamorphisms and Point-Free

A catamorphism is the way we can express, in a single function, how a recursion pattern works over some data type.
Here I will use point-free notation, because what matters here is to show the data flow and the composition of functions.
Point-free style is used to think in data-flow terms, and it is very useful for program verification, applying formalism to our code.

In the point-free style of programming, programs are expressed as combinations of simpler functions. This notation is known for writing functions without their arguments. Pointwise is the usual way we write a function.

A couple of examples of point-free notation:
1)

sum = foldr (+) 0 -- point-free
sum l = foldr (+) 0 l -- pointwise

2)

f = (*10).(+2) -- point-free
f n = (n+2)*10 -- pointwise

Clarifications

First of all, to define a function, for example f, I can give its type and its defining equation, in either pointwise or point-free form.

I will assume that you are familiar with infix notation, const, either, uncurry and the composition (\circ) function.

Types

In Haskell we have this definition for lists:

data [a] = [] | a : [a]

Let's create the same thing, but in a more convenient form. Consider the following isomorphic type for lists:

data List a = Empty | Node(a,List a) deriving Show

To represent [1,2,3] we write Node(1,Node(2,Node(3,Empty))).

As you can see, to construct a (List a) we have two options, Empty or Node. Formally, we represent the constructor Empty as 1, and we use (+) to say that our two possibilities are 1 or Node. We can see Node as the following function:

Node :: (a,List a) -> List a

So, in terms of types, we have 1 + (a, List~a). We use (\times) to say that two things occur in parallel, like tuples do, so we can rewrite it as: 1 + (a \times~List~a)

Now we can say that (List~a) is isomorphic to (1 + a \times~List~a).
This is just to say that (List~a) and (1 + a \times~List~a) hold the same information without any change or loss.
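To make the isomorphism concrete, here is a small sketch of its two halves (the post only needs outList, which is defined again below; inList is a name of my own, added only for illustration):

data List a = Empty | Node (a, List a) deriving Show

-- from the shape 1 + a x (List a) back into the list
inList :: Either () (a, List a) -> List a
inList = either (const Empty) Node

-- from the list into the shape 1 + a x (List a)
outList :: List a -> Either () (a, List a)
outList Empty    = Left ()
outList (Node t) = Right t

-- inList . outList == id  and  outList . inList == id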

Catamorphisms as composition of functions

Let A, B, C, D be inductive data types (sets) and out, cata, rec be functions.

We will write cata(g)_{List} using the composition of the out, rec and g functions. That way we break our problem into smaller ones. So, in the end, we will have the following definition for cata(g)_{List}:

cata(g)_{List} = g \circ rec_{List} \circ out_{List}

The function that we want is cata(g), and that function is over (List~a) so we have:

cata :: (D -> C) -> List a -> C

Type A is (List~a). Maybe this isn't clear yet; let's start with the function out.

out

The function outList is responsible for establishing the isomorphism between (List~a) and (1 + a \times~List~a), so the code could be something like this:

outList :: List a -> Either () (a,List a)
outList Empty    = Left ()
outList (Node t) = Right t

In Haskell we represent the type 1 as (), (+) as Either and (\times) as (,).

So, type B is (1 + a \times~List~a).

function g

The function g is also known as the *gen*; this is where we state the step the recursion pattern performs. Imagine that we want to pour all the values of a (List~a) into a [a]:

-- pointwise
g :: Either () (a,[a]) -> [a]
g (Left()) = []
g (Right(a,h)) = a:h

-- pointfree
g = either (const []) (uncurry (:))
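A quick check of this gen (a usage sketch assuming the point-free g above):

checkGen :: Bool
checkGen = g (Left ()) == ([] :: [Int])
        && g (Right (1, [2,3])) == [1,2,3]
-- checkGen == True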

We represent cata(g) as (| g |).
Now we can be more specific with our diagram:

rec

Here we have to find a function rec that transforms 1 + (a \times~List~a) into 1 + (a \times~[a]). That function, a generic rec, will be:

f -|- g = either (Left . f) (Right . g)

(f >< g) x = ((f . fst) x , (g . snd) x)

recG f g h = f -|- (g >< h)

With that function we can say exactly what to do with the types 1, a, and List~a in the domain of rec.
So we want something like this:

rec g = recG id id g

That way we say that the (1 + (a \times~\_)) part stays the same in the co-domain (1 + (a \times~\_)) of rec. Now we need a function that receives a List~a and gives us a [a].
Yes, that function is (| g |)! So the final diagram becomes:

cata

Finally, we are going to define the function cata(g):

cata g = g . rec (cata g) . outList

where g is our *gen* function,

g = either (const []) (uncurry (:))

And our objective:

list2HaskellList = cata $ either (const []) (uncurry (:))
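And a quick check (a usage sketch assuming the List type, recG, rec and cata defined above):

checkConvert :: Bool
checkConvert = list2HaskellList (Node (1, Node (2, Node (3, Empty)))) == [1,2,3]
-- checkConvert == True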

More catamorphisms

Imagine we have the following data type:

data NList a where
    Leaf  :: a -> NList a
    NNode :: a -> [NList a] -> NList a

out, rec, and cata become:

out (Leaf a) = Left a
out (NNode a l) = Right (a,l)

Using the previous definitions of (-|-) and (><):

rec f = id -|- (id >< map f)
cata g = g . rec (cata g) . out

Imagining that g has the type:

g :: Either a (a,[[a]]) -> [a]
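As a concrete instance (my own example, assuming the out, rec and cata definitions just above), a gen of that type that flattens an NList in pre-order could be:

-- flatten an NList into a list, root first
flatten :: NList a -> [a]
flatten = cata g
  where g = either (: []) (\(a, ls) -> a : concat ls)

-- flatten (NNode 1 [Leaf 2, NNode 3 [Leaf 4]]) == [1,2,3,4]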

And the diagram for this cata becomes:

Conclusion

I have talked about catamorphisms without much formalism; the idea was to explain them to someone who does not know them yet.

I will talk more about catamorphisms and how to calculate programs with them.
In the future I would like to talk about anamorphisms too, and later on about point-free style over non-recursive functions.