10 01 2008

## Intro

As promised, I will talk about point-free style over non-recursive functions.
Point-free style makes it easier to prove properties of programs than pointwise style.
Because functions written this way have no variables, we can compose many small functions with each other using combinators.
Those combinators come with laws that tell us when each one can be applied. Also, because I think this is the easiest way to learn the point-free programming style, I will restrict the discussion to non-recursive functions.

Because of the length of what I intend to explain, I will spread this theme over three more posts. Consider this Part I.

In the following parts I will talk about:

• Co-Products (Part II)
• Constants and conditionals (Part III)
• Pointfree – Pointwise conversion & Program calculation (Part IV)

As I finish the other posts I will put their links here.

## Part I

In this post I will talk about products of functions and some of their rules.

I will use $D_{\huge{E}}F$, which stands for ‘def’, from ‘definition’. I can’t use ‘def’ directly in this LaTeX package, and I confess I didn’t investigate much; in fact, I like the output of $D_{\huge{E}}F$.

I will also use commutative diagrams to explain the rules, because I think that is the easiest way to grasp them.

## Combinators

The combinator $\noindent (\odot)$ is fundamental to point-free style.

$\noindent \{D_{\huge{E}}F-\odot\}:$
$\noindent (\odot)~:~(b~\rightarrow~c)~\rightarrow~(a~\rightarrow~b)~\rightarrow~(a~\rightarrow~c)$
$(f~\odot~g)~x~=~f~(g~x)$

In fact, $\noindent (\odot)$ is also the fundamental combinator of mathematical analysis.

Another very important function is the $\noindent id$ function, the identity.

$\noindent \{D_{\huge{E}}F-id\}:$
$\noindent id~:~a~\rightarrow~a$
$id~x~=~x$

$\noindent (\odot)$ and $\noindent id$ respect the following laws:

$\noindent \{assoc-\odot\}:$
$\noindent (f \odot g) \odot h~=~f \odot (g \odot h)$

$\noindent \{nat-id\}:$
$\noindent f \odot id~=~f~=~id \odot f$
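In Haskell, $(\odot)$ is the Prelude's composition operator `(.)` and $id$ is the Prelude's `id`. As a minimal sketch (the concrete functions and sample point are my own choices), we can check both laws on a sample input:

```haskell
-- (⊙) is Haskell's (.); id is the Prelude's identity function
main :: IO ()
main = do
  let f = (+ 1) :: Int -> Int
      g = (* 2)
      h = subtract 3
  -- {assoc-⊙}: (f ⊙ g) ⊙ h = f ⊙ (g ⊙ h), checked at a sample point
  print (((f . g) . h) 10 == (f . (g . h)) 10)
  -- {nat-id}: f ⊙ id = f = id ⊙ f
  print ((f . id) 10 == f 10 && (id . f) 10 == f 10)
```

Of course, a couple of sample points is not a proof; the equational reasoning below is what actually establishes such equalities.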

With just these laws we are ready to reason about programs. For example:

Let A, B, C, D be types and f :: A -> B, g :: B -> C, h :: C -> D.

Let’s prove that
$\noindent (h \odot g) \odot (id \odot (f \odot id)) = h \odot (g \odot f)$

To prove this equality we pick one side of the equation and, using $\noindent \{nat-id\}$ and $\noindent \{assoc-\odot\}$, try to obtain the other side.

In this case, we can do:

$\noindent (h \odot g) \odot (id \odot (f \odot id)) = h \odot (g \odot f)$
$\Leftrightarrow \{assoc-\odot\}$
$((h \odot g) \odot id) \odot (f \odot id) = h \odot (g \odot f)$
$\Leftrightarrow \{nat-id\}$
$(h \odot g) \odot (f \odot id) = h \odot (g \odot f)$
$\Leftrightarrow \{assoc-\odot\}$
$((h \odot g) \odot f) \odot id = h \odot (g \odot f)$
$\Leftrightarrow \{nat-id\}$
$(h \odot g) \odot f = h \odot (g \odot f)$
$\Leftrightarrow \{assoc-\odot\}$
$h \odot (g \odot f) = h \odot (g \odot f)$
$\Leftrightarrow TRUE$

1) In fact, as a convenience, we can drop the brackets in a sequence of compositions, which makes explicit uses of the $\noindent \{assoc-\odot\}$ law unnecessary.
2) We can also apply more than one rule per line.

So the previous demonstration becomes:

$\noindent (h \odot g) \odot (id \odot (f \odot id)) = h \odot (g \odot f)$
$\Leftrightarrow \{1,1\}$
$h \odot g \odot id \odot f \odot id = h \odot g \odot f$
$\Leftrightarrow \{nat-id,nat-id\}$
$h \odot g \odot f = h \odot g \odot f$
$\Leftrightarrow TRUE$

## split combinator

In any programming language we have two general ways to aggregate data: structs and unions.
Now we will see two combinators that work with those data types.

The $(\Delta)$ combinator is often called ‘split’. It is the combinator responsible for structs (tuples in Haskell).

Its definition is:

$\noindent \{D_{\huge{E}}F-\Delta\}:$
$(\Delta)~:~(c~\rightarrow~a)~\rightarrow~(c~\rightarrow~b)~\rightarrow~(c~\rightarrow~a \times b)$
$(f~\Delta~g)~x~=~(f~x,g~x)$

split :: (c -> a) -> (c -> b) -> c -> (a,b)

split f g x = (f x, g x)
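A quick usage sketch (the `average` function is my own illustration, not from the post): `sum` and `length` share the domain `[Double]`, so a single split computes both over the same input.

```haskell
split :: (c -> a) -> (c -> b) -> c -> (a, b)
split f g x = (f x, g x)

-- average pairs up the sum and the length of the same list, then divides
average :: [Double] -> Double
average = uncurry (/) . split sum (fromIntegral . length)

main :: IO ()
main = print (average [1, 2, 3, 4])  -- 2.5
```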

This combinator also allows us to combine functions that share the same domain.

With the functions ‘fst’ and ‘snd’ we can access the components of a tuple.

fst (a,b) = a

snd (a,b) = b

Let A, B, C be types and f : A -> B, g : A -> C, and suppose we want a function split : A -> B x C.

$\xymatrix @!=4pc {& A \ar[d]_{f \Delta g} \ar[dl]_{f} \ar[dr]_{g} & \\B & \ar[l]_{fst} B \times C\ar[r]_{snd} &C}$

Since the diagram commutes, we can infer two rules:

$\noindent \{univ-\times\}:$
$split = f \Delta g \Leftrightarrow fst \odot split = f \wedge snd \odot split = g$

$\noindent \{cancel-\times\}:$
$fst \odot (f \Delta g) = f$
$snd \odot (f \Delta g) = g$
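We can check $\{cancel-\times\}$ on a sample input (the functions and the sample point are chosen arbitrarily for illustration):

```haskell
split :: (c -> a) -> (c -> b) -> c -> (a, b)
split f g x = (f x, g x)

main :: IO ()
main = do
  let f = show  :: Int -> String
      g = (+ 1) :: Int -> Int
  -- fst ⊙ (f Δ g) = f
  print ((fst . split f g) 7 == f 7)
  -- snd ⊙ (f Δ g) = g
  print ((snd . split f g) 7 == g 7)
```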

Now, if A = B x C, we have $f \Delta g : B \times C \rightarrow B \times C$.

For the diagram to commute, we must take $f = fst$ and $g = snd$:

$\xymatrix @!=4pc {& B\times C \ar[d]_{fst \Delta snd} \ar[dl]_{fst} \ar[dr]_{snd} & \\B & \ar[l]_{fst} B \times C \ar[r]_{snd}& C}$
And we can infer the rule:
$\noindent \{reflex-\times\}:$
$fst \Delta snd = id_{B\times C}$

Let A, B, C, E be types and h : A -> B, f : B -> C, g : B -> E.

$\xymatrix @!=4pc {& A \ar[ddl]_{f \odot h} \ar[d]_{h} \ar[ddr]_{g \odot h} & \\& \ar[dl]_{f} \ar[d]_{f\Delta g} B\ar[dr]_{g} & \\C & \ar[l]_{fst} C \times E \ar[r]_{snd} & E\\}$

$\noindent \{fusion-\times\}:$
$(f \Delta g) \odot h = f \odot h \Delta g \odot h$
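The fusion law can also be checked on a sample point (functions chosen arbitrarily for illustration):

```haskell
split :: (c -> a) -> (c -> b) -> c -> (a, b)
split f g x = (f x, g x)

main :: IO ()
main = do
  let f = (* 2) :: Int -> Int
      g = (+ 100)
      h = subtract 1
  -- {fusion-×}: (f Δ g) ⊙ h = (f ⊙ h) Δ (g ⊙ h)
  print ((split f g . h) 5 == split (f . h) (g . h) 5)
```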

## Example – isomorphism demonstration

As an example, we can use these laws to prove isomorphisms.
Here is a brief description:

To prove the isomorphism $a \cong b$ we have to find two functions:
$f : a \rightarrow b$ and $g : b \rightarrow a$ such that $f \odot g = id \wedge g \odot f = id$

To do that we can use the diagrams, or discover the pointwise version and then convert it to point-free; I will show the latter technique later.

Now we will use the diagrams to show the isomorphism $a \times (b \times c) \cong c \times (b \times a)$.

$\xymatrix @!=4pc {& a\times (b\times c) \ar[d]_{h} \ar[dl]_{snd \odot snd} \ar[dr]_{g} & \\c & \ar[l]_{fst} c\times(b\times a) \ar[r]_{snd} & b\times a}$

Discovering g:

$\xymatrix @!=4pc {& a\times (b\times c) \ar[d]_{g} \ar[dl]_{fst \odot snd} \ar[dr]_{fst} & \\b & \ar[l]_{fst} b\times a \ar[r]_{snd} & a}$

So, the definition of the function h is:
$h : a \times(b \times c) \rightarrow c \times (b \times a)$
$h = snd \odot snd \Delta (fst \odot snd \Delta fst)$

$\xymatrix @!=4pc {& c\times (b\times a) \ar[d]_{h^{\bullet}} \ar[dl]_{snd \odot snd} \ar[dr]_{g^{\bullet}} & \\a & \ar[l]_{fst} a \times(b \times c) \ar[r]_{snd} & b \times c}$
Discovering $g^{\bullet}$

$\xymatrix @!=4pc {& c\times (b\times a) \ar[d]_{g^{\bullet}} \ar[dl]_{fst \odot snd} \ar[dr]_{fst} & \\b & \ar[l]_{fst} b\times c \ar[r]_{snd} & c}$

So, the definition of the function $h^{\bullet}$ is:
$h^{\bullet} : c \times (b \times a) \rightarrow a \times(b \times c)$
$h^{\bullet} = snd \odot snd \Delta (fst \odot snd \Delta fst)$

And since $h \odot h^{\bullet} = id = h^{\bullet} \odot h$, we conclude $a \times (b \times c) \cong c \times (b \times a)$.
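The same isomorphism written in Haskell, with $\Delta$ spelled as `split`; the round trip $h^{\bullet} \odot h$ returns the original value (the sample value is mine):

```haskell
split :: (c -> a) -> (c -> b) -> c -> (a, b)
split f g x = (f x, g x)

h :: (a, (b, c)) -> (c, (b, a))
h = split (snd . snd) (split (fst . snd) fst)

h' :: (c, (b, a)) -> (a, (b, c))
h' = split (snd . snd) (split (fst . snd) fst)

main :: IO ()
main = do
  let x = (1 :: Int, ('b', True))
  -- h• ⊙ h = id, checked on this sample value
  print (h' (h x) == x)
```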

## $\bigotimes$ combinator

When we want to define a function that maps a product to a product, we may use the combinator $(\bigotimes)$:

$\noindent \{D_{\huge{E}}F-\bigotimes\}:$
$\noindent f \bigotimes g~=~f \odot fst \Delta g \odot snd$

infixr 5 ><

(><) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
f >< g = split (f . fst) (g . snd)

Here is the diagram of its definition:
$\xymatrix @!=4pc {a \ar[d]_{f} & \ar[l]_{fst} \ar[dl]_{f \odot fst} a\times b \ar[d]_{f \bigotimes g} \ar[dr]_{g \odot snd} \ar[r]_{snd}& b \ar[d]_{g}\\c & \ar[l]_{fst}c\times d \ar[r]_{snd}& d}$

The following diagram,
$\xymatrix @!=4pc {& \ar[dl]_{h} A \ar[d]_{h \Delta i} \ar[dr]_{i}&\\B \ar[d]_{f} & \ar[l]_{fst} \ar[d]_{f \bigotimes g} \ar[r]_{snd} C & \ar[d]_{g} D\\E & \ar[l]_{fst} F \ar[r]_{snd} & G}$

could be simplified into this one:

$\xymatrix @!=4pc {& \ar[dl]_{f \odot h} A \ar[d]_{f \odot h \Delta g \odot i} \ar[dr]_{g \odot i}& \\E & \ar[l]_{fst} F \ar[r]_{snd} & G}$

And so, we find another rule:

$\noindent \{absor-\times\}:$
$(f \bigotimes g) \odot (h \Delta i) = f \odot h \Delta g \odot i$
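Again a sample-point check of the rule, with $(\bigotimes)$ spelled as `><` (functions chosen arbitrarily for illustration):

```haskell
split :: (c -> a) -> (c -> b) -> c -> (a, b)
split f g x = (f x, g x)

(><) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
f >< g = split (f . fst) (g . snd)

main :: IO ()
main = do
  let f = (+ 1) :: Int -> Int
      g = (* 2) :: Int -> Int
      h = length
      i = sum
  -- (f ⊗ g) ⊙ (h Δ i) = (f ⊙ h) Δ (g ⊙ i)
  print (((f >< g) . split h i) [1, 2, 3 :: Int]
         == split (f . h) (g . i) [1, 2, 3])
```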

The following diagram,
$\xymatrix @!=4pc {D \ar[d]_{h} & \ar[l]_{fst} A \ar[d]_{h \bigotimes i} \ar[r]_{snd} & G \ar[d]_{i}\\E \ar[d]_{f} & \ar[l]_{fst} B \ar[d]_{f \bigotimes g} \ar[r]_{snd} & H\ar[d]_{g}\\F & \ar[l]_{fst} C \ar[r]_{snd} & I}$
could be simplified into this one:
$\xymatrix @!=4pc {D \ar[d]_{f \odot h} & \ar[l]_{fst} A \ar[d]_{f \odot h \bigotimes g \odot i} \ar[r]_{snd} & \ar[d]_{g \odot i} G\\F & \ar[l]_{fst} C \ar[r]_{snd} & I}$
And so, we find another rule:

$\noindent \{functor-\times\}:$
$(f \bigotimes g) \odot (h \bigotimes i) = (f \odot h) \bigotimes (g \odot i)$

## Notes

The $\odot$ combinator has higher precedence than the $\Delta$ combinator.
So, $f \odot h \Delta g \odot h = (f \odot h) \Delta (g \odot h)$

## Acknowledgements

I would like to thank dysfunctor for the corrections made to this post.

19 12 2007

Lately I have had a lot of work to do at university, so I am sorry for the irregularity of my posts.
By the way, I would also like to announce that this is my first deliberate post on Planet Haskell.

## Why Catamorphisms and Point-Free

A catamorphism is a single function that captures how the recursive pattern of a data type works.
Here I will use point-free notation, because what matters here is showing the data flow and the composition of functions.
Point-free style encourages thinking in data-flow terms, and it is very useful for program verification, bringing formalism to our code.

In point-free style, programs are expressed as combinations of simpler functions: we write functions without mentioning their arguments. Pointwise is the usual way we write a function.

A couple of examples of point-free notation:
1)

sum = foldr (+) 0 -- point-free
sum l = foldr (+) 0 l -- pointwise

2)

f = (*10).(+2) -- point-free
f n = (n+2)*10 -- pointwise

## Clarifications

First of all, to define a function, say $f$, I write:

$\xymatrix {\mathbb{N} \ar[r]^{f} & \mathbb{R}}$ or
$\xymatrix {\mathbb{R} & \mathbb{N}\ar[l]^{f}}$ or
$\xymatrix {\mathbb{N} \ar[d]^{f} \\ \mathbb{R}}$

I will assume that you are familiar with infix notation, $const$, $either$, $uncurry$ and the composition function $\circ$.

## Types

In Haskell we have this definition for lists:

data [a] = [] | a : [a]

Let’s create the same thing, but in a more convenient form. Consider the following isomorphic type for lists:

data List a = Empty | Node(a,List a) deriving Show

To represent $[1,2,3]$ we write $Node(1,Node(2,Node(3,Empty)))$.

As you can see, to construct a $(List a)$ we have two options: $Empty$ or $Node$. Formally, we represent the constructor $Empty$ as $1$, and we use $(+)$ to say that our two possibilities are $1$ or $Node$. We can see $Node$ as the following function:

Node :: (a,List a) -> List a

So, at the type level, we have $1 + (a,List~a)$. We use $(\times)$ to say that two things occur together, as tuples do, so we can rewrite it as $1 + (a \times~List~a)$.

Now we can say that $(List~a)$ is isomorphic to $(1 + a \times~List~a)$.
This means that $(List~a)$ and $(1 + a \times~List~a)$ carry the same information, without any change or loss.

## Catamorphisms as composition of functions

Let $A$, $B$, $C$, $D$ be inductive data types (sets) and $out$, $cata$, $rec$ functions.

$\xymatrix@!=4pc{A \ar[r]^{out_{List}} \ar[d]^{cata(g)_{List}} & B \ar[d]^{rec_{List}} \\C & D\ar[l]^{g}\\}$

We will write $cata(g)_{List}$ as the composition of the $out$, $cata$, $rec$ functions. That way we break our problem into smaller ones. So, in the end we will have the following definition for $cata(g)_{List}$:

$cata(g)_{List} = g \circ rec_{List} \circ out_{List}$

The function that we want is $cata(g)$, and that function works over $(List~a)$, so we have:

cata :: (D -> C) -> List a -> C

Type $A$ is $(List~a)$. Maybe this isn’t clear yet; let’s start with the function $out$.

### out

The function $outList$ is responsible for the isomorphism between $(1 + a \times~List~a)$ and $(List~a)$, so the code could be something like this:

outList :: List a -> Either () (a,List a)
outList Empty    = Left ()
outList (Node t) = Right t

In Haskell we represent the type $1$ as $()$, $(+)$ as $Either$ and $(\times)$ as $(,)$.

So, type $B$ is $(1 + a \times~List~a)$.
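The other half of the isomorphism converts back; I will call it `inList` here (the name is mine — the post only defines `outList`):

```haskell
data List a = Empty | Node (a, List a) deriving (Show, Eq)

outList :: List a -> Either () (a, List a)
outList Empty    = Left ()
outList (Node t) = Right t

-- the inverse direction of the isomorphism
inList :: Either () (a, List a) -> List a
inList (Left ()) = Empty
inList (Right t) = Node t

main :: IO ()
main = do
  let l = Node (1 :: Int, Node (2, Empty))
  -- inList ⊙ outList = id witnesses the isomorphism
  print (inList (outList l) == l)
```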

### function $g$

The function $g$ is also known as the *gen*; this is where we state the step that the pattern performs. Imagine that we want to pour all the values of a $(List~a)$ into a $[a]$:

-- pointwise
g :: Either () (a,[a]) -> [a]
g (Left()) = []
g (Right(a,h)) = a:h

-- pointfree
g = either (const []) (uncurry (:))
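A quick sanity check of the *gen* on both constructors (the sample values are mine):

```haskell
g :: Either () (a, [a]) -> [a]
g = either (const []) (uncurry (:))

main :: IO ()
main = do
  print (g (Left ()) == ([] :: [Int]))          -- stop case produces []
  print (g (Right (1, [2, 3 :: Int])) == [1, 2, 3])  -- step case conses the head
```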

We represent $cata(g)$ as $(| g |)$.
Now we can be more specific with our diagram:

$\xymatrix@!=10pc{List~a \ar[r]^{out_{List}} \ar[d]^{(| g |)_{List}} & 1 + (a \times~ List~a) \ar[d]^{rec_{List}} \\[a] & 1 + (a \times~ [a])\ar[l]^{g}\\}$

## rec

Here we need a function $rec$ that transforms $1 + (a \times~List~a)$ into $1 + (a \times~[a])$. The general version of that function is:

recG f g h = f -|- (g >< h)

(-|-) :: (a -> b) -> (c -> d) -> Either a c -> Either b d
(f -|- g) (Left x)  = Left (f x)
(f -|- g) (Right x) = Right (g x)

(><) :: (a -> b) -> (c -> d) -> (a,c) -> (b,d)
(f >< g) x = ((f . fst) x , (g . snd) x)

With that function we can say exactly what to do with the types $1$, $a$ and $List~a$ in the domain of $rec$.
So we want something like this:

rec g = recG id id g

This says that the $(1 + (a \times~\_))$ shape of the domain is kept unchanged in the codomain $(1 + (a \times~\_))$ of $rec$. Now we need a function that receives a $List~a$ and gives us a $[a]$.
Yes, that function is $(| g |)$! So, the final diagram becomes:

$\xymatrix@!=10pc{List~a \ar[r]^{out_{List}} \ar[d]^{(| g |)_{List}} & 1 + (a \times~List~a) \ar[d]^{id + id \times (| g |)_{List}} \\[a] & 1 + (a \times~[a])\ar[l]^{g}\\}$

## cata

Finally, we define the function $cata(g)$:

cata g = g . rec (cata g) . outList

where $g$ is our *gen* function,

g = either (const []) (uncurry (:))
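Assembling all the pieces of this post into one runnable sketch — the composition order follows the diagram, $g \circ rec \circ out$, and the test value is my own:

```haskell
data List a = Empty | Node (a, List a)

outList :: List a -> Either () (a, List a)
outList Empty    = Left ()
outList (Node t) = Right t

(-|-) :: (a -> b) -> (c -> d) -> Either a c -> Either b d
(f -|- g) (Left x)  = Left (f x)
(f -|- g) (Right x) = Right (g x)

(><) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
(f >< g) (x, y) = (f x, g y)

-- rec keeps the 1 + (a × _) shape and applies f to the recursive position
rec :: (b -> c) -> Either () (a, b) -> Either () (a, c)
rec f = id -|- (id >< f)

cata :: (Either () (a, b) -> b) -> List a -> b
cata g = g . rec (cata g) . outList

list2HaskellList :: List a -> [a]
list2HaskellList = cata (either (const []) (uncurry (:)))

main :: IO ()
main = print (list2HaskellList (Node (1 :: Int, Node (2, Node (3, Empty)))))  -- [1,2,3]
```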

And our objective:

List2HaskellList = cata $ either (const []) (uncurry (:))

## More catamorphisms

Imagine we have the following data type:

data NList a where
  Leaf  :: a -> NList a
  NNode :: a -> [NList a] -> NList a

$out$, $rec$, and $cata$ become:

out (Leaf a)    = Left a
out (NNode a l) = Right (a,l)

Using the previous definitions of $(-|-)$ and $(><)$:

rec f = id -|- (id >< map f)

cata g = g . rec (cata g) . out

Imagine that $g$ has type:

g :: Either a (a,[[a]]) -> [a]

And the diagram for this cata becomes:

$\xymatrix@!=6pc{NList~a \ar[r]^{out_{NList}} \ar[d]^{(| g |)_{NList}} & a + (a \times~[NList~a]) \ar[d]^{id + id \times map~(| g |)_{NList}} \\[a] & a + (a \times~[[a]])\ar[l]^{g}\\}$

## Conclusion

I have talked about catas without any formalism; the idea was to explain them to someone who did not know them. I will talk more about catamorphisms and how to calculate programs with them. In the future I would like to talk about anamorphisms too, and later on about point-free style over non-recursive functions.

## foldr the magic function

20 11 2007

## Recursion

In Haskell we rely heavily on recursion, that is, defining a function in terms of itself. A classic recursive function is the factorial; in Haskell we define it like this:

fact 0 = 1
fact n = n * fact (n-1)

As you can see, Haskell reads like genuine mathematics: a pure functional language.

My point here is to show you the magic of the $foldr$ function in Haskell, so I will assume that you already know Haskell, and maybe are accustomed to using the $foldr$ function.
OK…

## Seeing the magic

The definition of the function $length$ could be something like this:

length [] = 0
length (h:t) = 1 + length t

The definition of the function $sum$ could be something like this:

sum [] = 0
sum (h:t) = h + sum t

And the definition of the function $mult$ could be something like this:

mult [] = 1
mult (h:t) = h * mult t

As you can see, in all three definitions we have a pattern, and we can generalize it:

functionName [] = stopCase
functionName (h:t) = constant operator (functionName t)

We always need the stopping case of the function! And the same happens in other functions, like $reverse$:

reverse [] = []
reverse (h:t) = reverse t ++ [h]

In this case we have:

functionName [] = stopCase
functionName (h:t) = (functionName t) operator constant

So, we will always apply the function to the tail of the list! If we use some lambda notation we can see one more pattern, the real one:

sum [] = 0
sum (h:t) = (\a b -> a + b) h (sum t)

reverse [] = []
reverse (h:t) = (\a b -> b ++ [a]) h (reverse t)

Our lambda functions have arity 2, because those are the two parts into which a list splits (head + tail). Now we can generalize even more, considering that $function$ has arity 2:

functionName [] = stopCase
functionName (h:t) = function h (functionName t)

Well, that’s not valid Haskell, because it doesn’t know what $function$ represents; the definition of $functionName$ does not receive it as an argument. And the same happens with $stopCase$.

functionName function stopCase [] = stopCase
functionName function stopCase (h:t) = function h (functionName function stopCase t)

Et voilà. The last definition is in fact the true definition of $foldr$. Easy, right?
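The derivation above can be tested directly; renaming the derived function to `myFoldr` to avoid clashing with the Prelude's `foldr`:

```haskell
-- the foldr we just derived, under a different name
myFoldr :: (a -> b -> b) -> b -> [a] -> b
myFoldr function stopCase []    = stopCase
myFoldr function stopCase (h:t) = function h (myFoldr function stopCase t)

main :: IO ()
main = do
  -- agrees with the Prelude's foldr
  print (myFoldr (+) 0 [1, 2, 3 :: Int] == foldr (+) 0 [1, 2, 3])
  -- and reproduces reverse with the lambda from above
  print (myFoldr (\a b -> b ++ [a]) [] "abc" == reverse "abc")
```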
## Functions with foldr’s

Well… now we can redefine the previous functions just with foldr’s:

length l = foldr (\a b -> 1 + b) 0 l
sum l = foldr (+) 0 l
mult l = foldr (*) 1 l
reverse l = foldr (\a b -> b ++ [a]) [] l

And all the other functions over lists…

and l = foldr (&&) True l
or l = foldr (||) False l

## More magic

Foldr is more magical than this: it is a catamorphism! A catamorphism over this data type:

data [a] = [] | a : [a]

Haskell lists! That means that the $foldr$ function “knows” the recursive pattern over lists.

Foldr also allows you to use point-free notation, which I like. Formal explanation.

length = foldr (\a b -> 1 + b) 0

## More foldr’s

We can also write foldr functions for every data type that we create, and “explain” how recursion works for our new data type:

data BTree a where
  Empty :: BTree a
  Node :: a -> BTree a -> BTree a -> BTree a

foldrBTree :: (a -> b -> b -> b) -> b -> BTree a -> b
foldrBTree op k Empty = k
foldrBTree op k (Node a l r) = op a (foldrBTree op k l) (foldrBTree op k r)

Now, if we have a binary tree and want to know how many nodes it has, we just do this:

countNodes = foldrBTree (\_ nodesLeft nodesRight -> 1 + nodesLeft + nodesRight) 0

If we want to know how many Empty’s our tree has:

countEmpty = foldrBTree (\_ nodesLeft nodesRight -> nodesLeft + nodesRight) 1

and so on… Folds are a genuine help for your code.

Later on I will talk more about catamorphisms…

## Parsing expressions

30 09 2007

Let’s create a function that takes a String representing a mathematical expression and turns it into a given data type (Exp o).
The String we will pass to the function to be processed will be something like "((1>=a) || (4==x))" or even "(2 -( ((1+3) * (4-x)) / 45))".

Let’s start at the beginning:

{-# OPTIONS -fglasgow-exts #-}
module ParserExp where

import Char

We choose our data type to represent, first, the operators:

data Ops = Add | Mul | Sim | Sub | Div
         | OR_ | AND_ | NOT_
         | GE_ | GT_ | LE_ | LT_ | EQ_ | NE_
         deriving Eq

We have just defined 14 mathematical operators: addition, multiplication, symmetric, subtraction, division, logical or, logical and, negation, greater than, less than, greater than or equal, less than or equal, equal and not equal.

Next we define our expressions as an n-ary tree:

data Exp o = Const Int
           | Var String
           | Op o [Exp o]

Now we can see that the chosen data type fits what we want perfectly. For instance, "((1>=a) || (4==x))" can be represented as Op OR_ [Op GE_ [Const 1, Var "a"], Op EQ_ [Const 4, Var "x"]].

We have to create a class so we can know the arity of each operator:

class Opt o where
  arity :: o -> Int

instance Opt Ops where
  arity Add = 2
  arity Mul = 2
  arity Sim = 1
  arity Sub = 2
  arity Div = 2
  arity OR_ = 2
  arity AND_ = 2
  arity NOT_ = 1
  arity GE_ = 2
  arity GT_ = 2
  arity LE_ = 2
  arity LT_ = 2
  arity EQ_ = 2
  arity NE_ = 2

Next we write the Show instances for our data types, leaving us with:

instance Show Ops where
  show Add = "+"
  show Mul = "*"
  show Sim = "Sim"
  show Sub = "-"
  show Div = "/"
  show OR_ = "||"
  show AND_ = "&&"
  show NOT_ = "!"
  show GE_ = ">="
  show GT_ = ">"
  show LE_ = "<="
  show LT_ = "<"
  show EQ_ = "=="
  show NE_ = "!="

instance (Opt o, Show o) => Show (Exp o) where
  show (Const n) = show n
  show (Var s) = s
  show (Op o l) | arity o == 2 = "(" ++ (show $ head l) ++ show o ++ (show $ last l) ++ ")"
                | arity o == 1 = "(" ++ show o ++ (show $ head l) ++ ")"

Right… let’s finally get to what really interests us: our Parser!

type ReadS a = String -> [(a,String)]

In other words, it is easy to see that this type must be the signature of our function, since only this way can we process a String while (always!) producing a list of pairs whose first component has our desired type (the n-ary tree) and whose second component holds the rest of the String not yet processed:

Remember that our data type is (Exp Ops), not just Exp.
So we have:

[((leOp "~" [a]),p4) |  ("(",p1)  <- lex s,
("~",p2) <- lex p1,
(")",p4)  <- lex p3] ++
[((leOp op [a,b]),p5) | ("(",p1)  <- lex s,
(op,p3)  <- lex p2,
op == "+"  || op == "*"  || op == "/"  ||
op == "-"  || op == "||" || op == "&&" ||
op == "==" || op == "!=" || op == "=" || op == "!"  || op == ">"  ||
op == "<",
(")",p5) <- lex p4 ] ++
[((Const ((read a)::Int)),sx) | (a,sx) <- lex s, all isDigit a] ++
[((Var a),sx)                 | (a,sx)  [Exp Ops] -> Exp Ops
leOp o = Op (read o::Ops)

Believe it or not, our parser boils down to these 21 lines (with generous indentation).

Here we must give credit to list comprehensions, and then to the lex function that comes in the Prelude. The power of this function is incredible.
I recommend looking further into it.

And finally we have the Read instance for Exp Ops:

From now on, whenever we want to parse an expression, we use:

parsing :: String -> Exp Ops

By using the parsing function we ensure that we always get an Exp Ops, which is what we want.

This parser is quite simple and, as such, does no error detection or handling of malformed input in the String it receives.

That’s all!

code here

## Files with comments

29 09 2007

When I first started programming in Haskell, working on smsRecroder (a very lame application to record SMSs), I remember desperately needing a function