What is a number? You normally don't ask that, nor do you answer it directly. Perhaps you answer indirectly: examples, how to use numbers, how to work with numbers, some properties (axioms and theorems) like how addition is commutative. And lastly, you learned numbers by practice.
Likewise with vector spaces. Remember those? Your best bet at teaching or learning them is again through examples, applications, properties, and ultimately practice.
The same approach is best for functor, applicative, and monad. Don't seek a direct answer to “what is a monad?”. Do look for and work out more examples, applications, properties. If you can write correct code, fix incorrect code, and predict the outcome of other people's code, you already understand more than you think, even if you can't write a sounds-good essay.
Practice. 40 hours a day.
This is the standard library map function (e.g., map f [x, y, z] = [f x, f y, f z]), but I give it a different name:

fmap_List :: (a -> b) -> [] a -> [] b    -- "[] a" means "[a]" in types.
fmap_List f [] = []
fmap_List f (x:xs) = f x : fmap_List f xs

(in FAM.hs)
Definition of the Maybe type from the standard library:

data Maybe a = Nothing | Just a
Two perspectives: a data structure that stores at most one element, or a computation that either fails or succeeds with one answer. (More on the second perspective later.)
There is a function for Maybe analogous to fmap_List:

fmap_Maybe :: (a -> b) -> Maybe a -> Maybe b
fmap_Maybe f Nothing = Nothing
fmap_Maybe f (Just a) = Just (f a)

(in FAM.hs)
Definition of the Either type from the standard library:

data Either e a = Left e | Right a
It's like Maybe, but the “no answer” case carries extra data, perhaps some kind of reason for why there is “no answer”. And you guessed it, there is an analogous function for Either:

fmap_Either :: (a -> b) -> (Either e) a -> (Either e) b
fmap_Either f (Left e) = Left e
fmap_Either f (Right a) = Right (f a)

(in FAM.hs)
Also arrays, dictionaries, tree structures,… most data structures you can think of, and then some.
Common theme: (a -> b) -> (F a -> F b), where F is a parametrized type. There is a class for that!

class Functor f where
    fmap :: (a -> b) -> (f a -> f b)
A data structure example:

data BinTree a = BTNil | BTNode a (BinTree a) (BinTree a)
    deriving Show

instance Functor BinTree where
    -- fmap :: (a -> b) -> BinTree a -> BinTree b
    fmap f BTNil = BTNil
    fmap f (BTNode a lt rt) = BTNode (f a) (fmap f lt) (fmap f rt)

(in FAM.hs)
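For example, a quick check (my addition, not in FAM.hs): fmap on a small tree keeps the shape and transforms each element.

-- Not in FAM.hs: a small sanity check of the BinTree Functor instance.
testBinTreeFunctor :: BinTree Integer
testBinTreeFunctor = fmap (* 10) (BTNode 1 (BTNode 2 BTNil BTNil) BTNil)
-- should give BTNode 10 (BTNode 20 BTNil BTNil) BTNil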
fmap should satisfy these two axioms/laws:

-- 1. Functor identity
fmap (\x -> x) xs = xs
-- i.e., fmap (\x -> x) = \xs -> xs
-- i.e., fmap id = id
-- and to elaborate the types, since there are two "id"s:
fmap (id :: a -> a) = (id :: f a -> f a)

-- 2. fmap fusion, fmap is a homomorphism
fmap g (fmap f xs) = fmap (\x -> g (f x)) xs
-- i.e., fmap g . fmap f = fmap (g . f)
Discussion: If you notice that, for data structures, fmap seems to always clone the shape and apply f elementwise to the elements, you're right. The axioms are saying this very abstractly and indirectly. Example: If you tried to be naughty and write fmap for BinTree to perform a left-rotation at the root, you would break both axioms. (Exercise: How?)
Functor on its own does not have much basic practical use, apart from providing a common name “fmap”. But it is much more useful when combined with Applicative and Monad methods below. It also has an advanced practical use (if you want this rabbit hole: “lens”).
On the other hand, Functor is extremely important in category theory (another rabbit hole!).
fmap can apply a unary function elementwise to one list (or a Maybe, or…). What if you want to apply a 2-ary function to all pairs of elements of two lists (or two Maybes, or…), like applying the 2-ary function to the cartesian product? I mean like this:
  liftA2_List (-) [10,20,30] [1,2,3]
= [10-1, 10-2, 10-3, 20-1, 20-2, 20-3, 30-1, 30-2, 30-3]
= [9, 8, 7, 19, 18, 17, 29, 28, 27]
liftA2_List :: (a -> b -> c) -> [a] -> [b] -> [c]
liftA2_List f [] _ = []
liftA2_List f (a:as) bs = map (f a) bs ++ liftA2_List f as bs

liftA2_Maybe :: (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
liftA2_Maybe f Nothing _ = Nothing
liftA2_Maybe f (Just a) Nothing = Nothing
liftA2_Maybe f (Just a) (Just b) = Just (f a b)
-- The last two lines could be merged into:
-- liftA2_Maybe f (Just a) mb = fmap (f a) mb

liftA2_Either :: (a -> b -> c) -> Either e a -> Either e b -> Either e c
liftA2_Either f (Left e) _ = Left e
liftA2_Either f (Right a) (Left e) = Left e
liftA2_Either f (Right a) (Right b) = Right (f a b)
-- The last two lines could be merged into:
-- liftA2_Either f (Right a) eb = fmap (f a) eb

(in FAM.hs)
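To make the failure propagation concrete, here are a few sample evaluations (mine, not in FAM.hs):

-- Not in FAM.hs: sample evaluations.
liftA2_Maybe (+) (Just 10) (Just 3)          -- = Just 13
liftA2_Maybe (+) (Just 10) Nothing           -- = Nothing
liftA2_Either (+) (Left "no x") (Right 3)    -- = Left "no x"
liftA2_Either (+) (Right 10) (Right 3)       -- = Right 13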
And what if you have a 3-ary function and three lists? Four?... Do you have to code them up separately? Or is there a generalization that covers them all?
Try this:
ap_List :: [a -> b] -> [a] -> [b]
-- Example:
-- ap_List [f,g] [1,2,3]
-- = [f 1, f 2, f 3, g 1, g 2, g 3]
-- = map f [1,2,3] ++ map g [1,2,3] ++ []
ap_List [] as = []
ap_List (f:fs) as = map f as ++ ap_List fs as

(in FAM.hs)
ap_List and liftA2_List are equivalent; if you have one, you can use it to implement the other:

ap_ListV2 fs as = liftA2_List (\f a -> f a) fs as

liftA2_ListV2 f as bs = ap_List (fmap f as) bs

(in FAM.hs)
One way to understand this is to work out some examples. Left as an exercise.
Another way though is to work out the intermediate types:
ap_List (fmap f as) bs
         a->(b->c) [a]
        |<-- [b -> c] -->|  [b]
        |<--------- [c] -------->|
The way you can use ap_List to obtain liftA2_List suggests how to obtain liftA3_List for a 3-ary function and three lists, too. And so on.

liftA3_List :: (a -> b -> c -> d) -> [a] -> [b] -> [c] -> [d]
liftA3_List f as bs cs = ap_List (ap_List (fmap f as) bs) cs

(in FAM.hs)
Again, exercise: work out some examples.
But here are the intermediate types:
ap_List (ap_List (fmap f as) bs) cs
                  a->(b->c->d) [a]
                 |<-- [b->(c->d)] -->|  [b]
                 |<------- [c->d] -------->|  [c]
                 |<---------- [d] -------------->|
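For instance, one concrete evaluation (my example, not in FAM.hs; the last list varies fastest):

-- Not in FAM.hs: one concrete evaluation of liftA3_List.
liftA3_List (\x y z -> x + y + z) [1,2] [10] [100,200]
-- = [1+10+100, 1+10+200, 2+10+100, 2+10+200]
-- = [111, 211, 112, 212]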
Analogous stories for Maybe and Either. (Unfortunately not BinTree.)

There is a class for this too!

class Functor f => Applicative f where
    pure :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b    -- you can call this "ap"
    liftA2 :: (a -> b -> c) -> f a -> f b -> f c
    -- And default implementations so you just have to provide pure
    -- and one of <*>, liftA2.
    -- And a couple of other methods with easy default implementations.
pure for [], Maybe, and Either e works as follows:

-- [] version
pure a = [a]

-- Maybe version
pure a = Just a

-- Either e version
pure a = Right a
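For comparison, here is roughly what the standard library's <*> amounts to for Maybe and Either e (a behavioural sketch in the same style; these instances already exist in the Prelude, so this is for reading, not for compiling):

-- Sketch only (the real instances are already in the standard library):
-- Maybe version
Nothing <*> _  = Nothing
Just f  <*> ma = fmap f ma

-- Either e version
Left e  <*> _  = Left e
Right f <*> ea = fmap f ea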
pure plays two roles. First, it is the degenerate case when you have a 0-ary function and 0 lists, kind of:

2-ary, liftA2 :: (t1 -> t2 -> a) -> f t1 -> f t2 -> f a
1-ary, fmap   :: (t1 -> a)       -> f t1          -> f a
0-ary, pure   ::  a                               -> f a

Second, it injects a plain value into f: a trivial program whose answer is that value (more on this when we switch to the effect view below).
fmap can be derived from pure and <*>:

fmap f xs = pure f <*> xs
The Applicative methods should satisfy the following axioms. The associativity axiom may be too abstract or terse, but I will show some corollaries below that hopefully help.
-- 0. Applicative subsumes Functor
fmap f xs = pure f <*> xs

-- 1. Applicative left-identity
pure id <*> xs = xs
-- Compare with fmap identity!
fmap id xs = xs

-- 2. Applicative associativity, composition
gs <*> (fs <*> xs) = ((pure (.) <*> gs) <*> fs) <*> xs
                   = ((fmap (.) gs) <*> fs) <*> xs
-- It may help to elaborate the types.  Assume:
--   xs :: f a
--   fs :: f (a -> b)
--   gs :: f (b -> c)
--   (.) :: (b -> c) -> (a -> b) -> (a -> c)
-- Try to determine the types of the subexpressions.

-- 3. pure fusion, pure is a homomorphism
pure f <*> pure x = pure (f x)
fmap f (pure x) = pure (f x)

-- 4. pure interchange, almost right-identity
fs <*> pure x = pure (\f -> f x) <*> fs
              = fmap (\f -> f x) fs
The first corollary is that the Applicative axioms imply the Functor axioms, given fmap f xs = pure f <*> xs. Functor identity is immediate from Applicative left-identity. To deduce fmap fusion:
  fmap g (fmap f xs)
    in Applicative terms
= pure g <*> (pure f <*> xs)
    associativity
= ((pure (.) <*> pure g) <*> pure f) <*> xs
    pure fusion
= (pure ((.) g) <*> pure f) <*> xs
    pure fusion
= pure ((.) g f) <*> xs
    infix notation
= pure (g . f) <*> xs
    in Functor terms
= fmap (g . f) xs
The second corollary is that liftA3 (\x y z -> g x (f y z)) xs ys zs can be done in the following two equivalent ways:
  (fmap (\x y z -> g x (f y z)) xs <*> ys) <*> zs
= fmap g xs <*> (fmap f ys <*> zs)
The first way you already know, and it associates to the left. The second way associates to the right. Intuitively, you have some kind of associativity law. In the case of lists, it says that the two cartesian products (R×S)×T and R×(S×T) are equivalent. The proof (6 steps of algebra grinding and 4 steps of lambda calculus perseverance) is omitted.
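A quick sanity check with lists, taking g = (+) and f = (*) (my own example, not from the notes): both groupings give the same answers.

-- Not in FAM.hs: both groupings evaluate to the same list.
(fmap (\x y z -> x + (y * z)) [1,2] <*> [10]) <*> [3]   -- = [31, 32]
fmap (+) [1,2] <*> (fmap (*) [10] <*> [3])              -- = [31, 32]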
Associative laws justify re-grouping. This is important for many refactorings and for implementing long chains by recursion.
I now want you to think of [], Maybe, Either e as types of effects or effectful programs, not always as types of data structures.
“Effect” here is not a well-defined word, but if you know the phrase “function with a side-effect”, that's the rough idea. I'll just say “effect”, though, because either it is a deliberate effect or it is a bug.
Haskell follows math in that, e.g., if f :: Int -> String then f 4 should be the same string every time, without any effect like failure, accessing state variables, performing I/O, etc.; likewise, if y :: String then it should be the same string every time and without effects. However, f' :: Int -> F String and y' :: F String are relieved of this obligation because of the extra F; in fact F stands for what effects it can and cannot have. (The new obligation of y' is performing the same F String job/effect every time, but not necessarily giving the same string.)
In this frame of mind, the Functor methods, the Applicative methods, and later the Monad methods are connectives—for combining basic effectful programs into composite, complex ones.
Maybe: one answer or failure

“foo :: Maybe Int” now means a program that may succeed and return an Int (conveyed by Just), or may fail (conveyed by Nothing).
(Suppose f :: Int -> String.) “fmap f foo :: Maybe String” now means a program that runs foo, then converts the answer (if any) using f.
“pure 4 :: Maybe Int” now means a program that succeeds and returns 4.
(Suppose bar :: Maybe Int.) “liftA2 (+) foo bar :: Maybe Int” now means a composite program that runs foo to try to obtain a number; if success, runs bar likewise; if success too, the overall answer is the sum.
Example: I have a recipMay function for reciprocals, but to better handle division-by-zero, I use Maybe to convey successes and failures. I then use it to write an addRecip function to add two reciprocals, again involving Maybe in anticipation of failure; my first attempt is low-level:

recipMay :: Double -> Maybe Double
recipMay a | a == 0 = Nothing
           | otherwise = Just (1 / a)    -- or: pure (1 / a)

addRecipV1 :: Double -> Double -> Maybe Double
addRecipV1 x y =
    case recipMay x of
      Nothing -> Nothing
      Just x_recip ->
          case recipMay y of
            Nothing -> Nothing
            Just y_recip -> Just (x_recip + y_recip)

(in FAM.hs)
My second attempt acknowledges the new way of thinking:
addRecipV2 :: Double -> Double -> Maybe Double
addRecipV2 x y = liftA2 (+) (recipMay x) (recipMay y)

(in FAM.hs)
Either e is similar, plus the extra type e (field type of Left) to carry the reason for failure.
[]: multiple answers, non-determinism

“foo :: [Int]” now means a non-deterministic program that spawns multiple universes for multiple answers. (Perhaps 0 universes—failure.)
(Suppose f :: Int -> String.) “fmap f foo :: [String]” now means a program that runs foo, then converts answers using f.
“pure 4 :: [Int]” now means a program that just has one universe, because there is one answer, 4.
(Suppose bar :: [Int].) “liftA2 (+) foo bar :: [Int]” now means a composite program that runs foo, runs bar, and sums every pair of answers. If foo has m answers and bar has n answers, the composite program has m×n answers.
Example: Every x>0 has two real square roots (non-determinism!). Every x<0 has no real square roots (failure!). I write a function to add the square roots of x and the square roots of y, using liftA2 to acknowledge the new way of thinking:

sqrts :: Double -> [Double]
sqrts a | a < 0 = []
        | a == 0 = [0]    -- or: pure 0
        | otherwise = [sqrt a, - sqrt a]

addSqrts :: Double -> Double -> [Double]
addSqrts x y = liftA2 (+) (sqrts x) (sqrts y)

(in FAM.hs)
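Sample evaluations (mine, not in FAM.hs):

-- Not in FAM.hs: sample evaluations.
addSqrts 4 9      -- = [2+3, 2-3, -2+3, -2-3] = [5.0, -1.0, 1.0, -5.0]
addSqrts 4 (-1)   -- = []  (no real square roots of -1, so the whole thing fails)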
Functor and Applicative methods help you compose smaller programs into bigger ones, but they have a limitation: In liftA2 (+) foo bar, bar's answer(s) or effect (e.g., how many answers) cannot depend on foo's answer(s).
What if you want to allow the second program to take a peek at an answer from the first program before deciding what to do? Like:
bind :: F a -> (a -> F b) -> F b
so “bind foo quaz” means a composite program that runs foo, then its answer(s), if any, are passed to quaz. You may think of quaz as a callback.
bind_Maybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bind_Maybe Nothing k = Nothing
bind_Maybe (Just a) k = k a

bind_Either :: Either e a -> (a -> Either e b) -> Either e b
bind_Either (Left e) _ = Left e
bind_Either (Right a) k = k a

bind_List :: [a] -> (a -> [b]) -> [b]
bind_List xs k = concat (map k xs)

(in FAM.hs)
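A couple of quick examples of how these behave (mine, not in FAM.hs):

-- Not in FAM.hs: sample evaluations.
bind_List [1,2,3] (\x -> [x, x*10])       -- = [1,10,2,20,3,30]
bind_Maybe (Just 4) (\x -> Just (x+1))    -- = Just 5
bind_Maybe Nothing  (\x -> Just (x+1))    -- = Nothing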
Example application: Compute all permutations of an input list.
-- All permutations of the input.
permsV1 :: Eq a => [a] -> [[a]]
permsV1 xs = permsAddV1 xs []

-- Helper: permsAdd xs ys = all permutations of xs prepended to ys
-- E.g.,
-- permsAdd [1,2] [6,4,7] = [2:1:[6,4,7], 1:2:[6,4,7]]
-- permsAdd [] [6,4,7] = [[6,4,7]]
permsAddV1 :: Eq a => [a] -> [a] -> [[a]]
permsAddV1 [] ys = [ys]
permsAddV1 xs ys = bind_List xs (\x -> permsAddV1 (delete x xs) (x : ys))
-- ("delete" is from Data.List.)
-- For each x in xs, I want to delete x from xs, add x to ys, and recurse.
-- OR:
-- Non-deterministically choose x from xs, ... ditto.

(in FAM.hs)
And there is a class for that too!
class Applicative f => Monad f where
    return :: a -> f a
    (>>=) :: f a -> (a -> f b) -> f b

    (>>) :: f a -> f b -> f b
    -- Default implementation:
    foo >> bar = foo >>= \_ -> bar
    -- Handy when you don't need foo's answer, only its effect.
“return” means the same as “pure”. It is here for historical reasons (because Applicative is more recent). Do not think in terms of control flow—“return” does not exit anything.
More explanation for >>= (you can say “bind”): Suppose F is a Monad instance, and

foo :: F X
quaz :: X -> F Y
foo >>= quaz :: F Y
Think of F as an abstract type, so that there is no direct way to ask “what's foo's answer(s)?”. But >>= is tailor-made for F and has access to those answers.
Think of quaz as a callback. >>= will call quaz with an answer. So you don't ask “how to extract the answer(s) from foo?”. You provide a callback for receiving and processing an answer. They call your callback whenever an answer is ready.
“Don't ask us. We'll call you.”
The permutation example using Monad methods:
permsV2 :: Eq a => [a] -> [[a]]
permsV2 xs = permsAddV2 xs []

permsAddV2 :: Eq a => [a] -> [a] -> [[a]]
permsAddV2 [] ys = return ys
permsAddV2 xs ys = xs >>= \x -> permsAddV2 (delete x xs) (x : ys)

(in FAM.hs)
The Monad methods should satisfy these axioms:
-- 1L. Monad left identity
return x >>= k  =  k x

-- 1R. Monad right identity
foo >>= \x -> return x  =  foo
foo >>= return          =  foo

-- 2. >>= associativity
(foo >>= \x -> k1 x) >>= k2  =  foo >>= (\x -> k1 x >>= k2)
(foo >>= k1        ) >>= k2  =  foo >>= (\x -> k1 x >>= k2)
-- (x is a fresh variable, i.e., no name clash)
Associative laws justify re-grouping. This is important for many refactorings and for implementing long chains by recursion.
Monad subsumes Applicative and Functor: From return and >>= we can get the methods of Applicative and Functor:

fmap f xs = xs >>= (\x -> return (f x))
liftA2 op xs ys = xs >>= (\x -> ys >>= (\y -> return (op x y)))
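Similarly, <*> can be recovered from the Monad methods (this is essentially the standard library's Control.Monad.ap; shown here as a sketch, not code from FAM.hs):

-- Sketch (essentially Control.Monad.ap):
fs <*> xs = fs >>= (\f -> xs >>= (\x -> return (f x)))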
Monad is very important in theory, in both advanced mathematics and programming language semantics. To make a math model for a programming language, a Monad instance is carefully designed to represent the effects (e.g., if the language has exceptions, the Monad instance involves Either; if the language has state variables, we will see how to do it below), and >>= represents sequential composition. We will do this later in the course when we make toy interpreters for toy languages.
On the practical side:
Inspired by the theoretical use for semantics, mini special-purpose languages (“embedded domain-specific languages”) are made right within Haskell by designing custom-made types to represent the desired effects or features, and implementing their Monad instances. Users then write programs by combining primitives using Monad methods (or Functor, Applicative methods).
We will use this style for parsing later in the course.
Taking that one step further with polymorphism, we will do dependency injection, template method, mock tests below.
Other languages have learned from the Haskell Monad for structuring data queries (C# LINQ), concurrency (promises in JavaScript and others), and generally any programming style based on chaining callbacks, which is exactly what >>= is about.
Or: how to fake state in functional programming / math.
Fairy tale:
I have a type “State s a”. If foo :: State Int String, for example, it means foo gives a string answer, and it has access to an int state variable. So “State Int” is a type that represents stateful effects with an Int-type state.

A few basic operations are provided below. Furthermore, “State s” is an instance of Functor, Applicative, and Monad, so you have connectives to chain up basic operations too.
-- "get" reads and returns the current value of the state variable. get :: State s s -- Code shown later. -- "put s1" sets the state variable to s1. It returns the 0-tuple because there -- is nothing to return. put :: s -> State s () -- Code shown later. -- Bridge from stateful fantasy to mathematical reality! "functionize prog s0" -- runs prog starting with initial state value s0 and gives you the final -- answer. Or, turns prog into a math pure function. functionize :: State s a -> s -> a -- Code shown later.
Example program I can build, and using functionize to regain a function to get results:

-- Loop through a list of numbers, add the numbers to the state variable.
-- When done, return the latest value of the state variable.
sumProg :: [Integer] -> State Integer Integer
sumProg [] = get
sumProg (x:xs) =
    get >>= \s ->
    put (s+x) >>
    sumProg xs

statefulSum :: [Integer] -> Integer
statefulSum xs = functionize (sumProg xs) 0

(in FAM.hs)
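For instance (my example, not in FAM.hs), the state variable starts at 0, then goes 1, 3, 6, and the final get returns 6:

-- Not in FAM.hs: sample evaluation.
statefulSum [1,2,3]   -- = 6   (state: 0 -> 1 -> 3 -> 6)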
How to make the fairy tale come true:
The trick is to have a state transition function instead, like s→s, and have some starter function (functionize) that feeds it the initial value. Except I also want it to give an answer. So actually s→(s,a): a function from old-state to a pair of new-state and answer.
data State s a = MkState (s -> (s, a))

-- Unwrap MkState.
deState :: State s a -> s -> (s, a)
deState (MkState stf) = stf

functionize :: State s a -> s -> a
functionize prog s0 = snd (deState prog s0)

get :: State s s
get = MkState (\s0 -> (s0, s0))
      -- old state = s0, new state = old state = s0, answer s0 too.

put :: s -> State s ()
put s = MkState (\s0 -> (s , ()))
        -- ignore old state, new state = s, answer the 0-tuple ().

instance Functor (State s) where
    -- fmap :: (a -> b) -> State s a -> State s b
    fmap f (MkState stf) = MkState
        (\s0 ->
             -- Goal: Like stf but use f to convert a to b.
             -- old state = s0, give to stf for new state s1 and answer a
             case stf s0 of
               (s1, a) ->
                   -- overall new state is also s1, but change answer to f a
                   (s1, f a))

testStateFunctor = deState (fmap length program) 10
  where
    program :: State Integer String
    program = MkState (\s0 -> (s0+2, "hello"))
-- should give (12, 5)

instance Applicative (State s) where
    -- pure :: a -> State s a
    -- Goal: Give the answer a and try not to have an effect.
    -- "effect" for State means state change.
    pure a = MkState (\s0 -> (s0, a))
    -- so new state = old state

    -- liftA2 :: (a -> b -> c) -> State s a -> State s b -> State s c
    --
    -- State transition goal:
    -- overall old state
    --   --1st-program--> intermediate state
    --   --2nd-program--> overall new state
    --
    -- (Why not the other order?  Actually would be legitimate, but we usually
    -- desire liftA2's order to be consistent with >>='s order.)
    liftA2 op (MkState stf1) (MkState stf2) = MkState
        (\s0 ->
             -- overall old state = s0, give to stf1
             case stf1 s0 of
               { (s1, a) ->
                     -- intermediate state = s1, give to stf2
                     case stf2 s1 of
                       { (s2, b) ->
                             -- overall new state = s2
                             -- overall answer = op a b
                             (s2, op a b)
                       }
               })

testStateApplicative = deState (liftA2 (:) prog1 prog2) 10
  where
    prog1 :: State Integer Char
    prog1 = MkState (\s0 -> (s0+2, 'h'))
    prog2 :: State Integer String
    prog2 = MkState (\s0 -> (s0*2, "ello"))
-- should give (24, "hello").  24 = (10+2)*2.

instance Monad (State s) where
    return = pure

    -- (>>=) :: State s a -> (a -> State s b) -> State s b
    -- Goal:
    -- 1. overall old state --1st-program--> (intermediate state, a)
    -- 2. give a and intermediate state to 2nd program.
    MkState stf1 >>= k = MkState
        (\s0 ->
             -- overall old state = s0, give to stf1
             case stf1 s0 of
               { (s1, a) ->
                     -- k is waiting for the answer a
                     -- and also the intermediate state s1
                     -- technicality: "(k a) s1" is conceptually right but nominally a
                     -- type error because (k a) :: State s b, not s -> (s, b)
                     -- Ah but deState can unwrap!  (Or use pattern matching.)
                     deState (k a) s1
               })

(in FAM.hs)
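Once get and put are there, you can build small conveniences on top of them. Here is one sketch (my addition, not in FAM.hs; the standard mtl library offers a similar modify):

-- Not in FAM.hs: apply a function to the state variable.
modify :: (s -> s) -> State s ()
modify f = get >>= \s -> put (f s)

-- With it, sumProg's recursive case could be written as:
--   sumProg (x:xs) = modify (+ x) >> sumProg xs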
The State Monad per se may not be very useful because you could easily replace “state” by a parameter. But if you understand how it's done, you can combine this technique with others to make something useful, something that would be hairy otherwise. E.g., to obtain both state and failure, it's one of:
data F1 s a = MkF1 (s -> Maybe (s, a))
data F2 s a = MkF2 (s -> (s, Maybe a))
(They do have semantic difference! Exercise: What's the difference? When do you use which?)
File format checker for my toy file format: The first three characters should be A, L, newline. Version 1, to be superseded.
toyCheckV1 :: IO Bool
toyCheckV1 =
    getChar >>= \c1 ->
    getChar >>= \c2 ->
    getChar >>= \c3 ->
    return ([c1, c2, c3] == "AL\n")

(in FAM.hs)
How do I test this? And/or how do I restrict it to getChar and nothing funny behind my back? Answer: Dependency injection (or is it Template Method?). There are several ways to do this in Haskell. Here's one: Define our own type class for the relevant, permitted operations:
class Monad f => MonadToyCheck f where
    toyGetChar :: f Char

-- Simplifying assumptions: Enough characters, no failure.  A practical version
-- should add methods for raising and catching EOF exceptions.

(in FAM.hs)
The checker logic should be polymorphic in that type class:
toyCheckV2 :: MonadToyCheck f => f Bool
toyCheckV2 =
    toyGetChar >>= \c1 ->
    toyGetChar >>= \c2 ->
    toyGetChar >>= \c3 ->
    return ([c1, c2, c3] == "AL\n")

(in FAM.hs)
Only things toyCheckV2 can do: toyGetChar, monad methods, purely functional programming. Because the user chooses f. And toyCheckV2 doesn't even know what it is. All it knows is it can call toyGetChar.
Now we can instantiate in two different ways, one way for production code, another way for mock testing.
For production code:
instance MonadToyCheck IO where
    toyGetChar = getChar

realProgram :: IO Bool
realProgram = toyCheckV2

(in FAM.hs)
Fake news for purely functional mock testing:
data Feeder a = MkFeeder (String -> (String, a))
-- Again, simplifying assumptions etc.  But basically like the state monad, with
-- the state being what's not yet consumed in the string.

-- Unwrap MkFeeder.
unFeeder :: Feeder a -> String -> (String, a)
unFeeder (MkFeeder sf) = sf

instance Monad Feeder where
    return a = MkFeeder (\s -> (s, a))
    prog1 >>= k = MkFeeder (\s0 -> case unFeeder prog1 s0 of
                                     (s1, a) -> unFeeder (k a) s1)

instance MonadToyCheck Feeder where
    -- toyGetChar :: Feeder Char
    toyGetChar = MkFeeder (\(c:cs) -> (cs, c))

instance Functor Feeder where
    fmap f p = p >>= \a -> return (f a)

instance Applicative Feeder where
    pure a = MkFeeder (\s -> (s, a))
    pf <*> pa = pf >>= \f -> pa >>= \a -> return (f a)

testToyChecker2 :: String -> Bool
testToyChecker2 str = snd (unFeeder toyCheckV2 str)

toyTest1 = testToyChecker2 "ALhello"    -- should be False
toyTest2 = testToyChecker2 "AL\nhello"  -- should be True

(in FAM.hs)