## Pipes to Conduits part 3: Abort

Last time, we enhanced the await primitive, making it aware of when the upstream pipe returned a value. However, the change forced us to modify our style of programming. This is not necessarily a bad thing, but today, we’ll recover the old capabilities we had by adding a new primitive: abort. This will restore the ability for upstream pipes to shut down the pipeline.

> {-# LANGUAGE TypeOperators #-}
> {-# OPTIONS_GHC -Wall #-}
>
> module PipeAbort where
>
> import Control.Monad.Trans.Free (FreeT(..), FreeF(..), liftF, wrap)
> import Fun ((:&:)(..), (:|:)(..))
>
> import Data.Void (Void)
> import Control.Monad (forever, when)
> import Control.Monad.Trans.Class (lift)


## Functors

We finally revisit our fourth old friend, the Empty functor, and give it the name Abort. Recall that the Empty functor allows us to short-circuit computation without providing any other information.

> newtype Then next = Then next            -- Identity
> newtype Yield o next = Yield o           -- Const
> newtype Await i next = Await (i -> next) -- Fun
> data Abort next = Abort                  -- Empty

> instance Functor Then where
>   fmap f (Then next) = Then (f next)
>
> instance Functor (Yield o) where
>   fmap _f (Yield o) = Yield o
>
> instance Functor (Await i) where
>   fmap f (Await g) = Await (f . g)
>
> instance Functor Abort where
>   fmap _f Abort = Abort


## The Pipe type

> type YieldThen o = Yield o :&: Then
> type AwaitU i u = Await i :&: Await u


With our shiny new Abort functor in hand, we just union it in with the other options in a PipeF.

> type PipeF i o u = YieldThen o :|: AwaitU i u :|: Abort
> type Pipe i o u  = FreeT (PipeF i o u)
>
> type Producer o   = Pipe () o    ()
> type Consumer i u = Pipe i  Void u
> type Pipeline     = Pipe () Void ()


## Working with PipeF

I’ve defined :|: to be left-associative, which means that we can simply union another thing onto the right side, and wrap everything we used to have in a big L. This change is reflected in the lifting functions.

> liftYield :: YieldThen o next ->        PipeF i o u next
> liftYield = L . L
>
> liftAwait :: AwaitU i u next ->         PipeF i o u next
> liftAwait = L . R
>
> liftAbort :: Abort next ->              PipeF i o u next
> liftAbort = R
>
> yieldF :: o -> next ->                  PipeF i o u next
> yieldF o next = liftYield $ Yield o :&: Then next
>
> awaitF :: (i -> next) -> (u -> next) -> PipeF i o u next
> awaitF f g = liftAwait $ Await f :&: Await g
>
> abortF :: PipeF i o u next
> abortF = liftAbort Abort


I’ve added a smart constructor for Abort, which is entirely straightforward. We’ll also need to add another branch to our pipeCase construct: pipeCase must be supplied with a default value of type a for the Abort case, because Abort provides absolutely no information.

> pipeCase :: FreeF (PipeF i o u) r next
>          ->                                a  -- Abort
>          -> (r                          -> a) -- Return
>          -> (o -> next                  -> a) -- Yield
>          -> ((i -> next) -> (u -> next) -> a) -- Await
>                                         -> a
> pipeCase (Wrap (R Abort))
>   k _ _ _ = k
> pipeCase (Return r)
>   _ k _ _ = k r
> pipeCase (Wrap (L (L (Yield o :&: Then next))))
>   _ _ k _ = k o next
> pipeCase (Wrap (L (R (Await f :&: Await g))))
>   _ _ _ k = k f g


## Pipe primitives

> awaitE :: Monad m => Pipe i o u m (Either u i)
> awaitE  = liftF $ awaitF Right Left
>
> yield :: Monad m => o -> Pipe i o u m ()
> yield b = liftF $ yieldF b ()
>
> abort :: Monad m => Pipe i o u m r
> abort = liftF abortF


Our primitives remain unchanged. We add the abort primitive; notice that it is polymorphic in its return type. In fact, it’s polymorphic in, well, everything. Its complete lack of information means that it can be used to fill any hole that has the shape of a Pipe.
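To see in miniature why a constructor that stores no continuation gives a fully polymorphic, short-circuiting action, here is a standalone sketch. It uses a plain Free monad rather than the FreeT machinery above, and every name in it (Free, AbortF, bind, runSketch) is local to the sketch, not part of the module:

```haskell
-- A Free monad over a functor with no continuation slot.
-- AbortF mirrors the post's Abort; nothing here is imported from the article.
data Free f r = Pure r | Roll (f (Free f r))

data AbortF next = AbortF  -- stores no 'next', just like Abort

instance Functor AbortF where
  fmap _ AbortF = AbortF

-- Monadic bind for Free: Roll nodes push the continuation inward,
-- but AbortF has nowhere to put it, so binding past an abort is a no-op.
bind :: Functor f => Free f a -> (a -> Free f b) -> Free f b
bind (Pure a) k = k a
bind (Roll f) k = Roll (fmap (`bind` k) f)

-- Polymorphic in r: no value of type r is ever produced.
abortSketch :: Free AbortF r
abortSketch = Roll AbortF

-- A runner in the spirit of the runPipe below: aborts become Nothing.
runSketch :: Free AbortF r -> Maybe r
runSketch (Pure r)      = Just r
runSketch (Roll AbortF) = Nothing
```

Because AbortF holds nothing of type next, fmap (and hence >>=) cannot touch it, which is exactly why abort can fill any Pipe-shaped hole.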

## Pipe composition

The type of pipe composition does not change with this modification.

> (<+<) :: Monad m => Pipe i' o u' m r -> Pipe i i' u m u' -> Pipe i o u m r
> p1 <+< p2 = FreeT $ do
>   x1 <- runFreeT p1
>   let p1' = FreeT $ return x1
>   runFreeT $ pipeCase x1


Everywhere we used pipeCase, we’ll need to add the extra branch for the Abort case. If the downstream pipe aborted, then everything upstream is discarded, just as it is when downstream returns a value.

>   {- Abort  -} (abort)                  -- upstream discarded
>   {- Return -} (\r      -> return r)    -- upstream discarded
>   {- Yield  -} (\o next -> wrap $ yieldF o (next <+< p2))
>   {- Await  -} (\f1 g1  -> FreeT $ do
>     x2 <- runFreeT p2
>     runFreeT $ pipeCase x2


If the upstream pipe aborted, then downstream is forcibly aborted as well, meaning that the downstream pipe is discarded.

>     {- Abort  -} (abort)             -- downstream discarded


When the upstream pipe produces a result, we’ll give that result to the appropriate downstream handler. We used to then regurgitate the same result over and over to the downstream pipe every time it asked.

{- Return -} (\u' -> g1 u' <+< return u')

We’re going to change that behavior now. Instead, we will cause an abort if downstream ever awaits after receiving the upstream’s final result.

>     {- Return -} (\u'     -> g1 u' <+< abort) -- downstream gets one last shot


The rest remains as before.

>     {- Yield  -} (\o next -> f1 o  <+< next)
>     {- Await  -} (\f2 g2  -> wrap $ awaitF (\i -> p1' <+< f2 i)
>                                            (\u -> p1' <+< g2 u)))


If idP is like multiplying by 1, then abort is like multiplying by 0. Sort of. As always, downstream drives, so if the upstream pipe is abort, but the downstream never consults upstream, then downstream can continue on its merry way for as long as it wants.

$\displaystyle \forall p \in Pipe, abort \circ p \equiv abort$

$\displaystyle \forall p \in Producer, p \circ abort \equiv p$

Note that our current Producer type is not strong enough to actually guarantee this: it only restricts the input type to (), rather than preventing awaits altogether.

> (>+>) :: Monad m => Pipe i i' u m u' -> Pipe i' o u' m r -> Pipe i o u m r
> (>+>) = flip (<+<)

> infixr 9 <+<
> infixr 9 >+>


## Running a pipeline

Now that a pipeline might abort at any time without a result, we need to adjust runPipe to take this possibility of failure into account. Instead of producing an m r, we’ll produce an m (Maybe r). If the pipeline is aborted, Nothing is produced as the result.

> runPipe :: Monad m => Pipeline m r -> m (Maybe r)
> runPipe p = do
>   e <- runFreeT p
>   pipeCase e
>   {- Abort  -} (return Nothing)
>   {- Return -} (\r       -> return $ Just r)
>   {- Yield  -} (\_o next -> runPipe next)
>   {- Await  -} (\f _g    -> runPipe $ f ())


## Some basic pipes

> fromList :: Monad m => [o] -> Producer o m ()
> fromList = mapM_ yield


We can still write the same pipes as before. awaitForever never asks for input after it gets the upstream result, so it will never be the source of an abort.

> awaitForever :: Monad m => (i -> Pipe i o u m r) -> Pipe i o u m u
> awaitForever f = go where
>   go = awaitE >>= \ex -> case ex of
>     Left  u -> return u
>     Right i -> f i >> go
>
> pipe :: Monad m => (i -> o) -> Pipe i o u m u
> pipe f = awaitForever $ yield . f
>
> idP :: Monad m => Pipe i i u m u
> idP = pipe id
>
> filterP :: Monad m => (i -> Bool) -> Pipe i i u m u
> filterP test = awaitForever $ \x -> when (test x) (yield x)
>
> printer :: Show i => Consumer i u IO u
> printer = awaitForever $ lift . print

> runP :: Monad m => Consumer i u m (u, [i])
> runP = awaitE >>= \ex -> case ex of
>   Left  u -> return (u, [])
>   Right i -> runP >>= \ ~(u, is) -> return (u, i:is)
>
> evalP :: Monad m => Consumer i u m u
> evalP = fst `fmap` runP
>
> execP :: Monad m => Consumer i u m [i]
> execP = snd `fmap` runP
>
> fold :: Monad m => (r -> i -> r) -> r -> Consumer i u m r
> fold f = go where
>   go r = awaitE >>= \ex -> case ex of
>     Left _u -> return r
>     Right i -> go $! f r i


## Bringing back the good(?) stuff

Now that we are equipped with both the abort and awaitE primitives, we can reproduce the good ol’ await that we had from before:

> await :: Monad m => Pipe i o u m i
> await = awaitE >>= \ex -> case ex of
>   Left _u -> abort
>   Right i -> return i


That means that we can resurrect the old style of pipe programming right alongside the new style:

> oldPipe :: Monad m => (i -> o) -> Pipe i o u m r
> oldPipe f = forever $ await >>= yield . f
>
> oldIdP :: Monad m => Pipe i i u m r
> oldIdP = oldPipe id
>
> oldFilterP :: Monad m => (i -> Bool) -> Pipe i i u m r
> oldFilterP test = forever $ await >>= \x -> when (test x) (yield x)
>
> oldPrinter :: Show i => Consumer i u IO r
> oldPrinter = forever $ await >>= lift . print


This code is identical to the code we had from part 1. Neat, huh? Notice how these versions of id, filter, etc. do not bear the restriction that $u = r$. However, they don’t behave exactly the same as before, because abort causes the pipeline to fail without any result.

ghci> runPipe $ (printer >> return "not hijacked") <+< return "hijacked"
Just "not hijacked"
ghci> runPipe $ (oldPrinter >> return "not hijacked") <+< return "hijacked"
Nothing


## Next time

We’ve granted upstream pipes the power to abort downstream pipes that await on them, but is this too much power? What if downstream doesn’t want to go down? Next time, we’ll up the granularity of control once more by allowing downstream pipes to provide a handler for the case of an aborted upstream. Once we have that in place, we can start thinking about guaranteed finalizers.


## Pipes to Conduits part 2: Upstream Results

Last time, we reimplemented Control.Pipe, with basic await and yield functionality. However, in order to compose two pipes, their result types had to be the same, and whenever any pipe in a pipeline reached its return, it would bring down all the other pipes composed with it.

This time, we’ll modify the await primitive, forcing the user to deal with the possibility that the upstream pipe has completed and returned a value.

> {-# LANGUAGE TypeOperators #-}
> {-# OPTIONS_GHC -Wall #-}
>
> module PipeU where
>
> import Control.Monad.Trans.Free (FreeT(..), FreeF(..), liftF, wrap)
> import Fun ((:&:)(..), (:|:)(..))
>
> import Data.Void (Void)
> import Control.Monad (when)
> import Control.Monad.Trans.Class (lift)


## Functors

We’ll use all the same functors as before. You can compare this code with the code from last time to see exactly which changes have taken place. We’ll add one more convention, which is to use the type variable u to describe the return type of an upstream pipe.

> newtype Then next = Then next            -- Identity
> newtype Yield o next = Yield o           -- Const
> newtype Await i next = Await (i -> next) -- Fun

> instance Functor Then where
>   fmap f (Then next) = Then (f next)
>
> instance Functor (Yield o) where
>   fmap _f (Yield o) = Yield o
>
> instance Functor (Await i) where
>   fmap f (Await g) = Await (f . g)


## The Pipe type

We’re going to modify Await so that it also considers the possibility of a completed upstream pipe. That means that anytime you await, you could get an i or a u, and you need to be prepared to handle both situations.

> type YieldThen o = Yield o :&: Then
> type AwaitU i u = Await i :&: Await u


Because AwaitU demands a new type variable u, PipeF and Pipe must now carry it as well.

> type PipeF i o u = YieldThen o :|: AwaitU i u
> type Pipe i o u  = FreeT (PipeF i o u)


We’ll add u to the Consumer type, because consumers must now be aware of the upstream result of the pipe they are composed with. We’ll use the trivial upstream result () for Producers and Pipelines, since they will never get one anyways.

> type Producer o   = Pipe () o    ()
> type Consumer i u = Pipe i  Void u
> type Pipeline     = Pipe () Void ()


Remember: Consumers are always as far downstream as possible, while Producers are always as far upstream as possible. A Pipeline is neither up nor down, since it is self-contained and therefore cannot be sensibly composed with any other pipes, except trivial ones such as idP.

## Working with PipeF

Our "lifting" helpers remain the same, except we must add the type variable u everywhere we have an AwaitU, PipeF, or Pipe. Notice how awaitF now has two inputs: the function to deal with a regular yielded value f :: i -> next, and the function to deal with a returned result g :: u -> next.

> liftYield :: YieldThen o next ->        PipeF i o u next
> liftYield = L
>
> liftAwait :: AwaitU i u next ->         PipeF i o u next
> liftAwait = R
>
> yieldF :: o -> next ->                  PipeF i o u next
> yieldF o next = liftYield $ Yield o :&: Then next
>
> awaitF :: (i -> next) -> (u -> next) -> PipeF i o u next
> awaitF f g = liftAwait $ Await f :&: Await g


We also update pipeCase to reflect the new function that is bundled with an AwaitU.

> pipeCase :: FreeF (PipeF i o u) r next
>          -> (r                          -> a) -- Return
>          -> (o -> next                  -> a) -- Yield
>          -> ((i -> next) -> (u -> next) -> a) -- Await
>                                         -> a
> pipeCase (Return r) k _ _ = k r
> pipeCase (Wrap (L (Yield o :&: Then next)))
>                     _ k _ = k o next
> pipeCase (Wrap (R (Await f :&: Await g)))
>                     _ _ k = k f g


Now stop; let’s have a little chat about awaitF. We expect the user to somehow indirectly supply a function g :: u -> Pipe i o u m r whenever they await, to handle the possibility that the upstream pipe has completed. But if that’s the case, wouldn’t it make more sense to shut off the input and upstream ends of the pipe afterwards? g :: u -> Pipe () o () m r, or using a synonym, g :: u -> Producer o m r. Well perhaps it would, but that would mean that the type of g does not fit into the pattern u -> next, which means we lose some amount of convenience whenever we deal with the await primitive.

For this blog series, I have chosen to proceed with not shutting off the input ends, to stay closer to Conduit behavior. The reason for this is simply convenience. If we didn’t do it this way, we wouldn’t be able to use x <- await monadic sugar any more. Not even Control.Frame forcibly closes the input end for you: you are expected to manually close the input end yourself after receiving the upstream termination signal or else experience automatic pipeline shutdown if you await again; this is presumably for the same sugar/convenience reasons.

In part 3 of this series, we will add the ability to behave like Frame in this regard: automatic shutdown after receiving the termination signal. Then in part 4 we will also provide the downstream pipe the opportunity to continue even after an upstream shutdown. Spoilers! For now, just forget I said all that. ;)

## Pipe primitives

We can no longer write await, because we have to deal with the possibility of a returned result. So we’ll have to settle for awaitE, which provides an Either u i: either a u or an i.

> awaitE :: Monad m => Pipe i o u m (Either u i)
> awaitE  = liftF $ awaitF Right Left


Notice, where we used to have id, we now have Right and Left. We need a next of type Either u i, so the first argument to awaitF must be i -> Either u i, and the second must be u -> Either u i. The types make the choice obvious.

Yield remains the same:

> yield :: Monad m => o -> Pipe i o u m ()
> yield b = liftF $ yieldF b ()
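The shape of the awaitE protocol can be exercised with plain lists standing in for pipes. This is just an illustrative model, not part of the module: consumeE is a hypothetical consumer that sums Right values until it sees the upstream result on the Left.

```haskell
-- Model of the awaitE protocol: each "await" produces an Either u i,
-- and the consumer must be prepared for both constructors.
consumeE :: [Either String Int] -> (String, Int)
consumeE = go 0
  where
    go acc (Right i : rest) = go (acc + i) rest  -- a regular upstream value
    go acc (Left u  : _)    = (u, acc)           -- upstream finished with result u
    go acc []               = ("no result", acc) -- lists can end; real pipes block
```

Note how the type forces the Left case to be handled: there is no way to write consumeE without deciding what to do when upstream finishes.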


## Pipe composition

Time to tweak the way pipes are composed. We no longer want the upstream pipe to hijack the downstream one when it has a result. Let’s dive in and see how it pans out.

> (<+<) :: Monad m => Pipe i' o u' m r -> Pipe i i' u m u' -> Pipe i o u m r


Notice how composition not only moves input/output pairs

(i => i') >+> (i' => o) = (i => o)

but also result types

(u => u') >+> (u' => r) = (u => r)

What does that mean? Well, for one thing, we can no longer simply rearrange the type variables and write a Category instance, unless we constrain u, u', and r to all be the same. Even then, will it still follow category laws? Or if we don’t constrain the upstream/result types, will it follow category-esque laws? (What does that even mean?) More on this later.

> p1 <+< p2 = FreeT $ do
>   x1 <- runFreeT p1
>   let p1' = FreeT $ return x1
>   runFreeT $ pipeCase x1
>   {- Return -} (\r      -> return r)
>   {- Yield  -} (\o next -> wrap $ yieldF o (next <+< p2))


Up until this point, everything is the same.

>   {- Await  -} (\f1 g1  -> FreeT $ do
>     x2 <- runFreeT p2
>     runFreeT $ pipeCase x2


The await case now has two functions at its disposal; the new one, g1, handles the possibility that the upstream pipe has returned a value.

Now, here was my first impulse for handling the upstream return:

{- Return -} (\u' -> g1 u')

Simple, right? If you have an upstream result, then guess what, we have a function that is waiting for that input. But here’s the problem. g1 came from p1 :: Pipe i' o u' m r. That means that it has the type u' -> Pipe i' o u' m r, and therefore, when we apply a u', we get a Pipe i' o u' m r. Well that’s a problem, see, because our result type is supposed to be Pipe i o u m r: that’s i and u not i' and u'. So what do we do? Well, we could compose it with an exploding bomb to get the correct type:

{- Return -} (\u' -> g1 u' <+< error "kaboom!")

That’s not very nice. We could write in the docs that you should never await after you’ve gotten an upstream result, but using error like this is just gross.

How about something more sensible: just compose it with a pipe that will return the same upstream result all over again, in case you forgot what it was.

>     {- Return -} (\u'     -> g1 u' <+< return u')


Now we can safely say in the docs that once you get an upstream result, you will just keep getting that same result every time you await. That seems a lot less evil, though still a bit odd.
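The "same result every time" behavior can be modeled with plain lists. This is a standalone sketch, and regurgitate is a hypothetical helper rather than part of the module: once the upstream result arrives, every subsequent await sees the same Left again.

```haskell
-- List model of the regurgitation semantics just described:
-- finitely many yielded values, followed by the upstream
-- result repeated forever.
regurgitate :: u -> [i] -> [Either u i]
regurgitate u is = map Right is ++ repeat (Left u)
```

The infinite tail of Left u mirrors composing with `return u'` over and over: the result is never forgotten, just replayed.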

>     {- Yield  -} (\o next -> f1 o <+< next)
>     {- Await  -} (\f2 g2  -> wrap $ awaitF (\i -> p1' <+< f2 i)
>                                            (\u -> p1' <+< g2 u)))


Yield looks the same, and to handle an upstream await, we just extend what we had before, treating the new g2 the same way as f2.

Well there, we made it through again! Although once again, we get sort of a sour taste from the implementation we were forced to write. If only we had a way to deal even more explicitly with pipe termination… hrm… I’m feeling another blog post coming on…

> (>+>) :: Monad m => Pipe i i' u m u' -> Pipe i' o u' m r -> Pipe i o u m r
> (>+>) = flip (<+<)

> infixr 9 <+<
> infixr 9 >+>


## Running a pipeline

The extra function g has no significance when "running" a pipeline, so we will just ignore it and retain essentially the same runPipe as before:

> runPipe :: Monad m => Pipeline m r -> m r
> runPipe p = do
>   e <- runFreeT p
>   pipeCase e
>   {- Return -} (\r       -> return r)
>   {- Yield  -} (\_o next -> runPipe next)
>   {- Await  -} (\f _g    -> runPipe $ f ())


## Some basic pipes

> fromList :: Monad m => [o] -> Producer o m ()
> fromList = mapM_ yield


Since we no longer have the same await, we’ll have to rethink the way that we write pipe code. We often used the idiom forever $ await >>= foo, and it turns out that we can still simulate something like that:

> awaitForever :: Monad m => (i -> Pipe i o u m r) -> Pipe i o u m u
> awaitForever f = go where
>   go = awaitE >>= \ex -> case ex of
>     Left  u -> return u
>     Right i -> f i >> go


Conduit users may recognize awaitForever: it loops and loops until the upstream pipe returns a result, and then it just passes that upstream result right along; therefore it has the same result type as upstream. We can write many of the functions we had before, but they will now bear the restriction of returning the upstream result type.

> pipe :: Monad m => (i -> o) -> Pipe i o u m u
> pipe f = awaitForever $ yield . f
>
> idP :: Monad m => Pipe i i u m u
> idP = pipe id
>
> filterP :: Monad m => (i -> Bool) -> Pipe i i u m u
> filterP test = awaitForever $ \x -> when (test x) (yield x)
>
> printer :: Show i => Consumer i u IO u
> printer = awaitForever $ lift . print


Due to this limitation, perhaps we want to provide a different result for a particular pipe.

> mapResult :: Monad m => (r -> r') -> Pipe i o u m r -> Pipe i o u m r'
> mapResult f p = do
>   r <- p
>   return (f r)
>
> overwriteResult :: Monad m => r' -> Pipe i o u m r -> Pipe i o u m r'
> overwriteResult r p = p >> return r


My mental hlint is going BEEP BEEP BEEP right now, because mapResult is just fmap, and overwriteResult is just fmap . const.
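Indeed, both definitions work for any Functor, not just Pipe. A quick standalone check with Maybe standing in for Pipe (the primed names are local to this sketch, to avoid clashing with the definitions above):

```haskell
-- mapResult is fmap; overwriteResult is fmap . const.
mapResult' :: Functor f => (r -> r') -> f r -> f r'
mapResult' = fmap

overwriteResult' :: Functor f => r' -> f r -> f r'
overwriteResult' = fmap . const
```

Since FreeT f m (and hence Pipe i o u m) is a Functor whenever m is a Monad, these generic versions apply directly to pipes.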

## Newfound power

Now that our pipe composition works better with result types, we can write combinators that have nontrivial result types!

> runP :: Monad m => Consumer i u m (u, [i])
> runP = awaitE >>= \ex -> case ex of
>   Left  u -> return (u, [])
>   Right i -> runP >>= \ ~(u, is) -> return (u, i:is)
>
> evalP :: Monad m => Consumer i u m u
> evalP = fst `fmap` runP
>
> execP :: Monad m => Consumer i u m [i]
> execP = snd `fmap` runP
>
> fold :: Monad m => (r -> i -> r) -> r -> Consumer i u m r
> fold f = go where
>   go r = awaitE >>= \ex -> case ex of
>     Left _u -> return r
>     Right i -> go $! f r i


Play around in ghci and see for yourself:

ghci> runPipe $ runP <+< (overwriteResult "foo" $ fromList [1 .. 10])
("foo",[1,2,3,4,5,6,7,8,9,10])
ghci> runPipe $ fold (+) 0 <+< fromList [10, 20, 100]
130

ghci> runPipe $ (printer >> return "not hijacked") <+< return "hijacked"
"not hijacked"


## But… is it a Category?

Consider what happens if we restrict the upstream and result types to be the same type.

newtype PipeC m u i o = PipeC (Pipe i o u m u)

instance Category (PipeC m u) where
  id :: PipeC m u i i
  -- id :: Pipe i i u m u
  id = PipeC idP

  (.) :: PipeC m u i' o -> PipeC m u i i' -> PipeC m u i o
  -- (.) :: Pipe i' o u m u -> Pipe i i' u m u -> Pipe i o u m u
  (PipeC p1) . (PipeC p2) = PipeC (p1 <+< p2)

Notice how the idP we wrote already had that restriction! However, notice that this restriction throws away some of our "newfound power": we can no longer use runP or execP. This raises suspicion about whether evalP and fold are well-behaved. Well, hold that thought for a second, and consider the following pseudo-Haskell, where we provide a less restrictive Category instance by "bundling" the input with the upstream type, and the output with the result type:

newtype PipeC m (i,u) (o,r) = PipeC (Pipe i o u m r)

instance Category (PipeC m) where
  id :: PipeC m (i,u) (i,u)
  -- id :: Pipe i i u m u
  id = PipeC idP

  (.) :: PipeC m (i',u') (o,r) -> PipeC m (i,u) (i',u') -> PipeC m (i,u) (o,r)
  -- (.) :: Pipe i' o u' m r -> Pipe i i' u m u' -> Pipe i o u m r
  (PipeC p1) . (PipeC p2) = PipeC (p1 <+< p2)

Note that, again, idP bears the exact restriction given in the type. However, this time, (.) captures the full meaning of (<+<) without any superfluous restriction! But wait, we weren’t even sure if the restricted version was a category… how will we know if this is a category? Or… a category-like… thing, since we’re bending the rules of Haskell in the first place.

On a huge tangent, "type bundling" in this manner would also allow us to express a Category instance for lens families as well.
-- the types look backwards for LensFamily composition
-- so we'll just swap them in the first place
newtype LensC f (b,b') (a,a') = LensC (LensFamily f a a' b b')

instance Category (LensC f) where
  id :: LensC f (a,a') (a,a')
  id = LensC id

  (.) :: LensC f (b,b') (c,c') -> LensC f (a,a') (b,b') -> LensC f (a,a') (c,c')
  (LensC l1) . (LensC l2) = LensC (l1 . l2)

This really does work out soundly. See for yourself (cabal install lens-family):

ghci> :m +Lens.Family.Stock
ghci> newtype LensC f b b' a a' = LensC (LensFamily f a a' b b')
ghci> let (LensC l1) `lcompose` (LensC l2) = LensC (l1 . l2)
ghci> :t lcompose
lcompose :: LensC f t t1 a a' -> LensC f b b' t t1 -> LensC f b b' a a'
ghci> :t LensC id
LensC id :: LensC f a a' a a'

Well, back to the point at hand: the answer is I don’t know. Do you? Perhaps sometime later I’ll add an addendum to this blog series with a deeper investigation of the Category laws, but for now, we’re just going to plow ahead, and not promise anything about whether or not our Pipe is still a Category. I am going to conjecture that it is, but nevertheless, buyer beware! I dare you to find a counterexample.

## Next time

The new powers that our enhanced pipe composition gives us are nice, but we had to give up the ability to not care, and instead we have to write code in a slightly different style. What’s more, upstream pipes now have no power over their downstream counterparts; once an upstream pipe returns, it is forever doomed to just keep returning that same result, which is sort of weird.

Next time, we’ll explore a new primitive, abort, and restore the ability for any pipe to abort the entire pipeline.

abort :: Monad m => Pipe i o u m r

You can play with this code for yourself by downloading PipeU.lhs.

## Pipes to Conduits part 1: Yield and Await

Last time we quickly reviewed several basic Functors in Haskell, and various ways to combine them.
Today, we will put these functors to good use, and rewrite Control.Pipe (not that it needs rewriting; we’re just doing this for fun).

> {-# LANGUAGE TypeOperators #-}
> {-# OPTIONS_GHC -Wall #-}
>
> module Pipe where
>
> import Control.Monad.Trans.Free (FreeT(..), FreeF(..), liftF, wrap)
> import Fun ((:&:)(..), (:|:)(..))
>
> import Data.Void (Void)
> import Control.Monad (forever, when)
> import Control.Monad.Trans.Class (lift)


## Functors

I’m going to give new names to three old friends. These new names will be more convenient and helpful when dealing with pipe-related concepts. I’ll use a few conventions throughout this code:

* A pipe’s input is referred to by the type variable i
* A pipe’s output is referred to by the type variable o
* Monads, as usual, are referred to by the type variable m
* A pipe’s return type is referred to as r
* The final type variable of a functor will usually be called next

> newtype Then next = Then next            -- Identity
> newtype Yield o next = Yield o           -- Const
> newtype Await i next = Await (i -> next) -- Fun


Then embodies knowledge of what to do next, while Await represents the need of an i in order to determine what’s next. Yield provides an o, which is presumably the i that someone else is awaiting. The Functor instances are the same as in the last post.

> instance Functor Then where
>   fmap f (Then next) = Then (f next)
>
> instance Functor (Yield o) where
>   fmap _f (Yield o) = Yield o
>
> instance Functor (Await i) where
>   fmap f (Await g) = Await (f . g)


## The Pipe type

For our pipe primitive yield, we want to be able to continue computation afterwards, so we will bundle Yield o with Then to accomplish this.

> type YieldThen o = Yield o :&: Then


At its heart, a Pipe can either Yield(Then) or Await. We can encode this directly with the :|: functor combiner.

> type PipeF i o = YieldThen o :|: Await i


Now that we have assembled our functor, let’s create a free monad out of it.
> type Pipe i o = FreeT (PipeF i o)


This type is intended to work just like Pipe from Control.Pipe. It has four type parameters: Pipe i o m r (the final two are implied by the partial application of FreeT). We’ll also provide the same convenience synonyms as Control.Pipe (again, with implied type parameters m and r):

> type Producer o = Pipe () o
> type Consumer i = Pipe i  Void
> type Pipeline   = Pipe () Void


## Working with PipeF

Unfortunately, FreeT introduces some extra layers of cruft that we have to work through. Our functor-based approach using (:|:) and (:&:) introduces even more cruft. Fear not, it is all very straightforward, and it is an entirely mechanical process to deal with the cruft. First, let’s define some lifting functions and smart constructors to help us put the right puzzle pieces in the right places:

> liftYield :: YieldThen o next ->  PipeF i o next
> liftYield = L
>
> liftAwait :: Await i next ->      PipeF i o next
> liftAwait = R
>
> yieldF :: o -> next ->            PipeF i o next
> yieldF o next = liftYield $ Yield o :&: Then next
>
> awaitF :: (i -> next) ->         PipeF i o next
> awaitF f = liftAwait $ Await f


Now, to cut down on pattern-matching cruft, we’ll make a case assessment function. First, consider the FreeT type:

newtype FreeT f m r = FreeT { runFreeT :: m (FreeF f r (FreeT f m r)) }

Ugh! It looks daunting, but it’s really quite straightforward. First, it is wrapped in a monad, m next. Second, it is wrapped in a FreeF f r next, which can be either Return r or Wrap (f next). Finally, next is another FreeT f m r all over again. Because we will, to some extent, be mimicking Control.Pipe code, we will be performing case analysis at the level of FreeF.

> pipeCase :: FreeF (PipeF i o) r next
>          -> (r           -> a) -- Return
>          -> (o -> next   -> a) -- Yield
>          -> ((i -> next) -> a) -- Await
>                          -> a
> pipeCase (Return r) k _ _ = k r
> pipeCase (Wrap (L (Yield o :&: Then next)))
>                     _ k _ = k o next
> pipeCase (Wrap (R (Await f)))
>                     _ _ k = k f


## Pipe primitives

We already created smart constructors awaitF and yieldF, which take the appropriate arguments, and plug them into the correct slots to create a PipeF. By making use of these and liftF, writing the pipe primitives await and yield is trivial.

> await :: Monad m => Pipe i o m i
> await = liftF $ awaitF id
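The FreeF/FreeT layering that pipeCase peels apart can be mirrored in a standalone sketch. The shapes here match Control.Monad.Trans.Free, but the definitions are local to the example, not imported from the module above:

```haskell
-- One monadic layer m wraps a FreeF, which is either Return r or Wrap (f next),
-- where next is another FreeT all over again.
data FreeF f r next = Return r | Wrap (f next)

newtype FreeT f m r = FreeT { runFreeT :: m (FreeF f r (FreeT f m r)) }

-- Peel exactly one layer: run the monadic action, then case on the FreeF,
-- reporting whether the computation has already returned.
peeled :: Functor m => FreeT f m r -> m (Maybe r)
peeled (FreeT m) = fmap step m
  where
    step (Return r) = Just r
    step (Wrap _)   = Nothing
```

This "run the action, then case on the FreeF" rhythm is exactly what pipeCase (and later <+< and runPipe) perform at every step.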

> yield :: Monad m => o -> Pipe i o m ()
> yield b = liftF $ yieldF b ()


The trick to using liftF is you simply provide whatever argument to the constructor that makes sense. For await, we need to provide a function i -> next, such that next is the result type i. The obvious function i -> i is id. For yield, we need to come up with something such that next is the result type (), so () is what we plug in. liftF won’t always suit our needs, but once you grasp the little intuition, it is quite elegant for the places where it does work.

## Pipe composition

The fundamental thing that you do with pipes is you connect them. By rearranging the type variables, you can make a Category, but I won’t bother doing that here.

Pipe composition is driven by the downstream pipe. The arrows point downstream, so p1 <+< p2 means that p1 is downstream of p2, and p2 is upstream of p1.

> (<+<) :: Monad m => Pipe i' o m r -> Pipe i i' m r -> Pipe i o m r
> p1 <+< p2 = FreeT $ do
>   x1 <- runFreeT p1
>   let p1' = FreeT $ return x1
>   runFreeT $ pipeCase x1


We begin by running the downstream monadic action (recall that FreeT is just a newtype around m (FreeF ...)) and performing case analysis on the resultant FreeF.

>   {- Return -} (\r      -> return r)
>   {- Yield  -} (\o next -> wrap $ yieldF o (next <+< p2))


If the downstream pipe is a Return, then we discard the upstream pipe and return the result. (The expressions on the right-hand side of each -> are Pipes, so when we say return r, we are saying "create the pipe that trivially returns r".)

If the downstream pipe is yielding, then we create a yield action, and suspend the composition of whatever comes next after the yield with the upstream pipe. Remember, yield transfers control downstream, so if the downstream of two composed pipes is yielding, then they are both giving up control to a pipe farther down the line. Also recall that wrap will take a PipeF and make a Pipe out of it, and that yieldF makes a PipeF.

>   {- Await  -} (\f1     -> FreeT $ do
>     x2 <- runFreeT p2
>     runFreeT $ pipeCase x2


If the downstream pipe is Awaiting, then control transfers upstream. We perform the same trick of extracting and casing on the upstream pipe.

>     {- Return -} (\r      -> return r)
>     {- Yield  -} (\o next -> f1 o <+< next)
>     {- Await  -} (\f2     -> wrap $ awaitF (\i -> p1' <+< f2 i)))


If the upstream pipe is a Return, then we discard the downstream pipe and return the result. That’s right: whichever pipe returns first wins, and shuts down anyone that is composed with it as soon as they give it control.

If the upstream pipe is yielding, then great! We’re in the branch where we happen to know that downstream is Awaiting, so just pass the yielded information along, and compose the new downstream pipe with the "next" part of the upstream one.

If the upstream pipe is awaiting, well, then both upstream and downstream are awaiting, so we transfer control further upstream by combining the two into a single await construct, deferring their composition until an up-upstream value is available.

And that’s it! That wasn’t so hard, was it?

> (>+>) :: Monad m => Pipe i i' m r -> Pipe i' o m r -> Pipe i o m r
> (>+>) = flip (<+<)

> infixr 9 <+<
> infixr 9 >+>


## Running a pipeline

Running a pipeline is very straightforward. We simply provide an endless supply of () fuel, and keep cranking the pipeline until it returns something. We ignore anything it yields; its type is constrained to Void so it shouldn’t be yielding anything in the first place.

> runPipe :: Monad m => Pipeline m r -> m r
> runPipe p = do
>   e <- runFreeT p
>   pipeCase e
>   {- Return -} (\r       -> return r)
>   {- Yield  -} (\_o next -> runPipe next)
>   {- Await  -} (\f       -> runPipe $ f ())


Note that runPipe and <+< could be considered two different "interpreters" for pipes. The Free monad just gives us a convenient way to assemble the puzzle pieces, but it is up to us to give the final result meaning by interpreting it. Conduit’s "connect and resume" operator could be considered yet another "interpreter".

## Some basic pipes

Here’s just a few pipes to play with. Fire up ghci and make sure they work as expected.

> fromList :: Monad m => [o] -> Producer o m ()
> fromList = mapM_ yield


Pay attention to how the following are implemented, because next time we’re going to have to change them.

> pipe :: Monad m => (i -> o) -> Pipe i o m r
> pipe f = forever $ await >>= yield . f
>
> idP :: Monad m => Pipe i i m r
> idP = pipe id
>
> filterP :: Monad m => (i -> Bool) -> Pipe i i m r
> filterP test = forever $ await >>= \x -> when (test x) (yield x)
>
> printer :: Show i => Consumer i IO r
> printer = forever $ await >>= lift . print


Testing…

ghci> runPipe $ printer <+< pipe (+1) <+< filterP even <+< fromList [1 .. 5]
3
5

ghci> runPipe $ idP <+< idP <+< return "Hello, pipes" <+< idP <+< idP
"Hello, pipes"

ghci> runPipe $ return "Downstream drives" <+< return "Upstream doesn't"
"Downstream drives"

ghci> runPipe $ (printer >> return "not hijacked") <+< return "hijacked"
"hijacked"


## Next time

Await and yield are great, but the greedy return shutdown behavior is somewhat disturbing. Next time, we’ll tweak the Pipe type, giving it an "upstream result" type parameter. With that, result types will be composed, too, and that way an upstream pipe won’t be able to hijack a downstream pipe!

type PipeF i o u = ???
type Pipe i o u = FreeT (PipeF i o u)
(<+<) :: Pipe i' o u' m r -> Pipe i i' u m u' -> Pipe i o u m r

Play around with this code by downloading Pipe.lhs. (You’ll need Fun.lhs from last time in the same directory).

## Pipes to Conduits part 0: Combining Functors

A Functor in Haskell is a type constructor f :: * -> * that supports the operation fmap :: (a -> b) -> f a -> f b. Many "container" types are Functors, including the List type. But we’re not going to talk about "containers", per se. We’re going to explore a few of the simplest functors that we can squeeze out of the Haskell type system. Of course, I don’t know the actual names of some of these, so you’ll have to forgive me for giving them pet names instead.

Our end goal in exploring these functors is to reproduce the Conduit library by assembling pieces of it, one functor at a time. For this post, we’re just going to survey various functors, and ways to compose them. I’ll also touch lightly on how they play with the Free Monad Transformer, though serious discussion of that will be left to later posts.

> {-# LANGUAGE TypeOperators #-}
> {-# OPTIONS_GHC -Wall #-}
>
> module Fun where


## The Identity functor

> newtype Identity next = Identity next


The Identity functor trivially wraps a value. In order to implement fmap, it just applies the function directly to the value inside.

> instance Functor Identity where
>   fmap f (Identity next) = Identity (f next)


When used with the Free Monad Transformer, the Identity monad trivially grants you the ability to represent "the thing that comes next". This will be convenient for us sometime around part 5 of this series.
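
As a tiny standalone illustration (a hand-rolled Free rather than the free package’s FreeT, with a name, runSteps, that is mine rather than the series’), a stack of Identity layers is literally just a sequence of "next" steps wrapped around a final value:

```haskell
-- Sketch only: a hand-rolled Free monad over Identity, independent of
-- the free package. Each Wrap layer represents one "thing that comes
-- next"; running just peels layers until the pure value appears.
newtype Identity next = Identity next

instance Functor Identity where
  fmap f (Identity next) = Identity (f next)

data Free f a = Pure a | Wrap (f (Free f a))

-- runSteps is an illustrative name: count the Identity layers while
-- unwrapping down to the final value.
runSteps :: Free Identity a -> (Int, a)
runSteps (Pure a)            = (0, a)
runSteps (Wrap (Identity k)) = let (n, a) = runSteps k in (n + 1, a)
```

Here runSteps (Wrap (Identity (Pure 'x'))) would report one step taken before reaching 'x'.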

## The Empty functor

> data Empty next = Empty


The Empty functor contains no information. It admits the type variable, but is otherwise nothing but the trivial value, ().

> instance Functor Empty where
>   fmap _f Empty = Empty


When used with the Free Monad Transformer, the Empty functor allows you to short-circuit computation. The Free Monad Transformer works by stacking functors up, but as you can see, the Empty functor has no room for other functors to live inside of it.
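
Here is a hedged sketch of that short-circuiting, again with a hand-rolled Free and an interpreter name (runEmpty) of my own choosing: once an Empty layer appears, there is no slot underneath it where a Pure value could live, so interpretation must stop.

```haskell
-- Sketch only: a hand-rolled Free, this time over Empty.
data Empty next = Empty

instance Functor Empty where
  fmap _f Empty = Empty

data Free f a = Pure a | Wrap (f (Free f a))

-- runEmpty is an illustrative interpreter: a Pure value succeeds, but
-- hitting an Empty layer aborts the whole computation, because Empty
-- has no room for a continuation inside it.
runEmpty :: Free Empty a -> Maybe a
runEmpty (Pure a)     = Just a
runEmpty (Wrap Empty) = Nothing
```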

## The Const functor

> newtype Const a next = Const a


The Const functor is very similar to the Empty functor, except that it contains some actual value, which remains untouched by functor operations.

> instance Functor (Const a) where
>   fmap _f (Const a) = Const a


When used with the Free Monad Transformer, the Const functor allows you to terminate computation while providing some information. Joining this functor with the Identity functor will allow us to supply information without terminating computation (because the Identity functor gives a space for the "next" computation), which will be the heart of our yield functionality.
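
A minimal sketch of that idea, using local stand-in definitions (Product plays the role of the :&: defined later in this post, and yieldStep is my name): fmap passes over the Const side untouched and transforms only the Identity side, which is exactly the "carry a value, but keep going" shape that yield needs.

```haskell
-- Sketch with local stand-ins. fmap ignores the Const payload and
-- maps only the Identity slot.
newtype Const a next = Const a
newtype Identity next = Identity next

data Product f g x = Pair (f x) (g x)

instance Functor (Const a) where
  fmap _f (Const a) = Const a

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

instance (Functor f, Functor g) => Functor (Product f g) where
  fmap f (Pair l r) = Pair (fmap f l) (fmap f r)

-- One yield-like step: a payload o, plus the computation that follows.
yieldStep :: o -> next -> Product (Const o) Identity next
yieldStep o next = Pair (Const o) (Identity next)
```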

## The (a ->) functor

> newtype Fun a next = Fun (a -> next)


Functions, as you may know, are functors. In order to fmap onto a function, simply wait until the function has acquired input and produced an output, and then map onto the function’s output.

> instance Functor (Fun a) where
>   fmap f (Fun g) = Fun (\x -> f (g x))


When used with the Free Monad Transformer, this allows us to represent inversion of control, or acquiring information from some outside source, in order to determine what to do next. This will be the heart of our await functionality.
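
A small sketch of the idea (feed is an illustrative helper of mine, not from the post): fmap post-composes onto the pending function, so we can keep transforming "what to do next" before any input ever arrives.

```haskell
-- Sketch: fmap on the function functor is post-composition, so the
-- continuation of a pending "await" can be transformed before any
-- input is supplied.
newtype Fun a next = Fun (a -> next)

instance Functor (Fun a) where
  fmap f (Fun g) = Fun (f . g)

-- Supply the awaited input, resuming the suspended computation.
feed :: a -> Fun a next -> next
feed x (Fun g) = g x
```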

## Composing functors

> newtype (f :.: g) x = Composed (f (g x))


Functors can be composed, and the result is a functor.

> instance (Functor f, Functor g) => Functor (f :.: g) where
>    fmap f (Composed x) = Composed $ fmap (fmap f) x


I won’t be using this particular form of functor composition for future posts, but it was worth noting. Instead, let’s take a look at two other ways to combine functors:

## Combining functors (tagged union)

> infixl 5 :|:
> data (f :|: g) x = L (f x) | R (g x)


If I have two functors f and g, then their tagged union is also a functor. We can just tag the f x values with L and the g x values with R so that whenever we come across some data, we know which of the two functors it actually was. By case analysis, we can make a tagged union of functors also be a functor:

> instance (Functor f, Functor g) => Functor (f :|: g) where
>   fmap f (L x) = L (fmap f x)
>   fmap f (R x) = R (fmap f x)


This will be very useful. While in normal code you would just use Haskell’s plain old tagged unions to define a data type:

data List a = Nil | Cons a (List a)

we’re not going to do that, because it’s more fun to take advantage of functory goodness. You could define a left-only or right-only functor for a tagged union if you wanted to.

## Combining functors (product)

> infixl 7 :&:
> data (f :&: g) x = f x :&: g x


If I have two functors f and g, then their product is also a functor: just perform the fmap on them both simultaneously.

> instance (Functor f, Functor g) => Functor (f :&: g) where
>   fmap f (l :&: r) = fmap f l :&: fmap f r


Similar to how in Haskell you can provide multiple pieces of data to a constructor, we can use :&: to bundle information together.

type Cons a = Const a :&: Identity
type Nil = Empty
type ListF a = Nil :|: Cons a
type List a = FreeT (ListF a)

Again, left-only or right-only Functor instances are possible, but unnecessary for my needs.

## Next time

Next time, we’ll start by creating an implementation of Control.Pipe from the pipes package. Our Pipe type will be able to yield and await. Exercise to the reader: try it yourself before you read the next post! Here’s a little bit to get you started.
{-# LANGUAGE TypeOperators #-}
module Pipe where

-- "cabal update && cabal install pipes" for this Free module
import Control.Monad.Trans.Free
import Fun

newtype Then next = Then next            -- Identity
newtype Yield o next = Yield o           -- Const
newtype Await i next = Await (i -> next) -- Fun

type PipeF i o = (??? :&: ???) :|: ???
type Pipe i o m r = ??? PipeF i o ???

yield :: o -> Pipe i o m ()
yield o = ???

await :: Pipe i o m i
await = ???

To play with this code, download Fun.lhs.

## Breaking from a Loop with ContT

Sort of in response to Breaking from a Loop. Sometimes I wish Haskell culture embraced ContT more.

> import Control.Monad.Cont
> import Control.Monad.State
> import Control.Monad.IO.Class


loopForever

Suppose I wanted to break out of loops using a user-defined label for "break". First, assume I am working with some data type LoopT:

data LoopT m a = ...
runLoopT :: Monad m => LoopT m a -> m a

And suppose I wanted to write something like the following:

> getNonNullLine :: IO String
> getNonNullLine = runLoopT $ loopForever $ \break -> do
>   str <- liftIO getLine
>   when (not $ null str) $ break str


What type would loopForever have to have in order to make this work?

loopForever :: Monad m
  => ( (a -> LoopT m ())  -- label
       -> LoopT m b       -- loop body
     )
  -> LoopT m a            -- type of whole expression

Hrm… now how are we going to implement LoopT, runLoopT, and loopForever? Well gee, the type of loopForever sure looks familiar… in fact, it’s nearly identical to callCC!

callCC :: MonadCont m => ((a -> m b) -> m a) -> m a

It turns out that implementing it in terms of callCC and forever is trivial:

> loopForever :: MonadCont m => ((a -> m c) -> m b) -> m a
> loopForever f = callCC (forever . f)


> runLoopT :: Monad m => ContT a m a -> m a
> runLoopT = flip runContT (\a -> return a)


The reasoning is straightforward: I want to callCC on f, but first, I want to apply "forever" to the "body" of f, hence callCC (forever . f).
Another silly example using State:

> untilS :: MonadState s m => (s -> Bool) -> (s -> s) -> m s
> untilS test inc = runLoopT $ loopForever $ \break -> do
>   s <- get
>   when (test s) $ break s
>   put $! inc s


Testing:

ghci> flip runState 3 $ untilS (==5) (+1)
(5,5)


Foreach

Breaking out of ‘foreach’ loops can be written just as easily:

> foreachExample :: IO ()
> foreachExample = runLoopT_ $ do
>   foreach_ [1 .. 3] $ \breakI i -> do
>     foreach_ [4 .. 6] $ \breakJ j -> do
>       when (j == 6) $ breakJ ()
>       liftIO $ print (i, j)


> foreach_ :: MonadCont m => [i] -> ((() -> m c) -> i -> m b) -> m ()
> foreach_ is f = callCC (forM_ is . f)


> runLoopT_ :: Monad m => ContT () m a -> m ()
> runLoopT_ = flip runContT (\_ -> return ())


Notice how similar foreach_ is to loopForever. We just modify the body of f in a different way, this time applying forM_ is. The plumbing is slightly different, since foreach_ loops are used exclusively for their side effects. Writing a version of foreach that uses forM instead of forM_ is left as an exercise for the reader.

Testing:

ghci> foreachExample
(1,4)
(1,5)
(2,4)
(2,5)
(3,4)
(3,5)

There are obviously different tradeoffs to using ContT, and I certainly do endorse using EitherT and MaybeT for such things. I just find it lamentable that ContT is treated like the ugly duckling. ContT is not to be feared.

To play with this code, download loop.lhs.

## The Long and Epic Journey of LambdaCase

On December 16, 2005, Haskell Prime trac ticket #41 was born, with the humble title add LambdaCase. The description field contained a pointer to the LambdaCase wiki article, whose contents are also quite humble:

case statements as first order

a simple desugaring

case of arms gets desugared to \x -> case x of arms

That’s right. Six and a half years ago, Isaac Jones was thinking about how Haskell would be better if it had LambdaCase. Some of you readers might have been right there with him six years ago — or more — wishing for this simple extension to Haskell syntax. Well the truth is, it almost certainly goes back much farther than that. Trac history records that Haskell Prime tickets 13 through 74 were all created that very same day by ijones; presumably they were copied over from some pre-existing system.
As someone who only started learning Haskell in late 2010, anything that predates GHC 6.12 is ancient history for me, so you’ll have to ask one of the old sages for details about the true origins of the idea of LambdaCase.

Fast forward to October 2, 2010, about two months before I started reading Learn You a Haskell and falling in love with a new language. batterseapower opened GHC trac ticket #4359 entitled Implement lambda-case/lambda-if. He provided a patch implementing behavior similar to that described in Haskell Prime trac ticket #41, bundled with a similar feature for if-then-else syntax.

Prelude> (if then "Haskell" else "Cafe") False
"Cafe"
Prelude> (case of 1 -> "One"; _ -> "Not-one") 1
"One"

Discussion ensued. Would merely providing mcase be a better option? Should this feature serve as "a standard lambda that can discriminate patterns on the value being bound"? Simon Marlow impulsively supported lambda-case and lambda-if at first, but then revoked his support, stating that:

the downside is that we have to adjust our mental parsers/typecheckers to recognise if then and case of as a lambda, and I’m not sure the gain in brevity is worth the loss of readability.

Simon Peyton-Jones joined with Marlow in scepticism of the initial proposed syntax, stating that:

My own gut feel is that it’s a lot better to have an initial symbol to say "here comes a function". So I’m keener on the multi-clause lambda idea, if someone can devise a syntax that works.

The Simons put their heads together and suggested the following syntax:

\case { blah } ==> \x -> case x of { blah }

Peyton-Jones suggested \of as a potential alternative to \case, which would avoid creating a new "layout herald", given of is already a layout herald. The \of idea seems to be popular in a more recent discussion; we’ll get to that later.

All the while, discussions on mailing lists were taking place. Max Bolingbroke notified Haskell Cafe of the ticket, resulting in an explosion of ideas and discussion.
Back on trac, it was suggested that LambdaCase could support multiple arguments:

\case { (Just x) (Just y) -> x + y; _ _ -> 1 }
  ==> \a b -> case (a, b) of { (Just x, Just y) -> x + y; (_, _) -> 1 }

Now, I believe there were two grave mistakes that caused this ticket to be derailed. First, it introduced a patch that implemented two separate features: lambda-case and lambda-if. Second, as it was discussed, the scope of it never seemed to stop growing. People kept dreaming up new features that could be added into the original idea, causing it to swell in complexity, making it difficult to pinpoint what exactly should be implemented and whether it was a good idea at all.

Fast-forward again to April 2011. Following a few recent touches to the ticket, Mikhail Vorozhtov introduced another patch, implementing single-argument \case syntax. The Simons again chimed in, this time less helpfully than the last: they both led conversation to tangential topics. Marlow expressed interest in multi-argument \case, while Peyton-Jones suggested simply using \ instead of \case, and therefore making \ a layout herald. The \ idea received mostly negative feedback, given that it would cause backwards incompatibilities in the way \ works.

Nevertheless, Peyton-Jones made an important comment in the midst of this portion of the discussion:

… every part of GHC from the parser onwards already implements lambda-case! … All that lambda-case does is provide concrete syntax to let you get at the abstract syntax that is already there. … So I think lambda-case is a fine candidate to fill the syntactic gap for now, leaving something more ambitious for the future.

"Something more ambitious" referred to a compositional approach to pattern matching, and I couldn’t agree more: we need to fill the gap with lambda-case now, and leave the ambitious, compositional solution for the future.
On May 12, 2011, Simon Marlow gave a deep stamp of approval to the newly-submitted patch when he said:

Patch looks great, thanks Mikhail! We just need a test, and we can put it into 7.2.1.

Despite this, the milestone got pushed back from 7.2.1 to 7.4.1. At this point, despite the negative feedback, several people seemed to be pushing for the simpler \ syntax. SPJ suggested that to avoid conflicts, the feature could be made available only with explicit layout (using curly braces and semicolons). However, Simon Marlow expressed dislike for explicit layout only, and concluded:

Perhaps the only way to do this is to have a new keyword.

A few more suggestions floated about, and the milestone was punted from 7.4.1 to 7.6.1.

Fast forward to May 2012. SPJ downgraded the priority "until we can find a better solution". Mikhail, ever the champion of the cause, wisely suggested splitting multi-arg \case into a separate ticket. SPJ noted that "Simon and I are deeply under water with other aspects of GHC, which makes it hard to devote attention to new features." Simon Marlow requested "a summary of the previous proposals and their pros and cons, so that we don’t have to rediscover all this." (Note that this document is not geared towards achieving that goal. Rather, my goal with this document is merely to expose the long and epic journey of LambdaCase to the public eye.) Marlow also suggested:

Another alternative is to introduce a new keyword for anonymous multi-clause multi-argument functions, e.g. fun

Fast forward again to July 5, 2012. Mikhail created a wiki page, LambdasVsPatternMatching, essentially to fulfill Marlow’s request of reviewing the pros and cons of each of the major suggestions. Notably, Mikhail concludes from his personal experience that multi-arg \case was entirely unnecessary; single-arg \case was entirely sufficient, and in those few rare circumstances where multiple matches were desirable, curry $ \case ... was sufficient.

Mikhail provided new patches, and started a thread on the GHC Users mailing list. The usage of parens was discussed, and Simon’s \of idea came to light again, gathering strong support from various people. A few (including myself) find this particular idea acceptable, though "a little weird", and Mikhail stated:

Do you think that adding "\" + "case" as a layout herald would complicate the language spec and/or confuse users? Because it certainly does not complicate the implementation (there is a patch for \case already). IMO \case is more descriptive, "of" is just a preposition after all.

Additional details regarding layout rules and "comma sugar" are discussed in that thread.

Fast forward to today. The discussion is ongoing. You have the opportunity to let your voice be heard. Chime in on the mailing list. Code review the current proposed patch.

But promise me one thing. Do you want LambdaCase, in any shape or form, to make it into GHC 7.6.1? Then make yourself heard. Make it happen. We’re in the process of writing the latest chapter in LambdaCase history. Let’s not punt it to the next GHC release. Let’s make this chapter the one where it actually gets released.

And let’s not stop there. It’s been over six years. It’s time for dreams of Haskell Prime to start coming true. What will it take to get this feature into the Haskell language itself? Let’s keep moving forward, and see that happen for ourselves.


## Generalizing the fibonacci sequence

In response to Daily Programmer Challenge #71 intermediate.

You’ve probably seen the classic Haskell one-liner:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

Let’s generalize it to work with this problem. Since I chose to use Integers everywhere, I’ll need lots of genericFoo from Data.List.

> import Data.List


Now first let’s generalize zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]. The zipping function, instead of taking 2 inputs, will take K inputs. Then, instead of giving it 2 lists, we will give it K lists. It will be slightly less general, in that all K inputs will have the same type a, rather than differing types a and b.

Let’s use a list of length K to encode this sort of input. Therefore, (a -> b -> c) becomes (List K a -> c), and [a] -> [b] -> [c] becomes List K [a] -> [c]. However, I won’t actually bother encoding how long the list is into the type system, so it’ll just be [a] -> c and [[a]] -> [c] respectively.

I will implement it by taking in some function f, and some list xss. The first entry of the resultant list will be the result of applying f to all the first entries of xss, and so forth:

listZip :: ([a] -> b) -> [[a]] -> [b]
listZip _ []  = []
listZip f xss
| null (head xss) = []
| otherwise = f (map head xss) : listZip f (map tail xss)

Actually, there’s an easier way to implement it, using Data.List:

> listZip :: ([a] -> b) -> [[a]] -> [b]
> listZip f = map f . transpose
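
As a quick sanity check of the transpose-based definition (repeated below so the snippet stands alone; the numbers are made-up data), each output entry is f applied to one "column" of the inputs:

```haskell
import Data.List (transpose)

-- The definition from above, repeated so this snippet compiles alone.
listZip :: ([a] -> b) -> [[a]] -> [b]
listZip f = map f . transpose

-- Zipping three lists with sum: the columns are [1,3,5] and [2,4,6],
-- so the result is [1+3+5, 2+4+6].
example :: [Int]
example = listZip sum [[1, 2], [3, 4], [5, 6]]
```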


Now, I must generalize (+) to work on lists. The obvious generalization is sum. I’m making one additional tweak, which is to calculate the sum modulo M.

> sumMod :: Integer -> [Integer] -> Integer
> sumMod m = foldl' (\x y -> (x + y) `rem` m) 0


The generalization of tail is already written for me: it is tails from Data.List. Now to generalize the rest of fibs. I’ll parameterize it by M (the modulus) and K (as described earlier), as follows:

> fibSeq :: Integer -> Integer -> [Integer]
> fibSeq m k = fibs
>  where
>   fibs = genericReplicate (pred k) 0 ++
>          1 :
>          (listZip (sumMod m) $genericTake k$ tails fibs)


From here the desired function f as specified in today’s problem is simple:

> fibSeqAt :: Integer -> Integer -> Integer -> Integer
> fibSeqAt m k n = fibSeq m k `genericIndex` n


This code therefore works by lazily constructing the Kth Fibonacci sequence (modulo M), and then inspecting its Nth element. Modular arithmetic ensures that reducing aggressively at each step still produces the same final sum modulo M.
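
That safety rests on the usual congruence for addition: summing residues and then reducing gives the same residue as summing the full values. A small check (agrees is an illustrative name of mine, and sumMod is repeated so the snippet stands alone):

```haskell
import Data.List (foldl')

-- The definition from above, repeated so this snippet compiles alone.
sumMod :: Integer -> [Integer] -> Integer
sumMod m = foldl' (\x y -> (x + y) `rem` m) 0

-- For non-negative inputs, reducing modulo m at every step gives the
-- same residue as summing fully and reducing once at the end.
agrees :: Integer -> [Integer] -> Bool
agrees m xs = sumMod m xs == sum xs `rem` m
```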

Testing:

ghci> mapM_ (print . take 20 . fibSeq 100) [1 .. 5]
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
[0,1,1,2,3,5,8,13,21,34,55,89,44,33,77,10,87,97,84,81]
[0,0,1,1,2,4,7,13,24,44,81,49,74,4,27,5,36,68,9,13]
[0,0,0,1,1,2,4,8,15,29,56,8,8,1,73,90,72,36,71,69]
[0,0,0,0,1,1,2,4,8,16,31,61,20,36,64,12,93,25,30,24]

ghci> fibSeqAt (10^8) 100 10000
93981304


This solution is still too slow, however, to reasonably compute fibSeqAt (10^8) (3^13) (5^10).