First we need to get a grip on what we mean by "powerful" in the first place. Suppose we had a class `GripHandle` for things which had a grip handle, and another class `Screwdriver` for screwdrivers. Which is more powerful?
OK, that's a silly argument, but it raises a good point about how the phrase "You can do more with a _____" is rather ambiguous.
- Is there any truth to viewpoint A?

Some. See above. There's some truth in B too. Both are misleading in some way or other.
- What kind of functionality do arrows not have? I've read that the difference has to do with composition, so what does the `>>>` operator allow us to do that `>>=` doesn't?
In fact `>>=` allows you to do more than `>>>` (more interface-supplied capability). It allows you to context-switch. This is because `Monad m => a -> m b` is a function, so you can execute arbitrary pure code on the input `a` before deciding which monadic thing to run, whereas `Arrow m => m a b` isn't a function, and you've decided which arrow thing is going to run before you've examined the input `a`.

    monadSwitch :: Monad m => m a -> m a -> (Bool -> m a)
    monadSwitch computation1 computation2 test =
        if test then computation1 else computation2
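For example (a little sketch of my own, in IO), the function on the right of `>>=` gets to inspect the value it receives before choosing which computation to run next:

    askAgainIfNegative :: IO Int
    askAgainIfNegative =
        readLn >>= \n -> if n < 0 then askAgainIfNegative else pure n

The pure test on `n` happens before we decide which computation comes next; that's the context switch described above.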
It's not possible to simulate `monadSwitch` using `Arrow` without using `app` from `ArrowApply`.
- What does `app` exactly do? Its type doesn't even have a `(->)` in it.
It lets you use the output of an arrow as an arrow. Let's look at the type.

    app :: ArrowApply m => m (m b c, b) c

I prefer to use `m` rather than `a` because `m` feels more like a computation and `a` feels like a value. Some people like to use a type operator (infix type constructor), so you get

    app :: ArrowApply (~>) => (b ~> c, b) ~> c
We think of `b ~> c` as an arrow, and we think of an arrow as a thing which takes `b`s, does something, and gives `c`s. So this means `app` is an arrow that takes an arrow and a value, and can produce the value that the first arrow would have produced on that input.
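To make that concrete, here's a small sketch (the example names are mine) of `app` at two instances provided by `Control.Arrow`: plain functions and `Kleisli` arrows.

    import Control.Arrow (ArrowApply (app), Kleisli (..))

    -- At the function instance, app is uncurried application:
    --   app :: (b -> c, b) -> c
    plainExample :: Int
    plainExample = app ((+ 1), 41)            -- 42

    -- At Kleisli arrows (monadic functions packaged as arrows), app runs
    -- the computation carried in the first component of the pair:
    --   app :: Kleisli IO (Kleisli IO b c, b) c
    kleisliExample :: IO ()
    kleisliExample = runKleisli app (Kleisli print, "hello")

In both cases, the arrow supplied as the first component of the pair is the one that ends up doing the work on the second component.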
It doesn't have `->` in the type signature because, when programming with arrows, we can turn any function into an arrow using `arr :: Arrow (~>) => (b -> c) -> b ~> c`, but we can't turn every arrow into a function; thus `(b ~> c, b) ~> c` is usable where `(b ~> c, b) -> c` or `(b -> c, b) ~> c` would not be.
We can easily make an arrow that produces an arrow (or even multiple arrows), even without `ArrowApply`, just by writing

    produceArrow :: Arrow (~>) => (b ~> c) -> (any ~> (b ~> c))
    produceArrow a = arr (const a)

The difficulty is in making that arrow do any arrow work: how do you get an arrow that you produced to be the next arrow to run? You can't pop it in as the next computation using `>>>`, the way you can pop a monadic function `Monad m => a -> m b` in as the next computation using `>>=` (to run a produced computation `m (m a)`, just bind the function `id :: m a -> m a`, which gives you `join`), because, crucially, arrows aren't functions. But using `app`, we can make the next arrow do whatever the arrow produced by the previous arrow would have done.
Thus ArrowApply gives you the runtime-generated computation runnability that you have from Monad.
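As a sketch of what that buys you (the name `arrowSwitch` is mine, and I've used an ordinary type variable instead of `(~>)` so it compiles as written), here is an arrow analogue of `monadSwitch` above, with the test arriving as part of the input:

    import Control.Arrow

    arrowSwitch :: ArrowApply arrow
                => arrow b c -> arrow b c -> arrow (Bool, b) c
    arrowSwitch a1 a2 =
        arr (\(test, b) -> (if test then a1 else a2, b)) >>> app

The pure function lifted by `arr` chooses an arrow at runtime, and `app` runs whichever one was chosen.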
- Why would we ever want to use applicative arrows over monads?
Er, do you mean Arrows or Applicative Functors? Applicative Functors are great. They're more general than either Monad or Arrow (see the paper) so have less interface-specified functionality, but are more widely applicable (get it? applicable/applicative chortle chortle lol rofl category theory humor hahahaha).
Applicative Functors have a lovely syntax that looks very like pure function application: `f <$> ma <*> mb <*> mc` runs `ma`, then `mb`, then `mc`, and applies the pure function `f` to the three results. For example, `(+) <$> readLn <*> readLn` reads two integers from the user and adds them.
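The same idiom works in any Applicative, not just IO; a tiny sketch (the types and names are mine):

    import Text.Read (readMaybe)

    data Point = Point Int Int deriving Show

    -- In IO: run two reads in sequence and combine the results.
    readPoint :: IO Point
    readPoint = Point <$> readLn <*> readLn

    -- In Maybe: the same shape, except "running" a computation just means
    -- checking for Nothing.
    parsePoint :: String -> String -> Maybe Point
    parsePoint x y = Point <$> readMaybe x <*> readMaybe y

Nothing about the pattern depends on which functor you're in.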
You can use Applicative to get the generality, and you can use Monads to get the interface-functionality, so you could argue that in theory we don't need Arrows; but some people like the notation for arrows because it's like do notation, and you can indeed use `Arrow` to implement parsers that have a static component and thus apply compile-time optimisations. I believe you can do that with Applicative too, but it was done with Arrow first.
A note about Applicative being "less powerful":

The paper points out that `Applicative` is more general than `Monad`, but you could give Applicative functors the same abilities by providing a function `run :: Applicative f => f (f b) -> f b` that lets you run a produced computation, or `use :: Applicative f => f (a -> f b) -> f a -> f b` that lets you promote a produced computation to a computation. If we define `join = run`, then together with `pure` (playing the role of `unit`) and `(<$>)` (playing the role of `fmap`) we get one theoretical basis for Monads, and if we define `(>>=) = flip (use . pure)` and `return = pure` we get the other one, the one that's used in Haskell. There isn't an `ApplicativeRun` class, simply because if you can make that, you can make a monad, and the type signatures are almost identical. The only reason we have `ArrowApply` instead of reusing `Monad` is that the types aren't identical: `~>` is abstracted (generalised) into the interface in `ArrowApply`, but function application `->` is used directly in `Monad`. This distinction is what makes programming with Arrows feel different in many ways from programming with monads, despite the equivalence of `ArrowApply` and `Monad`.
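To make the equivalence concrete, here is what that hypothetical class would look like; it's not in `base`, and `run`/`use` are just the names used above:

    -- A hypothetical class: Applicative plus the ability to run a
    -- produced computation.
    class Applicative f => ApplicativeRun f where
        run :: f (f b) -> f b

    -- "use" falls out of run and (<*>) ...
    use :: ApplicativeRun f => f (a -> f b) -> f a -> f b
    use ff fa = run (ff <*> fa)

    -- ... and then bind is exactly the flip (use . pure) mentioned above.
    bind :: ApplicativeRun f => f a -> (a -> f b) -> f b
    bind = flip (use . pure)

None of this is in the standard libraries; it only shows how close `run`/`use` are to `join`/`(>>=)`.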
< cough > Why would we ever want to use Arrows/ArrowApply over Monad?
OK, I admit I knew that's what you meant, but wanted to talk about Applicative functors and got so carried away I forgot to answer!
Capability reasons: yes, you would want to use Arrow over Monad if you had something that can't be made into a monad. The motivating example that brought us Arrows in the first place was parsers: you can use Arrow to write a parser library that does static analysis in the combinators, making the parsers more efficient. Monadic parsers can't do this because they represent a parser as a function, which can do arbitrary things to the input without recording them statically, so you can't analyse them at compile time/combine time.
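To give a flavour of what "static analysis in the combinators" means, here is a bare-bones sketch of the general idea (the names are mine; this is not the parser library from the Arrows papers): pair every computation with a static part that can be combined without running anything.

    import Prelude hiding (id, (.))
    import Control.Category (Category (..))
    import Control.Arrow (Arrow (..))

    -- static information alongside the dynamic function
    data Static s a b = Static s (a -> b)

    instance Monoid s => Category (Static s) where
        id = Static mempty id
        (Static s2 g) . (Static s1 f) = Static (s1 <> s2) (g . f)

    instance Monoid s => Arrow (Static s) where
        arr = Static mempty
        first (Static s f) = Static s (\(a, c) -> (f a, c))

    -- The static part of a composite arrow is available before any input
    -- is supplied.
    staticPart :: Static s a b -> s
    staticPart (Static s _) = s

A parser library in this style can expose something like its set of starting tokens through the static part before any input is parsed, which is exactly the kind of analysis the function-based monadic interface rules out.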
Syntactic reasons: no, I personally wouldn't want to use Arrow-based parsers, because I dislike the arrow `proc`/`do` notation; I find it even worse than the monadic notation. My preferred notation for parsers is Applicative, and you might be able to write an Applicative parser library that does the efficient static analysis that the Arrow one does, although I freely admit that the parser libraries I commonly use don't.