Given:
mf :: (Applicative m, Monad m) => m (a -> b)
ma :: (Applicative m, Monad m) => m a
it seems to be considered a law that:
mf <*> ma === do { f <- mf; a <- ma; return (f a) }
or more concisely:
(<*>) === ap
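For reference, ap from Control.Monad is essentially defined as

ap :: Monad m => m (a -> b) -> m a -> m b
ap mf ma = do { f <- mf; a <- ma; return (f a) }

so the two formulations above really are the same claim.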
The documentation for Control.Applicative says that <*> is "sequential application", which suggests that (<*>) = ap. That in turn means <*> must evaluate effects sequentially from left to right, for consistency with >>=... But that feels wrong. McBride and Paterson's original paper seems to imply that the left-to-right sequencing is arbitrary:
The IO monad, and indeed any Monad, can be made Applicative by taking pure = return and <*> = ap. We could alternatively use the variant of ap that performs the computations in the opposite order, but we shall keep to the left-to-right order in this paper.
So there are two lawful, non-trivial derivations of <*> from >>= and return, with distinct behavior. And in some cases, neither of these two derivations is the desirable one.
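To make "distinct behavior" concrete, here is a small sketch; apL and apR are my own names for the two derivations, and the list monad makes the difference visible in pure results:

apL, apR :: Monad m => m (a -> b) -> m a -> m b
apL mf ma = do { f <- mf; a <- ma; return (f a) }   -- the usual ap
apR mf ma = do { a <- ma; f <- mf; return (f a) }   -- the reversed variant from the paper

-- In the list monad the two orderings give differently ordered results:
--   apL [(+ 1), (* 2)] [10, 20]  ==  [11, 21, 20, 40]
--   apR [(+ 1), (* 2)] [10, 20]  ==  [11, 20, 21, 40]
-- In IO they run the two actions' side effects in opposite orders.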
For example, the (<*>) === ap law forces Data.Validation to define two distinct data types: Validation and AccValidation. The former has a Monad instance similar to that of ExceptT, and a consistent Applicative instance of limited utility, since it stops at the first error. The latter doesn't define a Monad instance at all, and is therefore free to implement an Applicative that, much more usefully, accumulates errors.
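To show the contrast, here is a minimal sketch of an error-accumulating applicative in the style of AccValidation (Acc is my own illustrative type, not the actual Data.Validation definitions, and I'm assuming the error type is a Semigroup):

data Acc e a = Failure e | Success a

instance Functor (Acc e) where
    fmap _ (Failure e) = Failure e
    fmap f (Success a) = Success (f a)

instance Semigroup e => Applicative (Acc e) where
    pure = Success
    Failure e1 <*> Failure e2 = Failure (e1 <> e2)   -- keep both errors
    Failure e1 <*> Success _  = Failure e1
    Success _  <*> Failure e2 = Failure e2
    Success f  <*> Success a  = Success (f a)

-- ap for any Monad on this type would have to short-circuit at the first
-- Failure, so it could never combine two errors the way this <*> does.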
There's been some discussion about this previously on StackOverflow, but I don't think it really got to the meat of the question:
Why should this be a law?
The other laws for functors, applicatives, and monads (identity, associativity, and so on) express fundamental mathematical properties of those structures. We can implement various optimizations using those laws, and we can use them to prove things about our own code. In contrast, it feels to me like the (<*>) === ap law imposes an arbitrary constraint with no corresponding benefit.
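For instance (my own toy example), the Functor composition law, fmap f . fmap g === fmap (f . g), is exactly what justifies fusing two list traversals into one:

twoPasses, onePass :: [Int] -> [Int]
twoPasses = map (* 2) . map (+ 1)    -- walks the list twice
onePass   = map ((* 2) . (+ 1))      -- walks it once; equal by the composition law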
For what it's worth, I'd prefer to ditch the law in favor of something like this:
newtype LeftA m a = LeftA (m a)
instance Monad m => Functor (LeftA m) where
    fmap f (LeftA ma) = LeftA (fmap f ma)
instance Monad m => Applicative (LeftA m) where
    pure = LeftA . return
    LeftA mf <*> LeftA ma = LeftA (do { f <- mf; a <- ma; return (f a) })    -- left-to-right, i.e. ap

newtype RightA m a = RightA (m a)
instance Monad m => Functor (RightA m) where
    fmap f (RightA ma) = RightA (fmap f ma)
instance Monad m => Applicative (RightA m) where
    pure = RightA . return
    RightA mf <*> RightA ma = RightA (do { a <- ma; f <- mf; return (f a) })  -- right-to-left
I think that correctly captures the relationship between the two, without unduly constraining either.
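As a hypothetical usage sketch (effF, effA, runLeft, and runRight are throwaway names of mine), choosing a wrapper chooses the order in which the same two effects run:

effF :: IO (Int -> Int)
effF = putStrLn "effF runs" >> return (+ 1)

effA :: IO Int
effA = putStrLn "effA runs" >> return 2

runLeft :: IO Int    -- prints "effF runs" then "effA runs", yields 3
runLeft = case LeftA effF <*> LeftA effA of LeftA io -> io

runRight :: IO Int   -- prints "effA runs" then "effF runs", yields 3
runRight = case RightA effF <*> RightA effA of RightA io -> io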
So, a few angles to approach the question from:
- Are there any other laws relating Monad and Applicative?
- Is there any inherent mathematical reason for effects to sequence for Applicative in the same way that they do for Monad?
- Does GHC or any other tool perform code transformations that assume/require this law to be true?
- Why is the Functor-Applicative-Monad proposal considered such an overwhelmingly good thing? (Citations would be much appreciated here).
And one bonus question:
- How do Alternative and MonadPlus fit into all this?
Note: major edit to clarify the meat of the question. Answer posted by @duplode quotes an earlier version.