TF uses automatic differentiation, and more specifically reverse-mode automatic differentiation.
There are 3 popular methods to calculate the derivative:
- Numerical differentiation
- Symbolic differentiation
- Automatic differentiation
Numerical differentiation relies on the definition of the derivative: f'(x) ≈ (f(x + h) - f(x)) / h, where you take a very small h and evaluate the function in two places. This is the most basic formula; in practice people use other formulas which give a smaller estimation error. This way of calculating a derivative is suitable mostly when you do not know your function and can only sample it. It also requires a lot of computation for a high-dimensional function.
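As a minimal sketch in plain Python (the function `f` and step size `h` are arbitrary examples, not anything TF-specific), both the basic forward-difference formula and a central-difference variant with a smaller estimation error look like this:

```python
def numerical_grad(f, x, h=1e-5):
    # Forward difference: straight from the definition of the derivative, error on the order of h
    forward = (f(x + h) - f(x)) / h
    # Central difference: one of the "other formulas" with a smaller error, on the order of h**2
    central = (f(x + h) - f(x - h)) / (2 * h)
    return forward, central

f = lambda x: x ** 3
print(numerical_grad(f, 2.0))  # exact derivative is 3 * 2**2 = 12
```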
Symbolic differentiation manipulates mathematical expressions. If you have ever used Matlab or Mathematica, then you have seen something like this.
Here, for every math expression they know the derivative and use various rules (product rule, chain rule) to calculate the derivative of the whole expression. Then they simplify the end result to obtain the final expression.
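A rough sketch of the same idea using sympy in Python (used here purely as an illustration; it says nothing about how Matlab or Mathematica work internally):

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) * x**2
# sympy applies the product rule and chain rule, then simplifies the result
print(sp.diff(expr, x))  # x**2*cos(x) + 2*x*sin(x)
```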
Automatic differentiation manipulates blocks of computer programs. A differentiator has rules for taking the derivative of each element of a program (when you define any op in core TF, you need to register a gradient for this op). It also uses the chain rule to break complex expressions into simpler ones. Here is a good example of how it works in real TF programs with some explanation.
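As a minimal sketch of what this looks like from user code (assuming the TF 2.x eager API with `tf.GradientTape`):

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    # every op recorded here (square, sin, multiply) has a registered gradient
    y = tf.square(x) * tf.sin(x)
# reverse-mode autodiff applies the chain rule backwards through the recorded ops
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # equals 2*x*sin(x) + x**2*cos(x) at x = 3.0
```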
You might think that automatic differentiation is the same as symbolic differentiation (in one place they operate on math expressions, in the other on computer programs). And yes, they are sometimes very similar. But for control-flow statements (`if`, `while`, loops) the results can be very different:
> symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression
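A small sketch of that control-flow point (again with `tf.GradientTape`; `piecewise` is just a made-up example): automatic differentiation differentiates whichever branch is actually executed, instead of trying to fold the whole program into one closed-form expression.

```python
import tensorflow as tf

def piecewise(x):
    # an ordinary Python `if`: only the branch taken at runtime is differentiated
    if x > 0:
        return x * x
    return -x

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = piecewise(x)
print(tape.gradient(y, x).numpy())  # 4.0, i.e. d(x*x)/dx at x = 2
```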