Magic Detach Function
In this post, I won’t talk about how to use the detach method in pytorch. Instead, I will discuss the function inspired by pytorch’s detach method (or the stop_gradient method in tensorflow) from a mathematical perspective. IMHO, this function is more than a small helpful tool in machine learning libraries: it provides infinite possibilities to the world of new math and physics.
Following the convention of pytorch, we also call our function the detach function. It is defined as

$$\operatorname{detach}(x) = x, \qquad \frac{d}{dx}\operatorname{detach}(x) = 0. \tag{1}$$
You may feel ok with such a weird function if you are from the ML community, as it is nothing but stop_gradient in tf or detach in torch. From an implementation perspective, such a function is not surprising at all, either. It means that the corresponding op node doesn’t back propagate the gradients any further, which is a common need in ML practice.
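To make the semantics concrete, here is a minimal sketch of the idea in a toy forward-mode autodiff (the `Dual` class and `detach` helper are hypothetical names for illustration, not pytorch itself): each number carries its value together with its derivative, and `detach` simply zeroes out the derivative channel.

```python
from dataclasses import dataclass

# A minimal forward-mode autodiff sketch (hypothetical toy, not pytorch):
# each Dual carries a value and a derivative w.r.t. the input variable.
@dataclass
class Dual:
    val: float  # function value
    dot: float  # derivative

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def detach(x):
    # The whole trick: keep the value, cut the derivative to zero.
    return Dual(x.val, 0.0)

x = Dual(3.0, 1.0)           # the variable x, seeded at x = 3
y = x * x + detach(x) * x    # d/dx [x^2 + detach(x) * x] = 2x + detach(x)
print(y.val, y.dot)          # 18.0 9.0  (value 9 + 9, derivative 6 + 3)
```

The detached factor contributes to the value as usual but is treated as a constant by differentiation, exactly the behavior definition (1) asks for.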
However, if you are from the math or physics community, you might feel uncomfortable about this function. How could the derivative of $x$ be $0$ instead of $1$? There is no function satisfying definition (1). True, and you also cannot write down a number $i$ with $i^2 = -1$. You may argue that $i$ has its own background and rationale; you will see later that our detach function has these arguments, too.
You may also be curious why we define a function exactly like (1) instead of letting the forward part be $x^2$ or anything you like. This is in the same spirit of why you don’t define some new number $j$ by $j^2 = -4$: every number defined in this way can be re-expressed by a combination of real numbers and $i$ ($j = 2i$ here). Similarly, as we can see below, every “weird” function (whose original value and derivatives don’t match in the common sense) can be expressed by normal functions and $\operatorname{detach}$. Namely, this function is a unit to greatly enlarge the function domain, just like $i$ enlarges the real number domain to the complex one. You might find the analogy between $\operatorname{detach}$ and $i$ very helpful for understanding the story here. (Yes, I think I may happen to have found something as important as $i$!)
Thm1: Any “weird” function, whose value and every order of derivative are specified independently, can always be expressed by normal functions together with the detach function.
Let’s first see some examples to get a taste of how “weird” these “weird” functions can be.
a. $x - \operatorname{detach}(x)$: its value is always 0, but it still has a derivative in terms of $x$, which is $1$.
b. $x\,\operatorname{detach}(x)$: this function is equivalent to $x^2$ in value, but its derivative is $x$ instead of $2x$!
c. $e^{x - \operatorname{detach}(x)}$: this function is always 1, but $f'(x) = 1 \neq 0$!
In a word, one weird thing can generate much weirder stuff!
Now, back to the proof of Thm1. Suppose we define a “weird” function $f$ by specifying each order of derivative as

$$f^{(n)}(x) = g_n^{(n)}(x), \qquad n = 0, 1, 2, \dots$$
In our context, all normal letters are reserved for “normal” functions; the above equation means that the $n$-th order derivative of $f$ agrees in value with the $n$-th order derivative of some normal function $g_n$ (the $n = 0$ order is for the original value of the function).
If we have the ability to construct functions $D_n(x)$, whose derivatives of every order are all zero in value, except that $D_n^{(n)}(x) = g_n^{(n)}(x)$, then the final construction for $f$ is just $f = \sum_n D_n$. Therefore, we only need to show that $D_n$-type functions can be constructed from normal functions and $\operatorname{detach}$.
The strategy is to write down the terms in $D_n$ one by one. Start from the analysis of the $n$-th order derivative: to make sure there is a $g_n^{(n)}$ at this order, we should write down $g_n(x)$ as the first term of $D_n$. Then, check the $(n-1)$-th order derivative. We want to keep the result zero at this order, but we are left with $g_n^{(n-1)}(x)$ from the current term $g_n$. So we need to add $-\operatorname{detach}(g_n^{(n-1)}(x))$ at this order; since $\operatorname{detach}$ kills all further derivatives, integrating it back $(n-1)$ times gives a new term of $D_n$ that leaves the $n$-th order untouched. Now, check equality at the $(n-2)$-th order derivative. We again expect zero, but there are now leftovers from $g_n$ and from the new term. Again, we add detached terms to cancel them at this order, integrate them back, and add them to $D_n$. Following this strategy down to order $0$, the construction of $D_n$ can be done. QED
Update: We don’t need such a complicated construction proof above. A much simpler way:

$$f(x) = \sum_{n=0}^{\infty} \frac{\bigl(x - \operatorname{detach}(x)\bigr)^n}{n!}\,\operatorname{detach}\bigl(g_n^{(n)}(x)\bigr).$$

Since $x - \operatorname{detach}(x)$ is $0$ in value with derivative $1$, the $k$-th derivative of the $n$-th summand vanishes in value unless $k = n$, where it equals $g_n^{(n)}(x)$. And the above formula is straightforward to generalize to the multi-variate case.
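Here is a first-order check of this closed-form construction, truncated to the $n = 0, 1$ terms, in the same hypothetical toy `Dual` setup (which only tracks the first derivative). We prescribe a function whose value is $\sin(x)$ but whose derivative is $\cos(2x)$:

```python
import math
from dataclasses import dataclass

# Hypothetical toy forward-mode autodiff, as in the earlier sketches.
@dataclass
class Dual:
    val: float
    dot: float

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __sub__(self, other):
        return Dual(self.val - other.val, self.dot - other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def detach(x):
    return Dual(x.val, 0.0)

def weird(x):
    # Prescribe value sin(x) but first derivative cos(2x) -- no normal
    # function does this.  First two terms (n = 0, 1) of the sum:
    #   f = detach(g0) + (x - detach(x)) * detach(g1')
    g0  = Dual(math.sin(x.val), 0.0)       # detached desired value
    g1d = Dual(math.cos(2 * x.val), 0.0)   # detached desired derivative
    return g0 + (x - detach(x)) * g1d

x = Dual(0.5, 1.0)
f = weird(x)   # f.val == sin(0.5), f.dot == cos(1.0)
```

The $n = 0$ term supplies the value and contributes nothing to the derivative; the $n = 1$ term does the opposite.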
When $i$ is introduced as a number, the meaning of $=$ has been changed. $a + bi = c + di$ is now two equations for real numbers, i.e. $a = c$ and $b = d$. Similarly, when $\operatorname{detach}$ is introduced as a function, the meaning of $=$ has also been enriched. For functions, $f = g$ now means that $f^{(n)}(x) = g^{(n)}(x)$ for every order $n \geq 0$. In other words, there are two types of equations now: one only holds for the value, and the other one works for each order of derivatives.
For example, $\operatorname{detach}(x) = x$ holds in value, but not at the level of derivatives, since $\operatorname{detach}'(x) = 0 \neq 1$. Therefore, one should be more careful when calculating or proving something with $\operatorname{detach}$ on the stage.
Find a function $f$ such that

$$f(x) = 1 \;\;\text{(in value)}, \qquad f'(x) = f(x)\,\tau'(x) \tag{2}$$

for a given function $\tau$. It would be very hard to find a normal function satisfying (2), but if $\operatorname{detach}$ is allowed, $f$ is just

$$f(x) = e^{\tau(x) - \operatorname{detach}(\tau(x))}. \tag{3}$$

This stuff is not just playing with symbols and making no sense. On the contrary, such a detach function is very easy to implement in a computer program, which brings the power of this formalism to all practical numerical methods! A function that simplifies things a lot analytically and can also be utilized in computational programs is just like a free lunch!
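Assuming the solution $e^{\tau - \operatorname{detach}(\tau)}$ (this is the “MagicBox” operator of the DiCE paper), we can check its two defining properties numerically in the toy `Dual` setup (hypothetical, first derivative only):

```python
import math
from dataclasses import dataclass

# Toy forward-mode autodiff again (hypothetical, for illustration).
@dataclass
class Dual:
    val: float
    dot: float

    def __sub__(self, other):
        return Dual(self.val - other.val, self.dot - other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def detach(x):
    return Dual(x.val, 0.0)

def exp(x):
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

def magic_box(tau):
    # Value 1 (the exponent is 0 in value), derivative f * tau' = tau'.
    return exp(tau - detach(tau))

x = Dual(1.5, 1.0)
tau = x * x                # tau = x^2, so tau' = 2x = 3 at x = 1.5
f = magic_box(tau)         # f.val == 1.0, f.dot == 3.0
```

In value $f$ is the constant 1, yet differentiating it recovers $\tau'$, exactly what (2) demands.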
The above question and its answer have their background in gradient estimation of Monte Carlo expectation values (see this paper if you want to learn more).
We can also get some insight into why the detach function emerges in realistic problems. We again use Monte Carlo as an example. To measure some expectation value, we have ($1/N$ or $1/N_s$ is omitted in all sum notations)

$$\langle O \rangle = \sum_{s \sim p_\theta} O(s). \tag{4}$$
The above value cannot be differentiated with respect to $\theta$ directly. The corrected objective function, which can be automatically differentiated, is $\sum_{s \sim p_\theta} O(s)\,p_\theta(s)/\operatorname{detach}(p_\theta(s))$. How could we understand this without the derivation from (3)? Actually, this objective function is very natural. Say we try to use samples from $p_{\theta'}$ to estimate the expectation under $p_\theta$; we do importance sampling:

$$\langle O \rangle_{p_\theta} = \sum_{s \sim p_{\theta'}} O(s)\,\frac{p_\theta(s)}{p_{\theta'}(s)}. \tag{5}$$
This is the same objective function as the one ready for AD! If we approach the derivative limit, $\theta'$ then approaches $\theta$, and the derivative applies only to the numerator (that’s the only place left for $\theta$): the rhs of (5) is just the origin of the $\operatorname{detach}$ in the denominator. And this also justifies why we need a “weird” function such as detach.
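The surrogate objective $O(s)\,p_\theta(s)/\operatorname{detach}(p_\theta(s))$ can be checked on a toy Bernoulli distribution, taking the exact expectation over both outcomes instead of sampling (again in the hypothetical `Dual` setup; the derivative channel now tracks $d/d\theta$):

```python
from dataclasses import dataclass

# Toy forward-mode autodiff (hypothetical); the derivative is w.r.t. theta.
@dataclass
class Dual:
    val: float
    dot: float

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __sub__(self, other):
        return Dual(self.val - other.val, self.dot - other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    def __truediv__(self, other):
        return Dual(self.val / other.val,
                    (self.dot * other.val - self.val * other.dot)
                    / (other.val * other.val))

def detach(p):
    return Dual(p.val, 0.0)

theta = Dual(0.3, 1.0)                       # Bernoulli parameter, seeded
probs = {1: theta, 0: Dual(1.0, 0.0) - theta}
O = {1: 2.0, 0: 5.0}                         # observable value per outcome

# Exact expectation of the per-sample surrogate O(s) * p(s) / detach(p(s)):
# weight each outcome by its (constant) sampling probability.
loss = Dual(0.0, 0.0)
for s, p in probs.items():
    surrogate = Dual(O[s], 0.0) * (p / detach(p))
    loss = loss + Dual(p.val * surrogate.val, p.val * surrogate.dot)

# loss.val == <O> == 0.3*2 + 0.7*5 == 4.1
# loss.dot == d<O>/dtheta == O(1) - O(0) == -3  (score-function gradient)
```

The surrogate’s value is the plain Monte Carlo estimate, while its derivative reproduces the analytic gradient $d\langle O\rangle/d\theta = O(1) - O(0)$.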
The first thing that comes to my mind is to generalize the whole formalism to multivariate functions. What does the unit function look like in that case? Or is our detach function here also enough for the construction of “weird” multivariate functions?
Update: I have solved this problem, and the detach function here is indeed enough to define every multivariate function with every order of derivatives customized (the construction proof is very similar to the completeness proof above). This again shows the great expressive power of $\operatorname{detach}$.
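A first-order multivariate instance can be checked with a two-variable version of the toy setup (hypothetical, gradient only). Here we build a function whose value is $x + y$ but whose gradient is $(y, x)$ instead of $(1, 1)$:

```python
from dataclasses import dataclass

# Two-variable toy forward-mode autodiff (hypothetical):
# each value carries a gradient (d/dx, d/dy).
@dataclass
class Dual2:
    val: float
    grad: tuple

    def __add__(self, other):
        return Dual2(self.val + other.val,
                     (self.grad[0] + other.grad[0],
                      self.grad[1] + other.grad[1]))

    def __sub__(self, other):
        return Dual2(self.val - other.val,
                     (self.grad[0] - other.grad[0],
                      self.grad[1] - other.grad[1]))

    def __mul__(self, other):
        return Dual2(self.val * other.val,
                     (self.val * other.grad[0] + self.grad[0] * other.val,
                      self.val * other.grad[1] + self.grad[1] * other.val))

def detach(x):
    return Dual2(x.val, (0.0, 0.0))

x = Dual2(2.0, (1.0, 0.0))
y = Dual2(3.0, (0.0, 1.0))

# Weird multivariate function: value x + y, gradient (y, x) instead of (1, 1).
f = detach(x + y) + (x - detach(x)) * detach(y) + (y - detach(y)) * detach(x)
# f.val == 5.0, f.grad == (3.0, 2.0)
```

The same single-variable detach function suffices: each coordinate’s “weirdness” is handled by its own $(x_i - \operatorname{detach}(x_i))$ factor.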
Besides, how can this detach function be utilized in broader fields of modern math and science? Can such a function drastically simplify some involved formalisms, such as higher-order Feynman diagrams in quantum field theory? I believe there are many exciting directions to explore.