# [2021.06.30] Computational Graphs

Everyone knows that neural networks are computational graphs. They
draw them everywhere (see, for example,
https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#computational-graph).
One from the ``pytorch`` documentation is particularly interesting
because it has a node representing subtraction. If one reflects this
graph in a mirror, one gets an isomorphic graph. But the function
behind it changes, as does its gradient, since subtraction is not
invariant under a permutation of its arguments.
But graphs, in the general case, know nothing about the order of
their edges. Of course, one can add weights to the edges, or some
labels, or whatever. But if one says nothing beyond DAG (directed
acyclic graph), no edge ordering is presumed. And that is exactly
what happens in the deep learning world: people talk about DAGs yet
silently assume some order. Of course, if one stores a graph as a
root node equipped with a _list_ of adjacent nodes (each having the
same structure, recursively), there is no such problem. But that is
more than a bare graph. If one instead reformats the same graph into
a list of _edges_ (an edge being a pair of nodes in this case), the
information about the order of edges leaving the same node is lost.
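The point can be made concrete with a toy graph. Below is a minimal
sketch (not any real framework's API): nodes keep an ordered _list_
of children, so `a - b` and its mirror `b - a` are distinct; but once
the same graphs are flattened into sets of edges, they become
indistinguishable.

```python
# A tiny ordered computational graph: each node keeps a *list* of
# children, so argument order is preserved (hypothetical sketch).
class Node:
    def __init__(self, name, value=None, children=()):
        self.name = name
        self.value = value
        self.children = list(children)

    def eval(self):
        if self.name == 'sub':
            left, right = self.children  # order matters here
            return left.eval() - right.eval()
        return self.value  # leaf: stored constant


a = Node('a', 5)
b = Node('b', 3)
g = Node('sub', children=[a, b])       # represents a - b
mirror = Node('sub', children=[b, a])  # the mirror image: b - a

assert g.eval() == 2 and mirror.eval() == -2  # different functions


# Flatten a graph to a set of (parent, child) edges: the order of
# children is gone, and the two graphs become indistinguishable.
def edge_set(node):
    result = set()
    for child in node.children:
        result.add((node.name, child.name))
        result |= edge_set(child)
    return result


assert edge_set(g) == edge_set(mirror)  # same graph, different function
```

The recursive `children` list is precisely the "more than a simple
graph" structure mentioned above: dropping it down to a plain edge
set is what destroys the function.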

Within the same field, so-called graph neural networks often store
graphs as lists of edges, so applying them to the infamous
'computational graphs' becomes awkward. For example, to represent a
logical formula as a graph, one needs additional nodes or edge labels
to encode the ordering of a predicate's arguments. It's funny how
hyped things, which everyone tries to 'democratise', spawn such
profanations. Amid today's tremendous abundance of scientific facts,
most of us have little choice but to think in stereotypes.
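One common workaround (a sketch of the idea, not a prescribed
standard; adding extra argument-position nodes works similarly) is to
attach the argument position as an edge label, turning the plain edge
list into triples:

```python
# A predicate applied to two arguments, say P(x, y).
# Plain edge list: cannot distinguish P(x, y) from P(y, x),
# because as *sets* of edges the two are identical.
plain = [("P", "x"), ("P", "y")]
plain_mirrored = [("P", "y"), ("P", "x")]
assert set(plain) == set(plain_mirrored)

# Labelled edge list: a (source, target, position) triple restores
# the ordering, so the two formulas are now distinguishable.
labelled = [("P", "x", 0), ("P", "y", 1)]
labelled_mirrored = [("P", "y", 0), ("P", "x", 1)]
assert set(labelled) != set(labelled_mirrored)
```

This is the extra bookkeeping the edge-list representation forces on
anyone who actually cares which argument is which.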