This chapter is taken from the paper [Y. Bruned, M. Hairer, L. Zambotti, Invent. Math. 215 (2019), no. 3, 1039–1156].
A rooted tree is a finite connected simple graph without cycles, with a distinguished vertex, called the root. We assume that our trees are combinatorial, i.e. there is no particular order imposed on the edges leaving any given vertex.
Vertices of are also called nodes. The set of nodes of the tree is denoted by , and the set of edges of the tree is denoted by . We endow with the partial order in which iff lies on the unique path connecting to the root, and we orient the edges in so that if , then . In this way, we can always view a tree as a directed graph.
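Since the symbols in the definition above were lost in extraction, here is a small illustrative sketch (the names and the parent-map representation are ours, not the notes'): a rooted tree stored as a map from each node to its parent, together with the partial order in which a node precedes another iff it lies on the path from that node to the root.

```python
def leq(parent, v, w):
    """True iff v lies on the unique path from w to the root."""
    while True:
        if w == v:
            return True
        if parent[w] is None:  # reached the root without meeting v
            return False
        w = parent[w]

# Tree with root -> a -> b and root -> c (edges oriented away from the root).
parent = {"root": None, "a": "root", "b": "a", "c": "root"}
print(leq(parent, "root", "b"))  # the root is minimal for this order
print(leq(parent, "a", "c"))     # a and c are incomparable
```

In this convention the root is the minimal element; reversing the inequality gives the opposite convention, which some authors prefer.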
Given a sequence of vector spaces , denote by
called a bigraded space. Given two bigraded spaces and , define
is s.t. unless (i.e. and ). Given two bigraded spaces and , we define s.t.
can be viewed as a bigraded space with
Let . Define
. Denote by the free vector space generated by . We are given a (finite) collection of subforests of s.t. . Define by
Define the linear functional by . If is equal to the collection of all subforests of containing , then is a coalgebra. Since inclusion endows the set of typed forests with a partial order, is an example of an incidence coalgebra.
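As a toy analogue of an incidence-type coproduct (our own illustration, not the forest coproduct of the notes), take the poset of subsets of a finite set under inclusion, with coproduct splitting a set into a subset and its complement; coassociativity can then be checked by brute force.

```python
from itertools import combinations
from collections import Counter

def delta(s):
    """Coproduct on the subset coalgebra: all splittings (A, s \\ A)."""
    s = frozenset(s)
    return [(frozenset(a), s - frozenset(a))
            for r in range(len(s) + 1)
            for a in combinations(sorted(s), r)]

def key(parts):
    return tuple(tuple(sorted(p)) for p in parts)

s = {1, 2, 3}
# (delta ⊗ id) delta versus (id ⊗ delta) delta, compared as multisets:
left = Counter(key((a1, a2, b)) for a, b in delta(s) for a1, a2 in delta(a))
right = Counter(key((a, b1, b2)) for a, b in delta(s) for b1, b2 in delta(b))
assert left == right
print("coassociativity holds on", sorted(s))
```

The counit here sends the empty set to 1 and everything else to 0, mirroring the linear functional defined above.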
Given a typed forest , consider several disjoint subforests , , of . A natural way to code is to use a coloured forest where
Then we have for and .
Assumption 1. Let . For each coloured forest , we are given a collection of subforests of s.t.
and, ∀ ,
for every connected component of , one has either or .
We also assume that is compatible with the forest isomorphisms described above, in the sense that
In this chapter, we consider the equation in the full sub-critical regime. The equation is formally given by
where the term represents the renormalisation (it can be quadratic in ) and the noise for some . This chapter is a note taken from the paper [A. Chandra, A. Moinat, H. Weber, Arch. Ration. Mech. Anal. 247 (2023), no. 3, Paper No. 48].
We introduce the following ingredients.
An operator to represent an abstract Duhamel operator for the heat equation.
Let be the smallest set containing the symbols s.t. “”.
. We use to indicate elements of . We call the elements of trees. The occurrences of in a given tree are called the leaves of that tree.
.
Define functions which count, on any given tree, the number of occurrences of , and as leaves in the tree. Denote also by .
.
, , , .
,
consists of s.t. and .
.
.
.
.
This note is extended from a course called MAGIC 109 given by Dr Y. Bazkov, adding some material from the following resources:
M. E. Sweedler. Hopf algebras, Math. Lecture Note Ser. W. A. Benjamin, Inc., New York, 1969, vii+336 pp.
We intend to study the intrinsic geometric properties of the real line by looking at how functions on the real line transform. For a more algebraic setting, we only consider polynomials in a real variable. Let us look at the following transforms.
Given a real number and a function on the real line, we define the translation by :
Define also the reflection
and the derivative
The translation, reflection, and derivative are not independent of each other. In fact, there are some relations:
(A.1.1)
(A.1.2)
(A.1.3)
Proof of (A.1.1).
Also,
(A.1.4)
(A.1.5)
(A.1.6)
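The displayed formulas above did not survive extraction, so the following sketch assumes the common conventions (T_a f)(x) = f(x + a), (R f)(x) = f(-x), and D f = f', and checks relations of the expected type (R T_a = T_{-a} R, D T_a = T_a D, D R = -R D) on polynomials stored as coefficient lists:

```python
from math import comb

def T(a, c):
    """Translate: coefficients of f(x + a), where c[i] = coeff of x^i."""
    out = [0] * len(c)
    for i, ci in enumerate(c):
        for k in range(i + 1):
            out[k] += ci * comb(i, k) * a ** (i - k)
    return out

def R(c):
    """Reflect: coefficients of f(-x)."""
    return [ci * (-1) ** i for i, ci in enumerate(c)]

def D(c):
    """Differentiate: coefficients of f'."""
    return [i * c[i] for i in range(1, len(c))] or [0]

f = [1, 3, 0, 5]                          # f(x) = 1 + 3x + 5x^3
assert R(T(2, f)) == T(-2, R(f))          # R T_a = T_{-a} R
assert D(T(2, f)) == T(2, D(f))           # D T_a = T_a D
assert D(R(f)) == [-x for x in R(D(f))]   # D R = -R D
```

With the opposite translation convention (T_a f)(x) = f(x - a), the same relations hold with the sign of a adjusted accordingly.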
Now we write down the relations (A.1.1)-(A.1.2)-(A.1.3)-(A.1.4)-(A.1.5)-(A.1.6) for , , and , so that these transformations live their own lives without mention of the functions they act on. But the algebraic structure given by (A.1.1)-(A.1.2)-(A.1.3)-(A.1.4)-(A.1.5)-(A.1.6) is too “blank”: it does not contain enough information for us to work effectively with these abstract symbols as symmetries of the function space on the real line. What is missing here is how the operators , , and behave when we apply them to a product of functions. Given functions and ,
(A.1.7)
To get rid of the mention of and in (A.1.7), we introduce a new symbolic notation, the coproduct , satisfying
Likewise, we also require
and the Leibniz rule
The meaning of will become clear later in this note.
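A sketch of the coproduct rules in action (assuming, as is standard for these operators, that reflection is "group-like", Δ(R) = R ⊗ R, while the derivative is "primitive", Δ(D) = D ⊗ 1 + 1 ⊗ D): applied to a product of polynomials, these rules reproduce multiplicativity and the Leibniz rule.

```python
def mul(p, q):
    """Multiply polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def R(c):
    """Reflect: coefficients of f(-x)."""
    return [ci * (-1) ** i for i, ci in enumerate(c)]

def D(c):
    """Differentiate: coefficients of f'."""
    return [i * c[i] for i in range(1, len(c))] or [0]

f, g = [1, 2], [0, 0, 3]          # f = 1 + 2x,  g = 3x^2
# Group-like: R(fg) = (Rf)(Rg), matching Delta(R) = R ⊗ R.
assert R(mul(f, g)) == mul(R(f), R(g))
# Primitive: D(fg) = (Df)g + f(Dg), matching Delta(D) = D ⊗ 1 + 1 ⊗ D.
lhs = D(mul(f, g))
rhs = [a + b for a, b in zip(mul(D(f), g), mul(f, D(g)))]
assert lhs == rhs
```

Translation is group-like as well, Δ(T_a) = T_a ⊗ T_a, since translating a product translates each factor.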
A Hopf algebra , roughly speaking, is a collection of operators which comes with
rules for multiplication (i.e., how to compose operators);
rules for coproduct (i.e., how an operator acts on a product of functions);
rules for the antipode (roughly, an analogue of the inverse in a group).
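To make the three ingredients concrete, here is a toy check (our own example, not from the notes): the group algebra of the two-element group generated by a reflection R, with R² = 1, coproduct Δ(R) = R ⊗ R, counit ε(R) = 1, and antipode S(R) = R = R⁻¹. We verify the antipode axiom m(S ⊗ id)Δ = η ε on the basis elements.

```python
from collections import Counter

# Basis: group elements "1" and "R", with R * R = 1.
def gmul(x, y):
    return "1" if x == y else "R"

def S(x):        # antipode: the inverse in the group (R is its own inverse)
    return x

def eps(x):      # counit: sends every group element to 1
    return 1

def delta(x):    # coproduct: group elements are group-like, x -> x ⊗ x
    return [(x, x)]

# Antipode axiom m(S ⊗ id)Delta(x) = eps(x) * 1 on each basis element:
for x in ["1", "R"]:
    lhs = Counter()
    for a, b in delta(x):
        lhs[gmul(S(a), b)] += 1
    assert lhs == Counter({"1": eps(x)})
```

Any group algebra is a Hopf algebra in exactly this way, with every group element group-like and the antipode given by inversion.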
In the example we study in Subsection A.1.1, we say that the operators , , and generate some Hopf algebra. However, these operators have been known since the 19th century, so what is the point of renaming them? Hopf algebras gain an advantage over groups when we need symmetries of a noncommutative space. The noncommuting objects upon which we want to act can be thought of as “functions” on a mythical “quantum group”.
We consider vector spaces over a field (sometimes we might use or ). Recall that a set is linearly independent if for any , , implies . Given a subset of , say , the span of is defined by . A basis of is a linearly independent subset of such that . Using Zorn's lemma, one can prove that every vector space has a basis. All bases of have the same cardinality, which is denoted by . We can also argue that every set is a basis of some vector space. Indeed, let be a set, and let
Moreover, we can identify in with an element of , namely , and one can check that under this identification the set becomes a basis of . This fits a philosophy we adhere to in this note: everything is a vector space; every map is a linear map.
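A minimal sketch of the free vector space on a set (the dict representation is ours): its elements are finitely supported scalar functions on the set, and each element of the set is identified with its indicator function.

```python
def basis(x):
    """Identify x with the function that is 1 at x and 0 elsewhere."""
    return {x: 1}

def add(u, v):
    """Pointwise sum of finitely supported functions, dropping zeros."""
    out = dict(u)
    for x, c in v.items():
        out[x] = out.get(x, 0) + c
    return {x: c for x, c in out.items() if c != 0}

def scale(t, u):
    """Multiply a finitely supported function by the scalar t."""
    return {x: t * c for x, c in u.items() if t * c != 0}

# The formal combination 2*apple - banana in the free vector space
# on the set of fruit names:
u = add(scale(2, basis("apple")), scale(-1, basis("banana")))
assert u == {"apple": 2, "banana": -1}
```

The indicator functions are linearly independent and span everything of finite support, which is exactly the basis claim made above.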
It is time to state the first result in this note, a result that we will use later.
Proposition
We will need a number of constructions which allow us to create new vector spaces from existing ones. For example, the direct product and direct sum of vector spaces mean
and
respectively. Let us note that for any , is a subspace of . Let us also remark that the direct product and direct sum can be defined for an uncountable family of vector spaces. However, we only consider countable families in this note.