Indefinite sum
Inverse of a finite difference
In the calculus of finite differences, the indefinite sum operator (also known as the antidifference operator), denoted by $\sum_x$ or $\Delta^{-1}$,[1][2] is the linear operator that is the inverse of the forward difference operator $\Delta$:

$$F(x) = \sum_x f(x) = \Delta^{-1} f(x) \quad\Longleftrightarrow\quad \Delta F(x) = f(x).$$

It relates to the forward difference operator as the indefinite integral relates to the derivative. In the same way that the indefinite integral solves for the family of functions that differentiate to $f$, the indefinite sum solves for the family of functions that have $f$ as their forward difference.

If $F(x) = \sum_x f(x)$, then $F$ satisfies the functional equation

$$F(x+1) - F(x) = f(x).$$

Applying the forward difference operator to an indefinite sum returns the original function:[3]

$$\Delta \sum_x f(x) = f(x).$$
In the notation $\sum_x f(x)$, the variable $x$ plays the same role as the index variable in a discrete sum; it indicates the argument at which the antidifference is to be evaluated. The subscript $x$ acts as a placeholder, analogous to the $dx$ in $\int f(x)\,dx$, and specifies that the antidifference is a function of $x$.
The solution is not unique: if $F(x)$ is one solution, then for any 1-periodic function $C(x)$ (i.e., $C(x+1) = C(x)$ for all $x$), the function $F(x) + C(x)$ is also a solution. Therefore, an indefinite sum is unique up to a 1-periodic function, instead of up to a constant as the indefinite integral is.
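A quick numerical check makes this non-uniqueness concrete (a minimal sketch; the function names are illustrative, not from any library):

```python
import math

# f(x) = x has the antidifference F(x) = x*(x-1)/2, since F(x+1) - F(x) = x.
f = lambda x: x
F = lambda x: x * (x - 1) / 2

# G differs from F by the 1-periodic function sin(2*pi*x), so G is also a solution.
G = lambda x: F(x) + math.sin(2 * math.pi * x)

for x in [0.0, 0.5, 1.25, 3.7]:
    assert math.isclose(F(x + 1) - F(x), f(x), abs_tol=1e-12)
    assert math.isclose(G(x + 1) - G(x), f(x), abs_tol=1e-12)
```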
To obtain a solution that is unique up to a constant, one must impose additional constraints. The Nørlund principal solution is the unique analytic solution of minimal possible exponential type, which filters out any non-constant periodic component.[4]
Forward and backward difference conventions

The inverse forward difference operator, $\Delta^{-1}$, extends the summation up to $x - 1$, typically starting the iterator at $k = 0$:

$$\Delta^{-1} f(x) = \sum_{k=0}^{x-1} f(k) + C(x).$$

Some authors analytically extend a summation whose upper limit is the argument $x$ itself, without a shift, typically starting the iterator at $k = 1$:[5][6][7]

$$\nabla^{-1} f(x) = \sum_{k=1}^{x} f(k) + C(x).$$

In this case, the analytic continuation $F(x)$ of the sum is a solution of the backward difference equation $\nabla F(x) = f(x)$. Stated explicitly, that is:

$$F(x) - F(x-1) = f(x),$$

which follows from the discrete counterpart:

$$\sum_{k=1}^{n} f(k) - \sum_{k=1}^{n-1} f(k) = f(n).$$

Some authors use the equivalent form called the telescoping equation:[8]

$$\sum_{k=a}^{b} \bigl( F(k) - F(k-1) \bigr) = F(b) - F(a-1).$$
The lower bound of the discrete analog, for both the inverse forward difference and the inverse backward difference, can be an arbitrary constant other than the values listed here, since any change of lower bound is absorbed into the 1-periodic or constant term $C(x)$.
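A small sketch contrasting the two conventions for $f(k) = k$ (the helper names `delta_inv` and `nabla_inv` are illustrative):

```python
# Inverse forward difference: sum up to x-1, so F(x+1) - F(x) = f(x).
def delta_inv(f, x):          # Delta^{-1} f(x) = sum_{k=0}^{x-1} f(k)
    return sum(f(k) for k in range(x))

# Inverse backward difference: sum up to x itself, so F(x) - F(x-1) = f(x).
def nabla_inv(f, x):          # nabla^{-1} f(x) = sum_{k=1}^{x} f(k)
    return sum(f(k) for k in range(1, x + 1))

f = lambda k: k
for x in range(1, 6):
    assert delta_inv(f, x + 1) - delta_inv(f, x) == f(x)   # forward equation
    assert nabla_inv(f, x) - nabla_inv(f, x - 1) == f(x)   # backward equation
print(delta_inv(f, 5), nabla_inv(f, 5))  # 10 = 5*4/2, 15 = 5*6/2
```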
Fundamental theorem of the calculus of finite differences
Indefinite sums can be used to calculate definite sums with the formula:[9]

$$\sum_{k=a}^{b} f(k) = \Delta^{-1} f(b+1) - \Delta^{-1} f(a).$$

Alternatively, using the inverse backward difference operator, the relation is:

$$\sum_{k=a}^{b} f(k) = \nabla^{-1} f(b) - \nabla^{-1} f(a-1).$$
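A numerical sanity check of the fundamental theorem, using $f(k) = k$ with the antidifference $F(x) = x(x-1)/2$ (a sketch; names are illustrative):

```python
f = lambda k: k
F = lambda x: x * (x - 1) // 2     # an antidifference of f: F(x+1) - F(x) = x

a, b = 3, 10
direct = sum(f(k) for k in range(a, b + 1))
via_antidifference = F(b + 1) - F(a)
assert direct == via_antidifference == 52
```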
Examples
The following basic indefinite sums follow from the fundamental properties of the difference operator, where $C(x)$ represents an arbitrary 1-periodic function (or a constant if the Nørlund principal solution is assumed); a numerical check follows the list:[10]
- Constant: $\Delta^{-1} c = c\,x + C(x)$
- Exponential: $\Delta^{-1} a^x = \dfrac{a^x}{a - 1} + C(x), \qquad a \neq 1$
- Logarithm: $\Delta^{-1} \ln x = \ln \Gamma(x) + C(x)$
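The exponential and logarithm entries can be verified numerically (a sketch using `math.lgamma`; the tolerance-based comparison is only for floating point):

```python
import math

a = 3.0
F_exp = lambda x: a**x / (a - 1)    # antidifference of a^x
F_log = math.lgamma                 # antidifference of ln(x) is ln Gamma(x)

for x in [1.0, 2.5, 7.0]:
    assert math.isclose(F_exp(x + 1) - F_exp(x), a**x)
    assert math.isclose(F_log(x + 1) - F_log(x), math.log(x), abs_tol=1e-12)
```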
Falling factorials
Falling factorials provide the discrete analog of the power rule from differential calculus. In infinitesimal calculus, $\frac{d}{dx} x^n = n x^{n-1}$. In the calculus of finite differences, the falling factorial

$$(x)_n = x (x-1) (x-2) \cdots (x-n+1)$$

plays the role of $x^n$, and the forward difference operator satisfies

$$\Delta (x)_n = n\,(x)_{n-1}.$$

The indefinite sum of a falling factorial is given by the discrete analog of the power rule for integration:

$$\Delta^{-1} (x)_n = \frac{(x)_{n+1}}{n+1} + C(x), \qquad n \neq -1.$$

Equivalently, using the Gamma function:

$$(x)_n = \frac{\Gamma(x+1)}{\Gamma(x-n+1)}.$$

For the case $n = -1$, where $(x)_{-1} = \frac{1}{x+1}$, the solution is the digamma function with a shift, $\Delta^{-1} \frac{1}{x+1} = \psi(x+1) + C(x)$, which naturally extends the harmonic numbers.
Example: Sum of the first $n$ squares. Using $x^2 = (x)_2 + (x)_1$ and the indefinite sum formula above,

$$\Delta^{-1} x^2 = \frac{(x)_3}{3} + \frac{(x)_2}{2} + C(x).$$

Applying the fundamental theorem of the calculus of finite differences,

$$\sum_{k=1}^{n} k^2 = \left[ \frac{(k)_3}{3} + \frac{(k)_2}{2} \right]_{k=1}^{k=n+1} = \frac{(n+1)_3}{3} + \frac{(n+1)_2}{2}.$$

Expanding the falling factorials,

$$\frac{(n+1)\,n\,(n-1)}{3} + \frac{(n+1)\,n}{2},$$

and simplifying yields the formula

$$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}.$$
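The derivation can be checked end to end (a sketch; `falling` is an illustrative helper):

```python
def falling(x, n):
    """Falling factorial (x)_n = x (x-1) ... (x-n+1)."""
    out = 1
    for i in range(n):
        out *= x - i
    return out

def F(x):
    # Antidifference of x^2 = (x)_2 + (x)_1, namely (x)_3 / 3 + (x)_2 / 2.
    return falling(x, 3) / 3 + falling(x, 2) / 2

for n in range(1, 20):
    assert F(n + 1) - F(1) == sum(k * k for k in range(1, n + 1))
    assert F(n + 1) - F(1) == n * (n + 1) * (2 * n + 1) / 6
```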
Summation by parts
Indefinite summation by parts is the discrete analog of integration by parts. It is derived from the product rule for the forward difference operator.
Product rule. For two functions $f$ and $g$, the product rule for the forward difference is:

$$\Delta\bigl( f(x)\,g(x) \bigr) = f(x+1)\,\Delta g(x) + g(x)\,\Delta f(x).$$

Introducing the shift operator $E$, defined by $E f(x) = f(x+1)$, this can be written more compactly as:

$$\Delta (f g) = (E f)\,\Delta g + g\,\Delta f.$$

Summation by parts. Rearranging the product rule (with the roles of $f$ and $g$ interchanged) gives:

$$f(x)\,\Delta g(x) = \Delta\bigl( f(x)\,g(x) \bigr) - g(x+1)\,\Delta f(x).$$

Taking the indefinite sum of both sides and using the fact that $\Delta^{-1} \Delta h(x) = h(x) + C(x)$ (where $C(x)$ is an arbitrary 1-periodic function) yields the formula for summation by parts:[11][10]

$$\sum_x f(x)\,\Delta g(x) = f(x)\,g(x) - \sum_x g(x+1)\,\Delta f(x) + C(x).$$

A symmetrical form, also obtained from the product rule, is:

$$\sum_x \bigl( f(x)\,\Delta g(x) + g(x)\,\Delta f(x) \bigr) = f(x)\,g(x) - \sum_x \Delta f(x)\,\Delta g(x) + C(x).$$

Definite summation by parts. For definite sums from $a$ to $b$, the formula becomes:

$$\sum_{k=a}^{b} f(k)\,\Delta g(k) = \bigl[ f(k)\,g(k) \bigr]_{k=a}^{k=b+1} - \sum_{k=a}^{b} g(k+1)\,\Delta f(k).$$
Example: product of a polynomial and an exponential[12]
Summation by parts is effective for functions like $x\,2^x$. To find the indefinite sum $\sum_x x\,2^x$, let $f(x) = x$ and $\Delta g(x) = 2^x$. Then:

$$g(x) = \Delta^{-1} 2^x = 2^x, \qquad \Delta f(x) = 1.$$

Applying the summation by parts formula:

$$\sum_x x\,2^x = x\,2^x - \sum_x 2^{x+1} + C(x).$$

The remaining sum is elementary:

$$\sum_x 2^{x+1} = 2\,\Delta^{-1} 2^x = 2^{x+1} + C(x).$$

Hence the indefinite sum (antidifference) is

$$F(x) = \Delta^{-1}\bigl( x\,2^x \bigr) = (x - 2)\,2^x + C(x).$$

To evaluate the definite sum from $k = 0$ to $k = n$, we use the fundamental theorem with the forward difference inverse:

$$\sum_{k=0}^{n} k\,2^k = F(n+1) - F(0).$$

Substituting the expression for $F$:

$$\sum_{k=0}^{n} k\,2^k = (n - 1)\,2^{n+1} - (-2)\,2^0.$$

Thus, for any non-negative integer $n$,

$$\sum_{k=0}^{n} k\,2^k = (n - 1)\,2^{n+1} + 2.$$
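The closed form agrees with the direct sum (a sketch in exact integer arithmetic):

```python
F = lambda x: (x - 2) * 2**x          # antidifference of x * 2^x

for n in range(0, 15):
    direct = sum(k * 2**k for k in range(n + 1))
    assert direct == F(n + 1) - F(0) == (n - 1) * 2**(n + 1) + 2
```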
Uniqueness of the principal solution
The functional equation $F(x+1) - F(x) = f(x)$ does not have a unique solution. If $F(x)$ is a particular solution, then for any function $C(x)$ satisfying $C(x+1) = C(x)$ (i.e., any 1-periodic function), the function $F(x) + C(x)$ is also a solution. Therefore, the indefinite sum operator defines a family of functions differing by an arbitrary 1-periodic component $C(x)$.

To select the unique principal solution (German: Hauptlösung)[4] up to an additive constant (instead of up to the additive 1-periodic function $C(x)$), one must impose additional constraints.
Complex analysis (exponential type)
Following the theory developed by Niels Erik Nørlund,[4] the indefinite sum can be uniquely determined for analytic functions by imposing a restriction on their growth in the complex plane. Specifically, by imposing minimal growth, the non-constant periodic terms can be filtered out.
Suppose $f$ is analytic in a vertical strip containing the real axis, and let $F$ be an analytic solution of $F(x+1) - F(x) = f(x)$ in that strip. To ensure uniqueness, require $F$ to be of minimal growth, specifically of exponential type less than $2\pi$ in the imaginary direction. That is, there exist constants $M > 0$ and $\tau < 2\pi$ such that $|F(x+iy)| \le M e^{\tau |y|}$ as $|y| \to \infty$.[13][14]

Let $F_1$ and $F_2$ be two analytic solutions satisfying this growth condition. Their difference $P(x) = F_1(x) - F_2(x)$ is then analytic, 1-periodic (i.e., $P(x+1) = P(x)$), and inherits the same exponential type less than $2\pi$.

Nørlund uses a fundamental result in complex analysis (related to Carlson's theorem, the Phragmén–Lindelöf principle, and the Paley–Wiener theorem) which states that a non-constant 1-periodic entire function must have exponential type at least $2\pi$.[4] This follows from its Fourier series expansion: if $P$ is non-constant, its Fourier series contains a term $c_n e^{2\pi i n x}$ with $n \neq 0$, which has type $2\pi |n| \ge 2\pi$. Since $P$ has type strictly less than $2\pi$, it cannot contain any such term and therefore must be constant.
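A one-line computation makes the type of each Fourier mode explicit (restating the bound used above):

$$P(x) = \sum_{n=-\infty}^{\infty} c_n e^{2\pi i n x}, \qquad \bigl| c_n e^{2\pi i n (u + iv)} \bigr| = |c_n|\, e^{-2\pi n v},$$

so along the imaginary direction the mode of index $n$ grows like $e^{2\pi |n|\,|v|}$ (as $v \to -\infty$ for $n > 0$, and as $v \to +\infty$ for $n < 0$), giving exponential type $2\pi |n| \ge 2\pi$ whenever $n \neq 0$.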
The condition of exponential type less than $2\pi$ in the imaginary direction is sufficient but not strictly necessary. Nørlund's general definition of the principal solution is the analytic solution having the minimal possible exponential type for the given $f$.[4] If $f$ has exponential type $\tau$ in the imaginary direction, then the principal solution will also have type $\tau$ in that strip, provided it converges. For example, $f(x) = \sin(3\pi x)$ has exponential type $3\pi$; its principal solution $-\tfrac{1}{2}\sin(3\pi x)$ exists and has type $3\pi$, even though $3\pi > 2\pi$.

When $f$ has exponential type exactly $2\pi k$ for some non-zero integer $k$ in every strip where it is analytic (e.g. $e^{2\pi i x}$ has type $2\pi$; its formal antidifference $\frac{e^{2\pi i x}}{e^{2\pi i} - 1}$ contains $e^{2\pi i} - 1 = 0$ in the denominator), the principal solution fails to exist (or is undefined everywhere) because it resonates with the kernel of the difference operator. In all other cases (i.e., when $f$ is meromorphic and its exponential type in some vertical strip is not a non-zero integer multiple of $2\pi$), the principal solution exists and is uniquely determined by minimal exponential type.
Real analysis (higher‑order convexity)
In real analysis, the uniqueness condition can be given using higher-order convexity, generalizing the Bohr–Mollerup theorem. For an integer $p \ge 0$, a function is called $p$-convex if its divided differences of order $p+1$ are non-negative, and $p$-concave if those divided differences are non-positive. A function is called eventually $p$-convex (resp. eventually $p$-concave) if there exists $x_0$ such that it is $p$-convex (resp. $p$-concave) on the interval $[x_0, \infty)$.

Marichal and Zenaïdi proved the following uniqueness theorem, their method requiring the solution to be eventually $p$-convex or $p$-concave.[15][16]

Theorem. Let $p \ge 0$ be an integer and let $f \colon (0, \infty) \to \mathbb{R}$ satisfy $\lim_{x \to \infty} \Delta^p f(x) = 0$. If $F$ is an eventually $p$-convex or eventually $p$-concave solution of $\Delta F = f$, then $F$ is uniquely determined up to an additive constant. Moreover, for any $x > 0$,

$$F(x) - F(1) = \lim_{n \to \infty} \left( \sum_{k=1}^{n-1} f(k) - \sum_{k=0}^{n-1} f(x+k) + \sum_{j=1}^{p} \binom{x}{j} \Delta^{j-1} f(n) \right),$$

and the convergence is uniform on bounded subsets of $(0, \infty)$.
Müller–Schleicher axiomatic method
In their paper How to Add a Noninteger Number of Terms,[5] Müller and Schleicher introduced an axiomatic approach to fractional summation with a real or complex number of terms. Their method extends the classical discrete sum

$$\sum_{k=1}^{n} f(k)$$

to non-integer and complex upper limits $x$. The definition is built upon six natural axioms:
- Continued Summation: $\displaystyle\sum_{k=a}^{b} f(k) + \sum_{k=b+1}^{c} f(k) = \sum_{k=a}^{c} f(k)$.
- Translation Invariance: $\displaystyle\sum_{k=a+s}^{b+s} f(k) = \sum_{k=a}^{b} f(k+s)$.
- Linearity: $\displaystyle\sum_{k=a}^{b} \bigl( \lambda f(k) + \mu g(k) \bigr) = \lambda \sum_{k=a}^{b} f(k) + \mu \sum_{k=a}^{b} g(k)$.
- Empty Sum Condition: $\displaystyle\sum_{k=1}^{0} f(k) = 0$ (equivalently, $\sum_{k=1}^{1} f(k) = f(1)$).
- Holomorphy for Monomials: for each $n \in \mathbb{N}_0$, the map $x \mapsto \sum_{k=1}^{x} k^n$ is holomorphic in $x$.
- Right-Shift Continuity: if $f(x+n) \to 0$ pointwise as $n \to \infty$, then $\sum_{k=1}^{x} f(k+n) \to 0$; more generally, if $f$ can be approximated by polynomials $p_n$ of fixed degree with $f(x+n) - p_n(x+n) \to 0$, then:

$$\lim_{n \to \infty} \left( \sum_{k=1}^{x} f(k+n) - \sum_{k=1}^{x} p_n(k+n) \right) = 0.$$
Axioms S1–S4 force the sum to agree with the ordinary finite sum when the limits are integers. Axiom S5 forces monomials to behave the same way under the generalization to fractional sums. Axiom S6 is the crucial axiom that allows one to "step back" from the asymptotic region to determine the fractional sum on a finite interval. The exact conditions for the method to work are, as stated in Definition 1.2 of the paper:
Let $x \in \mathbb{C}$ and $d \in \mathbb{N}_0$. A function $f$ will be called fractionally summable of degree $d$ if the following conditions are satisfied:
- $f(z)$ is defined for all $z$ in the domain of summation;
- there exists a sequence of polynomials $p_n$ of fixed degree $d$ such that, for all such $z$,
- $f(z+n) - p_n(z+n) \to 0$ as $n \to \infty$;
- for every $x$ the limit

$$\sum_{k=1}^{x} f(k) := \lim_{n \to \infty} \left( \sum_{k=1}^{n} f(k) - \sum_{k=1}^{n} f(k+x) + \sum_{k=n+1}^{n+x} p_n(k) \right)$$

exists.
In the simplest case, when $f(x) \to 0$ as $x \to \infty$ (i.e., the approximating polynomials are zero), this reduces to:

$$\sum_{k=1}^{x} f(k) = \sum_{n=1}^{\infty} \bigl( f(n) - f(n+x) \bigr).$$
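For instance, $f(k) = 1/k$ tends to $0$, so the simplest-case formula applies and reproduces the classical value $\sum_{k=1}^{1/2} \frac{1}{k} = 2 - 2\ln 2$ (a numerical sketch; truncating the series at $N$ terms leaves an error of order $x/N$):

```python
import math

def frac_sum_reciprocal(x, N=10**6):
    # sum_{k=1}^{x} 1/k  :=  sum_{n=1}^{oo} (1/n - 1/(n+x)), truncated at N
    return math.fsum(1 / n - 1 / (n + x) for n in range(1, N + 1))

value = frac_sum_reciprocal(0.5)
assert math.isclose(value, 2 - 2 * math.log(2), abs_tol=1e-5)
print(value)  # ~0.613706, the "half harmonic number" H_{1/2}
```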
Symmetry of the principal solution
Following directly from uniqueness, if $f$ is a meromorphic function, one can define a unique analytic solution $F(x) = \nabla^{-1} f(x)$ of the backward difference sum by imposing the conditions that:
- Difference Equation: $F(x) - F(x-1) = f(x)$.
- Normalization: $F(0) = 0$ (empty sum boundary condition).
- Growth constraint: $F$ has the minimal possible exponential type in the imaginary direction.

Under these conditions, $F$ satisfies a reflection formula (referred to by Nørlund as the Ergänzungssatz, a complementary theorem to the uniqueness of the principal solution [Hauptlösung], which he presents for the general difference equation with span $\omega$, where $\omega$ is the span).[17]

Odd functions

If $f$ is an odd function ($f(-x) = -f(x)$), the unique analytic solution satisfies:[17]

$$F(x) = F(-1-x).$$

This represents a symmetry about the point $x = -\tfrac{1}{2}$.

Even functions

If $f$ is an even function ($f(-x) = f(x)$), the unique analytic solution satisfies:[17]

$$F(x) + F(-1-x) = -f(0).$$
Relationship to indefinite products
In the symbolic method developed by Niels Erik Nørlund and L. M. Milne-Thomson, the indefinite product operator $\prod_x$ serves as the multiplicative analog of the indefinite sum. It is defined by the first-order homogeneous equation

$$\frac{F(x+1)}{F(x)} = f(x).$$

By taking the logarithm of the product formula, one obtains the telescoping identity $\ln F(x+1) - \ln F(x) = \ln f(x)$.[18] This allows any indefinite product to be expressed through an indefinite sum:

$$\prod_x f(x) = C(x) \exp\left( \sum_x \ln f(x) \right),$$

where $C(x)$ is an arbitrary periodic function of period 1.[19] Conversely, an indefinite sum may be represented as the logarithm of an indefinite product:

$$\sum_x f(x) = \ln \prod_x e^{f(x)} + C(x).$$
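As a concrete check, the Gamma function is an indefinite product of $f(x) = x$: taking logarithms, $\ln \Gamma$ is an indefinite sum of $\ln x$ (a sketch using the standard library):

```python
import math

# Gamma solves F(x+1)/F(x) = x, the indefinite product equation for f(x) = x;
# equivalently ln Gamma(x+1) - ln Gamma(x) = ln x, an indefinite sum of ln x.
for x in [0.5, 1.0, 2.75, 9.0]:
    assert math.isclose(math.gamma(x + 1) / math.gamma(x), x)
    assert math.isclose(math.lgamma(x + 1) - math.lgamma(x), math.log(x), abs_tol=1e-12)
```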
Expansions and definitions
Newton series
For an entire function $f$ of exponential type less than $\ln 2$,[20] the inverse forward difference operator, $\Delta^{-1}$, can be expressed through the Newton series expansion:[21][22]

$$\Delta^{-1} f(x) = \sum_{k=1}^{\infty} \binom{x}{k} \Delta^{k-1} f(0) + C,$$

where $\binom{x}{k} = \dfrac{(x)_k}{k!}$ is the generalized binomial coefficient and $(x)_k$ is the falling factorial.
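For a polynomial the Newton series terminates, which makes the expansion easy to test (a sketch; `binom` and `fdiff0` are illustrative helpers):

```python
from math import comb, isclose

def binom(x, k):
    """Generalized binomial coefficient C(x, k) for real x."""
    out = 1.0
    for i in range(k):
        out *= (x - i) / (i + 1)
    return out

def fdiff0(f, order):
    """Iterated forward difference Delta^order f evaluated at 0."""
    return sum((-1)**(order - j) * comb(order, j) * f(j) for j in range(order + 1))

f = lambda x: x**2
# Newton series for the antidifference; it terminates at k = 3 for degree 2.
F = lambda x: sum(binom(x, k) * fdiff0(f, k - 1) for k in range(1, 4))

for x in [0.0, 1.5, 4.0]:
    assert isclose(F(x + 1) - F(x), f(x), abs_tol=1e-9)
```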
Bernoulli‑operator series expansion
Formally, the inverse forward difference operator can be expressed in terms of the derivative operator $D$ using the exponential generating function of the Bernoulli numbers:[23][24][25]

$$\Delta^{-1} = \frac{1}{e^D - 1} = \frac{D}{e^D - 1}\,D^{-1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}\,D^{n-1},$$

where $B_n$ are the Bernoulli numbers defined by the generating function $\dfrac{t}{e^t - 1} = \displaystyle\sum_{n=0}^{\infty} B_n \frac{t^n}{n!}$. Under this convention, $B_1 = -\tfrac{1}{2}$.

If $f$ is a polynomial, only finitely many terms of the series are non-zero, as the finite difference of a monomial is a polynomial of one degree lower (by induction, only finitely many terms are required). For $f(x) = x^n$ one obtains the antidifference:[24]

$$\Delta^{-1} x^n = \frac{B_{n+1}(x)}{n+1} + C(x),$$

where $B_n(x)$ are the Bernoulli polynomials of the first kind.[24]
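The Bernoulli-polynomial antidifference can be verified symbolically, e.g. for $n = 3$ (a sketch assuming SymPy is available):

```python
from sympy import symbols, bernoulli, expand

x, n = symbols('x'), 3
# Antidifference of x^n via Bernoulli polynomials: F(x) = B_{n+1}(x) / (n+1).
F = bernoulli(n + 1, x) / (n + 1)

# Verify the defining equation F(x+1) - F(x) = x^n symbolically.
assert expand(F.subs(x, x + 1) - F) == x**n
```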
If $f$ admits a Maclaurin series expansion $f(x) = \sum_{n=0}^{\infty} a_n x^n$, applying the antidifference of monomials termwise to the series expansion yields the formal series:[25]

$$\Delta^{-1} f(x) = \sum_{n=0}^{\infty} a_n\,\frac{B_{n+1}(x)}{n+1} + C(x).$$

For non-polynomials this expansion is generally asymptotic.
Relation to the inverse backward difference

If one instead expands the inverse backward difference operator, $\nabla^{-1}$ (which extends $\sum_{k=1}^{x}$), it admits the same expansion, but with $B_{n+1}(x+1)$ in place of $B_{n+1}(x)$.
Euler–Maclaurin formula
The Euler–Maclaurin formula extends $\Delta^{-1}$:[6][13]

$$\Delta^{-1} f(x) = C + \int_0^x f(t)\,dt - \frac{1}{2} f(x) + \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\,f^{(2k-1)}(x) + R_m(x),$$

where $B_{2k}$ are the even Bernoulli numbers, $m$ is an arbitrary positive integer, and $R_m(x)$ is the remainder term given by:

$$R_m(x) = -\int_0^x \frac{\tilde{B}_{2m+1}(t)}{(2m+1)!}\,f^{(2m+1)}(t)\,dt,$$

with $\tilde{B}_n(t) = B_n\bigl(t - \lfloor t \rfloor\bigr)$ being the periodized Bernoulli function related to the Bernoulli polynomials.
Laplace summation (Gregory summation formula)
Laplace's summation formula, closely related to the Gregory summation formula, can be seen as the discrete counterpart of the Euler–Maclaurin formula. It expresses the inverse forward difference as:[26][27][12][28]

$$\Delta^{-1} f(x) = C + \int_0^x f(t)\,dt - \sum_{n=1}^{\infty} \frac{c_n}{n!}\,\Delta^{n-1} f(x),$$

where
- $c_n = \displaystyle\int_0^1 (t)_n\,dt$ are the Cauchy numbers of the first kind;
- $(t)_n$ is the falling factorial.

Truncating the series after finitely many terms leaves a remainder that can be expressed as an integral of a derivative of $f$ against a periodic Bernoulli polynomial.[12][28] In the notation of Charles Jordan, Gregory's formula is:[12]

$$\nabla^{-1} f(x) = C + \int_0^x f(t)\,dt + \sum_{n=1}^{\infty} (-1)^{n+1}\,b_n\,\nabla^{n-1} f(x),$$

where the coefficients $b_n$ are the Bernoulli numbers of the second kind. Note the argument $x$ is without a shift, aligning with the inverse backward difference.
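The coefficients can be generated directly from the integral representation $c_n = \int_0^1 (t)_n\,dt$ (a sketch assuming SymPy; the first values are $c_1 = \tfrac12$, $c_2 = -\tfrac16$, $c_3 = \tfrac14$):

```python
from sympy import symbols, ff, integrate, factorial, expand_func

t = symbols('t')
for n in range(1, 5):
    c_n = integrate(expand_func(ff(t, n)), (t, 0, 1))  # Cauchy number, 1st kind
    print(n, c_n, c_n / factorial(n))                  # Gregory coeff = c_n / n!
```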
Abel–Plana formula
The indefinite sum can be analytically continued by applying the standard Abel–Plana formula to the finite sum and then analytically continuing the integer limit to the variable $x$. This yields the formula:[7]

$$\nabla^{-1} f(x) = C + \int_0^x f(t)\,dt + \frac{f(x)}{2} - i \int_0^{\infty} \frac{f(x+it) - f(x-it)}{e^{2\pi t} - 1}\,dt.$$

This analytic continuation is valid when the conditions for the original formula are met. The sufficient conditions are:[13][14]
- Analyticity: $f$ must be analytic in the closed vertical strip between the lines $\Re z = 0$ and $\Re z = \Re x$. The formula provides the analytic solution up to, but not beyond, the nearest singularities of $f$ to this strip.
- Growth: $f$ must be of exponential type less than $2\pi$ in this strip, satisfying $|f(z)| \le M e^{\tau |\Im z|}$ for some $\tau < 2\pi$ as $|\Im z| \to \infty$.
Choice of the constant term
The constant term $C$ is often fixed using integral conditions, which is consistent with the convention for the Bernoulli polynomials (which satisfy $\int_0^1 B_n(t)\,dt = 0$).

Let $F(x) = \Delta^{-1} f(x)$. Then the constant is fixed from the condition

$$\int_0^1 F(t)\,dt = 0 \qquad \text{or} \qquad \int_x^{x+1} F(t)\,dt = \int_0^x f(t)\,dt.$$

For example, $\Delta^{-1} x^n = \dfrac{B_{n+1}(x)}{n+1}$, where $\displaystyle\int_0^1 \frac{B_{n+1}(t)}{n+1}\,dt = 0$.

Let $F(x) = \nabla^{-1} f(x)$. Then the constant is fixed from the condition

$$\int_{-1}^{0} F(t)\,dt = 0 \qquad \text{or} \qquad \int_{x-1}^{x} F(t)\,dt = \int_0^x f(t)\,dt.$$

Alternatively, Ramanujan summation can be used: the constant is then the Ramanujan constant of the series starting at $0$, or at $1$, respectively.[29][30]