Implicit computational complexity
From Wikipedia, the free encyclopedia
Implicit computational complexity (ICC) is a subfield of computational complexity theory that characterizes programs by constraints on the way in which they are constructed, without reference to a specific underlying machine model or, unlike conventional complexity theory, to explicit bounds on computational resources.[1][2][3] The central goal of ICC is to identify programming formalisms — such as restricted formal languages, type systems, or recursion schemes — whose expressive power coincides exactly with a given complexity class, so that membership in the class becomes a consequence of syntactic well-formedness rather than a separate computational argument. ICC was developed in the 1990s and employs the techniques of proof theory, substructural logic, linear logic, model theory and recursion theory to prove bounds on the expressive power of high-level formal languages.[4][5] Among its founding contributions are Girard, Scedrov, and Scott's bounded linear logic (1992),[4] Bellantoni and Cook's function algebra based on predicative recursion (1992),[6] and Jones's cons-free programming language characterisation of polynomial time (1999).[7] ICC is also concerned with the practical realization of functional programming languages, language tools and type theory that can control the resource usage of programs in a formally verifiable sense.[8][9]
Two leading approaches to resource certification have been Static Analysis (SA) and Implicit Computational Complexity (ICC). SA is algorithmic in nature: it starts from a broad programming language of choice and seeks to determine by syntactic means whether given programs in that language are feasible. In contrast, ICC attempts to create from the outset specialized programming languages or methods that delineate a complexity class. Thus SA's focus is on compile time, making no demand on the programmer, whereas ICC is a language-design discipline.[10]
Background and motivation
In classical complexity theory, a problem is assigned to a complexity class — such as P or PSPACE — by analysing the resource consumption (time, space) of a Turing machine that solves it. This analysis is extrinsic: the program and its complexity bound are separate objects, and the bound must be verified independently.
ICC seeks intrinsic characterisations: programming languages or logical systems in which every well-formed program is, by construction, feasible, and in which every feasible algorithm can be expressed. Such characterisations are of theoretical interest because they reveal the combinatorial or logical reasons why a complexity class has the extent it does, and of practical interest because they can serve as the foundation for type systems and program analysis tools that enforce resource bounds statically.
The field draws heavily on linear logic, introduced by Girard (1987),[11] in which the structural rules of weakening and contraction are controlled. Restricting these rules limits the ability of a program to duplicate or discard data, which in turn bounds the computational resources the program can consume.
Implicit representations of polynomial time
The important complexity classes of polynomial-time decidable sets (the class P) and polynomial-time computable functions (FP) have received much attention and possess several implicit representations.
Cons-free programs
Jones[7][12] defined a programming language that can solve decision problems where the input is a binary string, and showed that a problem can be decided in this language if and only if it is in P. The language can be briefly described as follows. It receives the input as a list of bits. It has variables that point into the list and can be advanced by the "tail" operation, so they move forward over the list. It can define recursive functions, but has no higher-order functions. Crucially, it has no data-type constructors (hence the name cons-free): the input list is the one and only data structure throughout the program. The absence of constructors limits expressive power, preventing the construction of auxiliary data structures whose size could grow during computation. Remarkably, this restriction does not reduce the class of decidable problems to a proper subset of P: every problem in P can be decided within the language. Moreover, programs in the language may run in exponential time yet decide only polynomial-time problems, so the implicit characterisation is independent of the actual execution time of individual programs. Jones also showed that if non-determinism is added to the language (as in a nondeterministic Turing machine), the class of problems that can be accepted is still P.[12]
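The flavour of a cons-free program can be sketched in Python (a hypothetical transliteration, not Jones's actual syntax): the only data the program ever inspects is the input list, the only way a variable changes is by moving forward via "tail", and no new data structure is ever built.

```python
# Illustrative sketch of a cons-free program (not Jones's concrete syntax).
# The sole data structure is the input list of bits; variables over it can
# only be advanced forward, never used to construct new lists.

def tail(xs):
    # Stands in for advancing a pointer over the input list.
    # (Python slicing copies; in the cons-free model this is a pointer move.)
    return xs[1:]

def odd_ones(xs):
    """Decide whether the input bit string contains an odd number of 1s."""
    if not xs:              # reached the end of the input
        return False
    rest = odd_ones(tail(xs))
    return not rest if xs[0] == 1 else rest
```

Note that the recursion may revisit positions many times; as the article explains, such a program may even take exponential time while still deciding only a problem in P.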
Function algebras with recursion on notation
Bellantoni and Cook[6] showed that a certain class of functions coincides with FP. These are functions which are defined, like the primitive recursive functions, by a set of base functions and operators for constructing new functions from existing ones. A special recursion scheme is used instead of the primitive recursion scheme, as will be seen below, and in addition, the functions have their arguments partitioned into two "sorts". This is denoted by separating the arguments with a semicolon: f(x₁, …, xₘ; y₁, …, yₙ). The arguments following the semicolon are called safe (a more intuitive name might be "protected"). When a value is passed in a safe position, it is not allowed to grow too much (note the difference between clauses (3) and (4) below). Another important difference from the definition of primitive recursive functions is that here the arguments are considered as binary strings, and a value is increased by appending a bit (x0 or x1), in contrast to the numeric successor function (x′).
Here is the list of basic functions:
- empty string: ε (a zero-ary function)
- projections: πⱼ^{m,n}(x₁, …, xₘ; xₘ₊₁, …, xₘ₊ₙ) = xⱼ, for each 1 ≤ j ≤ m + n
- normal binary successors: sᵢ(x;) = xi, for i ∈ {0, 1}
- bounded safe binary successors: ŝᵢ(x; a) = ai if |ai| ≤ |x|, and a otherwise, for i ∈ {0, 1}
- binary predecessor: p(; xi) = x, p(; ε) = ε
- numerical predecessor: p′(; x) = x − 1, with p′(; 0) = 0 (reading x as a binary numeral)
- conditional: C(; a, b, c) = b if a ends in 0, and c otherwise
- tally product: x ⊗ y, a string of 0s whose length is |x| · |y|
We can combine functions to form new ones using a composition scheme and a recursion scheme. Writing x⃗ for x₁, …, xₘ and a⃗ for a₁, …, aₙ: given f, g₁, …, gₘ and h₁, …, hₙ, their predicative composition is defined by
f(g₁(x⃗;), …, gₘ(x⃗;); h₁(x⃗; a⃗), …, hₙ(x⃗; a⃗)),
so that a safe value can never flow into a normal position. Given g, h₀ and h₁, the predicative recursion on notation scheme defines a function f by
f(ε, x⃗; a⃗) = g(x⃗; a⃗)
f(zi, x⃗; a⃗) = hᵢ(z, x⃗; a⃗, f(z, x⃗; a⃗)), for i ∈ {0, 1},
so that the result of the recursive call occupies a safe position of hᵢ. It is called "recursion on notation" because each recursive call strips a bit from the recursion argument, in contrast with recursion "on value", which goes from z to z − 1.
Example. We define a function f that receives a binary string x and returns a string of 0s of the same length as x. For readability we omit the invocations of the projection functions which are technically necessary to retrieve function arguments, e.g., to get x in the auxiliary function g below:
g(ε, x;) = ε
g(yi, x;) = ŝ₀(x; g(y, x;)), for i ∈ {0, 1}
f(x;) = g(x, x;)
where ŝ₀ is the bounded safe binary successor. Note how we need to initially keep a copy of x in order to be able to apply the bounded binary successor operator in the recursion: x sits in a normal position of g and bounds the length of the string built up in the safe position.
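A minimal Python sketch of this example may help. It is a hedged transliteration: s0_bounded models the bounded safe binary successor under the assumption that it appends a 0 only while the result is shorter than the normal argument x, and binary strings are modelled as Python strings.

```python
# Sketch of the Bellantoni–Cook example: f(x) returns 0^|x|.
# Assumption: the bounded safe successor appends a bit only while the
# safe value is strictly shorter than the normal argument bounding it.

def s0_bounded(x, a):
    """Bounded safe binary successor: append a 0 to a, never beyond |x|."""
    return a + "0" if len(a) < len(x) else a

def g(y, x):
    # Predicative recursion on notation over y: strip one bit per call.
    if y == "":
        return ""
    return s0_bounded(x, g(y[:-1], x))

def f(x):
    # Keep a copy of x in a normal position to bound the successors.
    return g(x, x)

# f("1011") → "0000"
```

The recursion makes |x| calls, each appending one 0, so the safe value can never outgrow the normal copy of x.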
Tiered recursion
Leivant[10] developed an alternative approach based on "data tiering": natural numbers (or other data) are stratified into tiers, and recursion is permitted only in ways that respect the tier order. Functions definable with tier-bounded recursion coincide with polynomial-time computable functions. Leivant's framework is more syntactically flexible than the Bellantoni–Cook algebra and extends naturally to imperative languages.
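A toy Python model can illustrate the tier discipline. It is entirely illustrative: the class T, the rec operator, and the tier check are inventions for this sketch, not Leivant's formalism. The idea shown is that recursion may consume only data of a strictly higher tier than the result it builds, which permits addition and multiplication but blocks the self-feeding recursion that exponentiation would require.

```python
# Toy model of data tiering (illustrative only, not Leivant's system).

class T:
    """A natural number tagged with a tier."""
    def __init__(self, n, tier):
        self.n, self.tier = n, tier

def rec(step, base, x):
    """Recursion on x, allowed only when x sits strictly above the result's tier."""
    assert x.tier > base.tier, "tier violation: recursion must step down a tier"
    out = base
    for _ in range(x.n):
        out = step(out)
        assert out.tier == base.tier  # the result stays at the lower tier
    return out

def add(x1, y0):
    """x1 at tier 1, y0 at tier 0: add by recursing on the tier-1 argument."""
    return rec(lambda a: T(a.n + 1, 0), y0, x1)

def mult(x1, y1):
    """Both inputs at tier 1; the step reuses add, whose result is tier 0."""
    return rec(lambda a: add(y1, a), T(0, 0), x1)
```

Exponentiation cannot be defined this way: its step would have to recurse on the tier-0 accumulator (e.g. calling mult with a tier-0 value as recursion argument), which the tier check in rec rejects. That blocked pattern is precisely what keeps definable functions polynomial-time.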
Linear logic approaches
Girard, Scedrov, and Scott[4] introduced bounded linear logic, a variant of linear logic in which the use of the contraction rule is bounded by polynomial terms. Proofs in this system correspond, via the Curry–Howard correspondence, to polynomial-time computable functions. Subsequent work on light linear logic by Girard[13] and its type-theoretic development by Baillot and Terui,[14] and Lafont’s soft linear logic,[15] gave related systems with cleaner proof theory, all characterising polynomial-time computation via structural constraints on the exponential modality.
Other classes
Implicit representations have been found for many complexity classes, including the hierarchy of time classes P, EXPTIME, 2-EXPTIME, … and the space classes L, PSPACE, EXPSPACE, …;[12] as well as the classes of the hierarchy DTIME(O(n)), DSPACE(O(n)), DTIME(O(n²)), DSPACE(O(n²)), ….[16] For most classes, several alternative representations are known, which suggests that the classes have robust intrinsic descriptions rather than being artefacts of a single formalism.
The Grzegorczyk hierarchy of elementary and primitive recursive functions has also been studied from an ICC perspective: the hierarchy levels correspond to restrictions on the nesting depth of bounded recursion schemes.[17][18]
Applications
ICC results have been applied in several practical directions:
- Type-based resource analysis. Type systems inspired by light and soft linear logic have been implemented in functional languages to certify that well-typed programs run in polynomial time or in polynomial space.[9]
- Implicit complexity of real-number computation. ICC methods have been extended to characterise feasible computation over the real numbers, going beyond the discrete setting.[5]
- Security and information flow. The resource-bounding properties of ICC type systems have been adapted to control information flow in programs, with applications to language-based security.[19]