Theoretical computer science


Theoretical computer science is the collection of topics in computer science that focus on the more abstract, logical, and mathematical aspects of computing, such as the theory of computation, the analysis of algorithms, and the semantics of programming languages. Although it is not a single topic, its practitioners form a distinct subgroup within computer science researchers.


Scope

It is not easy to circumscribe the theory areas precisely; the ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT), which describes its mission as the promotion of theoretical computer science, says:[1]

The field of theoretical computer science is interpreted broadly so as to include algorithms, data structures, computational complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

Even so, the "theory people" in computer science self-identify as different. Some characterize themselves as doing the "science underlying the field of computing",[2] although this view neglects the experimental science done in non-theoretical areas such as software systems research.

Selected subfields (sidebar), each illustrated by an emblematic formula: Mathematical logic (e.g. P \rightarrow Q \,), Automata theory, Number theory, Graph theory, Type theory (e.g. \Gamma\vdash x : Int), Category theory, Computational geometry, Quantum computing theory.

History

While formal algorithms have existed for millennia (Euclid's algorithm for determining the greatest common divisor of two numbers is still used in computation), it was not until 1936 that Alan Turing, Alonzo Church and Stephen Kleene formalized the definition of an algorithm in terms of computation. Similarly, while binary and logical systems of mathematics had long existed, it was only in 1703 that Gottfried Leibniz formalized logic with binary values for true and false. The nature of mathematical proof also has an ancient history, but in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on which statements, even if true, can be proved within a formal system.
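As a concrete aside, Euclid's algorithm is short enough to state in full; the following is a minimal sketch in Python (the function name and test values are illustrative, not part of the original article):

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: the gcd of a and b is unchanged when
        # the pair (a, b) is replaced by (b, a mod b).
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # prints 21, since 252 = 2^2 * 3^2 * 7 and 105 = 3 * 5 * 7

This is essentially the procedure described by Euclid around 300 BC, restated with the modulo operation in place of repeated subtraction.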

These developments have led to the modern study of logic and computability, and indeed to the field of theoretical computer science as a whole. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication, which treated information by analogy with statistical mechanics. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. As biological data accumulated in support of this hypothesis, with some modification, the fields of neural networks and parallel distributed processing were established.
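The central quantity of Shannon's theory is the entropy of an information source; the following formula is a standard result of the field, not taken from the original article:

 H(X) = -\sum_{i} p_i \log_2 p_i \,

where p_i is the probability of the i-th symbol. H measures the average information per symbol in bits, so a fair coin flip carries exactly one bit.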

With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, a field that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented at scale, would render most modern public-key cryptography systems, such as RSA, insecure.
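To make the gap concrete (these complexity figures are standard results, not from the original article): Shor's algorithm factors an integer N using a number of quantum operations polynomial in \log N, commonly quoted as O((\log N)^3), whereas the best known classical method, the general number field sieve, runs in heuristic time

 \exp\left(\left(\tfrac{64}{9}\right)^{1/3} (\ln N)^{1/3} (\ln \ln N)^{2/3} (1 + o(1))\right) \,

which grows faster than any polynomial in \log N.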

Modern theoretical computer science research builds on these foundational developments, but also encompasses many other mathematical and interdisciplinary problems.

Organizations

Journals and newsletters

Conferences

See also

Notes

  1. ^ "SIGACT". http://sigact.acm.org. Retrieved on 2009-03-29. 
  2. ^ "Challenges for Theoretical Computer Science: Theory as the Scientific Foundation of Computing". http://www.research.att.com/%7Edsj/nsflist.html#Intro. Retrieved on 2009-03-29. 

External links
