
Understanding Turing Machine through DNA Computation-1

 

To understand the Turing Machine through the lens of DNA computation, one does not need deep expertise in either field; the exercise is simply a way to merge the theory of computation with biology. Still, it is essential to establish the definition and structure of each.

A Turing Machine, as technical as it may sound, is a fundamental way to determine what it means for a problem to be solvable. Not just any problem, of course: the mathematical ones, defined over computable functions. Let us step back to see when the Turing Machine came into being. Alan Turing, the English mathematician after whom the machine is named, conceived of a theoretical model that provides a universal way to describe algorithms. The idea did not come to him in a dream, nor through a sudden sixth sense; rather, it grew out of his work on the 'Entscheidungsproblem'.

This six-syllable German term names the 'decision problem' posed by David Hilbert and Wilhelm Ackermann in 1928. It asks for an algorithm that can determine whether a given statement in first-order logic is universally valid. Turing's 1936 paper, 'On Computable Numbers, with an Application to the Entscheidungsproblem', marked the birth of the Turing Machine. Turing described his machine as a 'simple abstract computing device': 'simple' because it can be described with three components, and 'abstract' because it exists only in theory.

For those familiar with Finite Automata from the Theory of Computation, the Turing Machine is described as a 7-tuple, and three of its elements are of particular interest to us. For those reading purely out of intrigue (a great habit), the Turing Machine is a collection of seven elements that together make the machine work, much like the letters of an alphabet make up words. Since we are moving through concepts in computer science, getting technical is inevitable.

(Q, Σ, T, q0, F, B, δ) is the 7-tuple, where:

Q: the finite set of states

Σ: the finite set of input symbols

T: the finite set of tape symbols

q0: the initial state

F: the set of final (accepting) states

B: the blank symbol, used as an end marker for the input

δ: the transition (mapping) function.

Why did I say only three of these stand out? Because {Q, Σ, q0, F, δ} are also shared by Finite Automata (the simplest machines for recognizing patterns). One might object: only {B} and {T} are different, so what is the third? The transition function. Although a Finite Automaton also has a δ, it maps a state and an input symbol to just the next state. A Turing Machine's δ instead returns a triple: the next state, the symbol to write on the tape, and the direction in which to move the tape head.
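To make the triple concrete, here is a minimal sketch of how δ could be represented in Python as a lookup table. The transitions shown are the ones used in the 'aba' walkthrough later in this post; the variable names are illustrative.

```python
# Hypothetical sketch: the transition function δ as a Python mapping.
# Keys are (current_state, read_symbol); values are the triple
# (next_state, symbol_to_write, head_direction).
delta = {
    ("q0", "a"): ("q1", "A", "R"),
    ("q1", "b"): ("q2", "B", "R"),
    ("q2", "a"): ("q3", "A", "R"),
    ("q3", "Δ"): ("q4", "Δ", "S"),  # Δ = blank symbol; S = stay (accept)
}

# One lookup yields the full triple that a Finite Automaton's δ cannot express.
next_state, write_symbol, direction = delta[("q0", "a")]
print(next_state, write_symbol, direction)  # → q1 A R
```

A Finite Automaton's transition table would map ("q0", "a") to just "q1"; the extra two entries in each value are exactly what gives the Turing Machine its power to write and to move.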
I know all this may seem abstract, so let us put it in a visual -


We know this machine to be of the deterministic kind: for every state and symbol it reads, there is exactly one move to make. Should we care to know about the non-deterministic kind? Why not! Since little in reality unfolds in one fixed, known order, it cannot hurt to dip our toes into the concept. A non-deterministic Turing Machine (NTM) may have several possible moves from a given configuration and can, in effect, explore all of them at once, in contrast to a deterministic Turing Machine (DTM).

An NTM is known for handling problems in NP (nondeterministic polynomial time): problems whose candidate solutions can be verified quickly, but for which the best known deterministic algorithms take exponential time. The Hamiltonian Path Problem is an example of an NP problem. (We will be touching on this again very soon.)
Why are we learning about NTMs and NP problems, and where is the DNA? Well, I am very glad to report that a DNA-based Turing machine simulates the behavior of an NTM. This means we get to see the theory put into practice.

To construct a Turing machine, we require a language, markers, accept and reject states, and directions for moving along the tape (left or right). Let us run a simulation that accepts the input 'aba'.
First, we write each symbol in a separate cell of the input tape:


The machine reads the tape character by character until it reaches the blank symbol Δ. If it has read the desired string, the Turing machine halts and reaches the accept state.


At the start, we have q0 as the initial state, and the head points to 'a'. We apply the following transition function to move around: δ(q0, a) = (q1, A, R). This means that in state q0 we read 'a', replace it with 'A', and move right, denoted by R, into state q1.

Now the head points to 'b'. As with the previous cell, we apply δ(q1, b) = (q2, B, R), which takes us to q2.
Next, we apply δ(q2, a) = (q3, A, R) and move to q3.

The machine has now reached the blank symbol and halts through δ(q3, Δ) = (q4, Δ, S), where S means the head stays in place and q4 is the accept state.
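The walkthrough above can be sketched as a small simulator. This is a minimal, illustrative Python implementation, not a general-purpose Turing machine: the state names, the blank symbol Δ, and the accept state q4 follow the trace above, while the function and variable names are my own.

```python
# Minimal sketch of the deterministic Turing machine traced above,
# which accepts the input 'aba'.

BLANK = "Δ"

# δ: (state, read_symbol) -> (next_state, write_symbol, direction)
delta = {
    ("q0", "a"): ("q1", "A", "R"),
    ("q1", "b"): ("q2", "B", "R"),
    ("q2", "a"): ("q3", "A", "R"),
    ("q3", BLANK): ("q4", BLANK, "S"),  # S = stay; q4 is the accept state
}

def run(tape_input, start="q0", accept="q4"):
    tape = list(tape_input) + [BLANK]   # input cells followed by the end marker
    state, head = start, 0
    while state != accept:
        key = (state, tape[head])
        if key not in delta:            # no transition defined: reject
            return False, "".join(tape)
        state, write, move = delta[key]
        tape[head] = write              # overwrite the cell under the head
        head += {"R": 1, "L": -1, "S": 0}[move]
    return True, "".join(tape)

accepted, final_tape = run("aba")
print(accepted, final_tape)  # → True ABAΔ
```

Running it on 'aba' reproduces the four transitions step by step, leaving 'ABA' on the tape; any other input, such as 'ab', hits an undefined transition and is rejected.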
The next visual is especially crucial for implementing the Turing Machine in DNA computation. It is a transition table summarizing the process above:

Present state | Read symbol | Next state | Write symbol | Move
q0            | a           | q1         | A            | R
q1            | b           | q2         | B            | R
q2            | a           | q3         | A            | R
q3            | Δ           | q4         | Δ            | S


This sums up the construction and basic definition of a Turing Machine, which will suffice for its application in DNA computation.

