Introduction
In this article I would like to make one thing as clear as I possibly can: systems that can copy information automatically are very complicated systems. For any system that can copy information accurately and automatically to exist, some very complicated engineering problems need to have been solved. A system like that requires one or more designers with the capacity to think through the issues. A system like that could not have come about by chance alone.
So before I go any further I would like to define the word replication.
Replication: Making a copy of something.
So the word “replication” is just a fancy word for making a copy of something.
So automated database replication systems copy information from one database to another, and do so automatically (i.e. without needing a user to control them all the time).
I work as a computer programmer, and I was on a team that designed and wrote an automated database replication system used to copy information from a central site to sites in the Asia/Pacific region. These sites had to be up and running 24/7, so all changes (including software upgrades) had to occur with minimal disruption to the call centres that used the data. (At least, that was the plan.)
The Internet was not as fast or reliable as it is today, so the system had to work over slow dial-up connections, which made the task even more challenging. The type of database that we worked with was not as robust as Oracle (a popular database system) and did not have its own built-in replication or scheduling systems. But even back then, when I would tell another programmer what I was doing, the reaction would be as if they thought that what I was doing should be easy. It was not easy: I spent nine months in Sydney just installing and customising the system for one client.
I hope to show you some of the main problems that must be dealt with, using language as non-technical as I can manage, and then relate these issues to replication in living cells to show that living cells must have had a very Intelligent Designer.
So why is live automated replication difficult to achieve?
To outline it, here is a list of the issues followed by a more detailed explanation.
Automatic Scheduling
Correct Timing
Correct Sequence
Feedback – Resending Lost Information
Error Detection
Disaster Recovery – Rebuilding Entire Database Files
Live Upgrading While ‘On-line’ 24/7
Automatic Scheduling
There are two words that I have come to dread as a programmer. One is ‘generic’ and the other is ‘automatic’.
There is no easy way to impress on you how much tension is packed into the word ‘automatic’ when I hear it in a context like ‘you must write a program that will do several tasks automatically’.
Software developers cop a lot of flak because of the tendency for projects to go overtime and over budget. A good way of sending a project overtime and over budget is to insert the word ‘automatic’ into the design specification in several places.
To put this in everyday terms, let’s take something that is basic for a human being, such as crossing a road safely. If you really want to appreciate what a major design achievement crossing a road is, then try building a robot that can take the visual data from two cameras and use it to determine not just whether there is something on the road, but where it is, how fast it is going, and whether or not it is on a collision course with the robot.
If you can design a system that can just tell that there is something there at all, then you have done well.
I have not designed a system like that myself and I don’t plan to; it is sufficient for me to read books like the one by robotics guru Rodney Brooks (Flesh and Machines: How Robots Will Change Us) to get a good idea of how hard that can be.
However, the information copying system that I worked on did have to run automatically, and that meant it had to be able to make decisions and allow for eventualities all by itself, without waiting for a user to type ‘Y’, ‘N’ or hit the ‘Enter’ key.
I could write a whole article on scheduling alone; in fact, the scheduling software that we used for the replication system was a separate system in itself. Just making changes to the scheduling software to get it to work with the replication software took a great deal of time. Anyhow, to cut a long story short, let me just say that for the replication system to work automatically it needed scheduling software that was complicated enough in its own right.
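To give a feel for the shape of the problem, here is a minimal sketch in Python (purely my own illustration, nothing like the actual system we built) of what even the most bare-bones automatic scheduler has to do:

```python
import time

# A minimal, hypothetical scheduler loop. Each task records when it
# should next run; the loop runs whatever is due and reschedules it.
# A real scheduler also needs retries, logging, overlap protection,
# calendar rules and so on -- this only hints at the shape of it.

def replicate_changes():
    print("sending pending changes to remote sites...")

tasks = [
    # [next_due_time, interval_in_seconds, function_to_run]
    [time.time(), 60, replicate_changes],
]

while True:
    now = time.time()
    for task in tasks:
        due, interval, func = task
        if now >= due:
            try:
                func()
            except Exception as exc:
                # An automatic system cannot stop and ask a user what
                # to do; at minimum it must log the failure and move on.
                print(f"task failed: {exc}")
            task[0] = now + interval  # schedule the next run
    time.sleep(1)
```

Even in this toy form, notice that every decision a human operator would normally make has to be anticipated and written into the loop in advance.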
Correct Timing
One scheduling issue for a replication system is getting the interval between replications right. The information had to be as up to date as possible, but that had to be balanced against the fact that if the system replicated too often it would cross a threshold and become hopelessly overloaded: the replication packets waiting to be delivered and applied would just keep building up, and nothing would be replicated on time. That meant that there were two competing constraints (goals), and the system had to be optimised to suit both.
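To picture the trade-off, here is a toy sketch (my own illustration, with made-up numbers, not our actual code) that adjusts the replication interval according to the size of the backlog, so the system replicates as often as it can without drowning:

```python
MIN_INTERVAL = 60          # seconds: replicate often when the backlog is small
MAX_INTERVAL = 900         # seconds: back off when the backlog is large
OVERLOAD_THRESHOLD = 500   # hypothetical queue size where trouble starts

def next_interval(pending_packets: int) -> int:
    """Balance freshness against overload: the fuller the queue,
    the longer we wait before the next replication cycle."""
    load = min(pending_packets / OVERLOAD_THRESHOLD, 1.0)
    return int(MIN_INTERVAL + load * (MAX_INTERVAL - MIN_INTERVAL))

# A nearly empty queue replicates every minute; a saturated one backs off.
assert next_interval(0) == 60
assert next_interval(500) == 900
```

The hard part is not writing a function like this; it is working out, for a real network and a real workload, where those thresholds should sit.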
Correct Sequence
Information had to be copied to the remote sites in the correct order or the information could be misleading. For example, suppose an operator in the call centre received an order for an item and there were two items in stock; after the purchase there would be one item left. Now suppose another operator receives an order for the same item, leaving no items at all.
The information to be replicated would read ‘there is one item left’ and then ‘there are no items left’. But suppose the information that there are no items left gets copied first, and the information that there is one item left gets copied after that. An operator at the overseas call centre will then check the availability of the item and think that there is one item left to sell! That’s not good! To be useful, a replication system needs to keep track of what has been done and apply the changes in the correct order.
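One standard way of guaranteeing this (I am not claiming it is exactly how our system did it) is to stamp every change with a sequence number and refuse to apply anything out of turn. A toy sketch:

```python
class RemoteSite:
    def __init__(self):
        self.next_seq = 1     # the sequence number we expect next
        self.held_back = {}   # out-of-order packets parked until their turn

    def receive(self, seq: int, change: str):
        self.held_back[seq] = change
        # Apply every change we can, strictly in sequence order.
        while self.next_seq in self.held_back:
            pending = self.held_back.pop(self.next_seq)
            print(f"applying change {self.next_seq}: {pending}")
            self.next_seq += 1

site = RemoteSite()
site.receive(2, "stock = 0")   # arrives first, but is held back
site.receive(1, "stock = 1")   # now both apply, in the correct order
```

With this scheme the out-of-order packet simply waits, and the operator never sees the stale ‘one item left’ figure as the latest word.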
Feedback – Resending Lost Information
If the remote site tries to apply changes to the database and one of the packets of information has gone missing, the remote site needs to be able to send a message back to the host site asking it to try sending the information again. Once the information has been received, the remote site can tell the host site that all is OK and that it no longer needs to keep its own copy of that information packet for that particular site.
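In outline, the exchange looks something like this hypothetical sketch, where the host keeps each packet until the remote site acknowledges it:

```python
class Host:
    def __init__(self):
        self.unacknowledged = {}  # seq -> packet, kept until ACKed

    def send(self, seq: int, packet: str) -> str:
        self.unacknowledged[seq] = packet
        return packet             # in reality: transmit over the network

    def resend(self, seq: int) -> str:
        # The remote noticed a gap and asked for this packet again.
        return self.unacknowledged[seq]

    def acknowledge(self, seq: int):
        # The remote confirmed receipt; the host may discard its copy.
        self.unacknowledged.pop(seq, None)

host = Host()
host.send(1, "change A")
host.send(2, "change B")   # suppose this packet is lost in transit
packet = host.resend(2)    # the remote spots the gap and asks again
host.acknowledge(2)        # received: the host no longer keeps a copy
```

The point of the acknowledgement is housekeeping: without it, the host would have to keep every packet forever, just in case.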
Error Detection
To be absolutely confident that our system was 100% accurate, we needed to be able to compare the information on the host site with that on the remote sites. So our system also had a separate module that would compare all the remote sites with the host site and then ask the host to send any information that had been lost or was inaccurate. That took one paragraph to describe in English, but it took careful thought and planning, along with many lines of computer code (which took a lot of time to write), to get the error detection system working.
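The basic idea can be sketched in a few lines: take a fingerprint (checksum) of each record on both sides and ask the host to resend whatever is missing or different. The details below are my own illustration, not our actual code:

```python
import hashlib

def fingerprint(record: str) -> str:
    # A checksum lets two sites compare records without shipping
    # the full data both ways.
    return hashlib.sha256(record.encode()).hexdigest()

def records_to_resend(host: dict, remote: dict) -> list:
    """Return the keys whose data is missing or wrong on the remote."""
    bad = []
    for key, record in host.items():
        if key not in remote or fingerprint(remote[key]) != fingerprint(record):
            bad.append(key)
    return bad

host_db   = {"item1": "stock=5", "item2": "stock=0", "item3": "stock=9"}
remote_db = {"item1": "stock=5", "item2": "stock=1"}  # one wrong, one missing

print(records_to_resend(host_db, remote_db))  # ['item2', 'item3']
```

The real difficulty is doing this for millions of records over a slow dial-up link while the data keeps changing underneath you.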
Disaster Recovery – Rebuilding Entire Database Files
There were things that could go wrong with the system that were entirely out of our hands, such as a major file corruption caused by a power outage. In that case, entire files of information had to be rebuilt. Again, this is fairly easy to describe but much harder to do.
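In outline (and again, this is an illustration rather than our actual code), a rebuild means starting from a full snapshot taken from the host and then replaying, in order, every change logged since the snapshot was cut:

```python
def rebuild(host_snapshot: dict, snapshot_seq: int, change_log: list) -> dict:
    """Restore a corrupted remote file: start from the host's snapshot,
    then replay every logged change that came after it, in order."""
    db = dict(host_snapshot)
    for seq, key, value in sorted(change_log):
        if seq > snapshot_seq:
            db[key] = value
    return db

snapshot = {"item1": "stock=5"}   # full copy, cut at sequence number 10
log = [(11, "item2", "stock=3"), (12, "item1", "stock=4")]
print(rebuild(snapshot, 10, log))
# {'item1': 'stock=4', 'item2': 'stock=3'}
```

Note how this leans on the sequence numbers from earlier: without a reliable record of what changed and in what order, a rebuild is guesswork.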
Live Upgrading While ‘On-line’ 24/7
This was a nightmare. There were no scheduled downtimes, because any downtime meant the call centre could not take calls and profit would be lost. That meant the software needed to be able to upgrade itself while it was running. At least, that is how it was supposed to work; the occasional version mismatch, with one component not working correctly with another, would bring the replication system to a halt.
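At the very least, a live-upgrade scheme needs a version handshake, so that mismatched components refuse to exchange data they might misinterpret. A hypothetical sketch of such a check:

```python
PROTOCOL_VERSION = (2, 1)   # hypothetical (major, minor) of this component

def compatible(ours: tuple, theirs: tuple) -> bool:
    # Same major version: the packet formats agree, even if minor
    # features differ. A major mismatch risks silent data corruption,
    # so it is safer to halt replication than to guess.
    return ours[0] == theirs[0]

assert compatible((2, 1), (2, 0))       # minor drift: carry on
assert not compatible((2, 1), (3, 0))   # major mismatch: stop and upgrade
```

Halting on a mismatch sounds drastic, but quietly applying changes in the wrong format would be far worse.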
The Implications
In a few paragraphs I have outlined some of the issues that had to be dealt with, and it almost sounds easy when I describe them like that. Let me assure you that there is a world of pain in some of those issues. Like getting after-hours calls because the ‘system is down’. Like sitting in meetings that run for over an hour just trying to resolve problems such as how to make sure that a call centre site gets only the information relevant to it, how the people in the head office get their sales information, how the warehouse ends up with the orders, and how nobody gets too much unneeded information. This article could become a book if I were to go into all the issues and spell out what makes them so difficult to solve. I have mentioned just a few of the issues, the kind that can have a room full of IT professionals tearing their hair out for hours.

As I said before, when setting the correct interval between replications there were two competing requirements that needed to be considered, and the system needed to be optimised to achieve the most efficient result. Optimisation is easy when there is only one requirement, but when there are two competing requirements it starts to get a little tricky.
When there are pros and cons to weigh up for various solutions to a problem, you can find yourself in one of those meetings arguing in circles. One thing is for sure: these problems are not dealt with except through careful thought and planning. Automatic information copying systems do not engineer themselves, and the complex issues that go with systems like that are seldom solved by using the first idea that comes to mind, let alone by blind chance. The main issue with an automatic system is that it has to be designed to make decisions for itself, and since computers can’t think for themselves, the programmer has to do all the thinking ahead of time and build all of that logic into the system. Lack of foresight will result in bugs. For example, let’s imagine that someone puts password protection on a database file. The replication software tries to access the file, but since it has not been designed to use a password, it gets a ‘permission denied’ message. The programmer may well have thought it reasonable to assume that the replication software would always have access to this file, so they didn’t allow for it when they wrote the program. The program can’t think for itself, so it just sits there with a ‘permission denied’ message showing on the screen, waiting for a user to fix the problem. The last thing you want in an automatic system is for it to sit there waiting for a user to respond.
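The only defence is to assume that you have not thought of everything: wrap every external operation so that an unforeseen failure gets logged, retried and escalated, instead of blocking the whole system on a prompt. A hypothetical sketch:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("replicator")

def attempt(operation, retries=3, delay=5):
    """Run an operation; on any unforeseen failure, log it, retry a few
    times, then raise an alert -- never sit waiting for a keypress."""
    for attempt_no in range(1, retries + 1):
        try:
            return operation()
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt_no, exc)
            time.sleep(delay)
    log.error("giving up; paging the on-call operator")  # alert, don't block
    return None

def open_protected_file():
    raise PermissionError("permission denied")  # the case nobody foresaw

attempt(open_protected_file, retries=2, delay=0)
```

Even this catch-all is itself a design decision the programmer had to think of in advance; the program only survives surprises that someone planned for it to survive.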
Having said all that, there may still be some people with programming experience who would think I’m making a mountain out of a molehill. All I would say to them is this: today we can use solid database systems with built-in replication, running on fast and reliable servers that communicate across fast and reasonably reliable networks. But try telling all the programmers and engineers who made it possible for us to take that for granted that their work is trivial and must have been very easy. If you can get them all to agree with that, then let me know and I’ll rewrite this article accordingly.
The fact is that information systems are brittle; it does not take much to break them, and a lot of work needs to be done before you can trust them.
So how brittle can they be? Imagine being in a call centre where about twenty operators are taking calls from customers. You are at a terminal, you hit one key at the wrong time, and for the next hour every one of those operators is telling potential customers that they are sorry but the computer is down, and now you have to go and tell the IT manager why the system is down. That happened to me. My point is that information systems can be very easy to break, they are hard to design, and they are not the result of chance.
Living Cells
So what has all that got to do with living cells?
The fact is that living cells are miniaturised information copying systems that run 100% automatically and are ‘online’ 24/7, and the system that I worked on is not even a toy in comparison. Imagine a computer that could take in resources from its environment (just the resources it needed, while avoiding things that are harmful). Imagine if that computer could make an exact copy of its motherboard, its CPU, its power supply and its hard drive (including all the files on it), stretching its case while all this was happening, and then split into two computers that were both up and running at the end of all that. That would still not match the complexity of a living cell. To paraphrase Michael Denton in his book ‘Evolution: A Theory in Crisis’, if you could photograph a living cell and blow the photograph up so that it stretched for several miles, you would still see minute detail in every part of the photograph. That is complexity that matches the complexity of a city. In Darwin’s Black Box, Michael Behe describes some of the irreducibly complex systems that make up a living cell, one of which is like a parcel delivery system that sends what is needed within a cell to the exact place where it is needed.
A cell has mind-blowing complexity, and the known complexity just increases with each new discovery. The more you learn, the more complex it gets. It blows my mind to learn that a cell carries out its own replication while it is still up and running. It has machines that unwind the strands of DNA and machines that copy those strands; the same machine can check for its own mistakes and correct them (missing only a very tiny percentage of the mistakes). The membrane of the cell nucleus dissolves during replication, and the paired chromosomes are split apart by machines that look like rods, which push them into the areas that will become the daughter cells.
All this has critical timing; you don’t want to push the chromosomes apart before they are ready. In fact timing is everything throughout the entire process.
When I see a DVD like ‘Unlocking the Mystery of Life’, which illustrates some of these incredibly complex processes using animation, it yells ‘DESIGN’ at me. If that is not enough, the cell has its own disaster recovery system: when DNA is damaged, machines in the cell detect the damage and repair it.
This all happens automatically. There’s that word that gives me nightmares when I have to write software, but when I’m looking at a living cell the same word makes me awestruck.
Every single engineering problem that an automated information replication system can face has either been solved or bypassed in the design of a living cell.
Is the fact that I can write automatic replication software proof that this can happen as the result of natural processes alone?
My work has only reinforced my belief that God must have designed life.
My work is a toy in comparison, and there is no way that a system like the one I wrote could just happen. The evidence best fits the intelligent design model, because the system I wrote required planning and thought; lack of planning resulted in bugs. A living cell is a vastly superior design that is 100% automatic. The fact that it is so difficult to make live upgrades to a working system only heightens my scepticism that a series of lucky copying mistakes could somehow avoid ‘version incompatibility’ issues and ultimately add new information or function to the system, especially when the systems are so complex and interdependent.
Now for the knockout blow. How do evolutionists explain systems as complex as this? They say that they evolved not in one hit but gradually, adding complexity over a long period of time. There is one huge problem with that explanation: we are talking about replication systems. An important part of the theory of evolution is that there is a chain of descent between parents and their offspring.
To inherit genes from your parents, those genes must be copied (or replicated). For Darwin to be right, you need a fully automatic information replication system in place right from the start, and its copying would need to be reasonably accurate to avoid something called ‘error catastrophe’ (in which a replicating cell is overloaded by too many mutations, with certain extinction as the result). Looking at this from the point of view of someone who has a ‘hands-on’ feel for some of the design issues involved in a comparatively simple automated replication system, I’d have to say there is no way the first living cell could be a lucky accident. As far as I’m concerned, all bets are off.
Now while some materialistic naturalists would say that I am completely ignoring the possibility that the first self-replicating life could have been much simpler, my response would be ‘prove it’. While crystals are not complex and they can replicate their own structure, they can’t replicate what William Dembski calls ‘complex specified information’.
The simplest living cell that can live independently of a host is many times more complex than a crystal, and it can replicate complex specified information because its replication system uses DNA.
My article obviously cannot cover all the aspects of that topic so I urge interested readers to check out this FAQ page on the subject of the origin of life.
How would they respond?
So how would qualified biologists (at least ones who accept the theory of evolution) respond to this article? First of all, they would seek to undermine my authority to speak on this issue; my ‘hands-on’ experience of co-designing and maintaining replication systems would count for little in their view. My response is that there are some very qualified scientists who believe that an intelligent designing agent is the best explanation for the complexity we see in living things. The list includes Michael Behe (biochemistry), William Dembski (mathematics), Dr Don Batten (agriculture) and many more. So by all means don’t take my word for it; please check out what these people have to say.

Another response that an evolutionist would likely put forward is (to roughly paraphrase Richard Dawkins) that the argument from personal incredulity (an extreme feeling of scepticism) is not scientific proof: just because something does not seem likely does not prove that it can’t happen. My response is that it does not prove that it can happen either. In fact, the more I learn about replication and biology, the more incredulous I become. They are yet to prove that a self-replicating system containing complex specified information can arise by chance. For materialists to say that they are yet to find out how it happened and that it will just take more research is special pleading: they have assumed that God is not the explanation and ruled Him out right at the start. They accuse creationists of giving up on science, yet they ignore the fact that they have done exactly the same thing by giving up on God. To say that there must be some as-yet-undiscovered explanation just demonstrates great faith in naturalism, trivialises the complexity of microbiology, and ignores the difficulty of resolving the design issues encountered in the replication of information.

Some materialists would mention the time span and suggest that evolution has billions of years to do what takes engineers months. I’m incredulous about a cow jumping over the moon: I don’t care how many hypothetical cows you have on however many hypothetical planets, it is not going to happen in the longest estimates for the age of the Universe. I’m convinced that a naturalistic origin of a self-replicating cell has a similar level of improbability.
It is scientific to look at the scene of a crime and determine that it was not an accident and that an individual with the capacity to think is responsible for what happened. In the same way, William Dembski and Werner Gitt have both shown that where complex information is present, ultimately an intelligent agent is responsible for its existence.
What I have learned through my ‘hands on’ work with automated replication systems only backs up what some very qualified people have been saying all along.
Alex Williams has written several articles about the complexity of DNA, and there are links to those articles from the link that I have given. If you have any interest in IT or engineering, then I urge you to learn about the complexity of the design in living cells, and while you do, imagine how difficult it would be to engineer a computer system that works like that.
Conclusion
Every time I read something new about DNA or the complexity of a living cell, it all just seems to get more complex, and it seems ever more unlikely that it could have started by chance.
So when I read books like Michael Behe’s Darwin’s Black Box or various articles on CMI’s website, or watch videos like the Unlocking the Mystery of Life DVD, I find myself having an extremely high degree of ‘personal incredulity’ that life originated by natural processes without the aid of an Intelligent Designer.
I seriously doubt that they will ever find a solid naturalistic explanation for life’s origin. They have had many guesses, and some of those guesses were based on elaborate experiments (designed and operated by intelligent chemists), but even so, a guess is not a proof. As far as I’m concerned, they will never be able to prove scientifically that an automated self-replicating life form came about by chance; my hands-on experience with automated self-replicating data systems has raised my level of ‘personal incredulity’ to the point where the idea is well and truly off the radar for me.
Let me say that again in plainer language: my work in information technology has made me very sceptical that something many times more complicated could happen as the result of a lucky accident.
As far as I’m concerned, life is the result of an Intelligent Designer, and the best candidate for that role is revealed in the person of Jesus. See Lee Strobel’s website for online videos that present the evidence that Jesus was a real person and that He is our God and Creator.
Please follow this link if you would like to know how to become a Christian.