Many attempts to define determinism, the philosophical notion that everything that happens in the universe is pre-ordained or pre-decided, involve the notion of causality. A causal chain or graph of events, driven by the laws of physics, is supposed to explain how determinism can hold. In this view, the state of the universe at any time is determined once we have an initial state of the universe and a set of physical laws that allow us to compute the state at any later time. Of course, since we are also part of the universe and thus subject to its laws, some thinkers construct these arguments from the viewpoint of a hypothetical “demon” residing outside the universe and unaffected by its laws.
In what follows, I will argue that the most common notion of causality, based on counterfactual outcomes, is meaningless in a deterministic universe. We may have to adopt a definition of causality which relies on computability within the universe: A causes B if we can start with state A and compute a sequence of state changes induced by the laws of the universe, ending in B.
Counterfactual Causality Fails in a Deterministic Universe
According to the Wikipedia entry on determinism:
Causal (or nomological) determinism is the thesis that future events are necessitated by past and present events combined with the laws of nature.
The Wikipedia entry on Causality has this to say:
The philosopher David Lewis notably suggested that all statements about causality can be understood as counterfactual statements. So, for instance, the statement that John’s smoking caused his premature death is equivalent to saying that had John not smoked he would not have prematurely died.
The incompatibility between determinism and causality is now easy to see: if causality is defined counterfactually, then any event A which occurs before an event B counts as causally responsible for B. This is because the statement “If A had not occurred, then B would not have occurred” has an impossible antecedent in a deterministic universe: “If A had not occurred” is like saying “If 1 equalled 2”, since determinism says that A occurring was the only possibility. A conditional with an impossible antecedent is either meaningless or vacuously true; either way, if A occurs before B, then A comes out as causally responsible for B.
Causality as Computation
Perhaps a modified definition of causality will help take care of this problem. Suppose that, by “A causes B”, we mean that a computer within the universe is able to find a chain of applications of the laws of the universe which takes the universe from state A to state B (via some sequence of intermediate events). Note that this definition rests on the ability to compute, and hence on the ability to understand.
The definition is not yet satisfactory, however. What if, given any two events A and B, we can always compute such a sequence of intermediate events? Then this definition would be no more useful than the counterfactual one. We may have to abandon the attempt to define causality as a true-or-false relation (A causes B or A does not cause B) and accept a definition based on degrees of causality: if the chain of intermediate events from A to B is long, we say the relationship is “less causal”, and if it is short, we say it is “more causal”.
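This graded definition can be made concrete with a toy sketch. The “laws of the universe” below are invented purely for illustration (a state is just an integer, and each state can transition to two successors); the point is only to show how a breadth-first search yields a shortest chain of law-applications, whose length measures the degree of causality.

```python
from collections import deque

def chain_length(start, goal, step, max_depth=100):
    """Breadth-first search for the shortest chain of law-applications
    taking `start` to `goal`. Returns None if no chain is found."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth
        if depth >= max_depth:
            continue
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

# Toy "laws": from state n, the universe may evolve to n + 1 or 2n.
def laws(n):
    return [n + 1, 2 * n]

# Degree of causality: shorter chains count as "more causal".
def causal_degree(a, b, step=laws):
    d = chain_length(a, b, step)
    return 0.0 if d is None or d == 0 else 1.0 / d

print(chain_length(3, 13, laws))   # shortest chain: 3 -> 6 -> 12 -> 13
print(causal_degree(3, 13))
```

On this sketch, `causal_degree(3, 13)` is 1/3 because three applications of the toy laws suffice, matching the intuition that a short chain is “more causal” than a long one.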
What is Ethics? What is the foundation for ethics? Do we need religion for ethics? Can a mechanical (soulless, purely physics-driven) being have ethics? How can ethics be derived in a deterministic universe without free will? The Optimization viewpoint. Can there be an Ultimate Logical Justification for any system of ethics?
Ethics Without Soul
There has recently been a lot of controversy about Atheist ethics. Ethical systems have, traditionally, been tied to religion. Since religions became widespread, the primary motivation for ethical behaviour has been religious. Each religion has its own ethical system, and almost all religions specify carrot-and-stick reasons for behaving ethically. In the Abrahamic religions, heaven and hell are the carrot and the stick. In Hinduism, nirvana and demotion in the “highness” of being are the carrot and the stick. Not all religions insist on the existence of one universal “God”, but Atheists typically remain unattached to any of the usual religions in addition to lacking belief in a “God”. The question then arises: can Atheists behave ethically?
More generally, the question can be posed for any mechanistic system (a system ruled only by the laws of physics and not by any agent, such as a “soul”, connected to religion). Mechanistic systems include humans, other organisms, robots, and any other objects or phenomena. (Whether humans are mechanistic is a subject of much debate; see Strong and Weak Artificial Intelligence and Gödel, Penrose and Artificial Intelligence — Simplified.) What does ethics mean for a mechanistic system?
The Goal of Ethics
I think ethics can be viewed as a mechanism for the preservation or proliferation of complexity. Complexity is precious; the entropy grindstone is constantly trying to destroy it (the second law of thermodynamics). Every ethical principle we have can be seen as ultimately serving complexity. Here are some examples.
We prize human life over that of all other animals. This is consistent with complexity preservation: humans are more complex than other animals. We think killing an animal for no reason is unethical, but feel no such compunction about smashing a rock. This is also consistent with complexity preservation: an animal is more complex than a rock.
A lot of ethical rules are not directly connected to complexity preservation, but come about because we need simple rules of thumb that we can follow easily. Lying, for instance, is considered unethical; in the long term, honesty helps preserve social order and thus helps preserve the human species.
Thus mechanistic systems can have ethical behaviour – behaviour which eventually tends to preserve or increase complexity. Atheists can be as ethical as anyone else, as can a robot, as long as their actions are directed towards optimizing complexity.
Thus we have converted the problem of constructing ethical systems into an optimization problem. The objective function (which we are trying to maximize) is overall complexity. Ethics can now be viewed as rules of behaviour the following of which tends to increase complexity.
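The optimization framing can be sketched in code. Everything here is hypothetical: the candidate rules and their estimated long-run effects on overall complexity are invented numbers, standing in for the judgments a society actually makes. The sketch just shows the shape of the problem: score each rule of thumb by its estimated contribution to the objective, and keep the ones that help.

```python
# A toy sketch of "ethics as optimization". All rules and numbers below
# are hypothetical estimates of each rule's long-run effect on overall
# complexity, chosen only to illustrate the selection procedure.

candidate_rules = {
    "do not kill humans":  10.0,  # preserves the most complex systems we know
    "do not harm animals":  4.0,  # animals are more complex than rocks
    "do not lie":           2.0,  # honesty preserves social order
    "do not smash rocks":   0.0,  # rocks carry little complexity
}

def select_rules(rules):
    """Keep the rules with a positive estimated complexity effect,
    most valuable first."""
    kept = [(r, v) for r, v in rules.items() if v > 0]
    return [r for r, v in sorted(kept, key=lambda rv: -rv[1])]

print(select_rules(candidate_rules))
```

The greedy selection here is deliberately crude; the point is only that once an objective function is named, “which rules should we adopt?” becomes a well-posed question rather than a matter of decree.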
Our Ethical Principles
So this tells us what ethics is about, and what ethics aims to do. But it still doesn’t tell us how a mechanistic individual should develop his/her sense of ethics. A person can hardly be expected to think of some far-off big-picture complexity goal when deciding what constitutes good ethics. How can the above definition be made practical?
First, by recognizing what the eventual goal of ethics is, we have converted the construction of ethical principles into an optimization problem. This is a good first step, since we now know what it is we are trying to do when we talk about acting ethically.
Our solution to the optimization problem does not evaluate the objective function of complexity directly; rather, it relies on the observation that various human institutions (societies, religions, legal systems) have already come up with rules of thumb for this optimization. Once we recognize this, we use our judgment to decide which of the existing rules are relevant to the overall preservation of complexity and adopt an ethical system based on these rules. This solution may not be perfect, but it is more important that the ethical rules be easy to remember and follow – what use is a perfect but unintelligible and impractical rule? It is preferable, I think, to find simple and general rules, and to avoid special cases and exceptions as much as possible.
What’s more, once we recognize this as a valid scheme for the generation of ethical principles, we can free ourselves from the past. Faced with a new situation, we can find ethical rules tailored to the new situation, rather than trying to search for rules buried in existing religious systems that are applicable. A religious system may be able to help, but the effort of trying to reconcile religion with the new situation is often not worth it.
Time and space are modes in which we think and not conditions in which we live — Einstein
We have a specific way of perceiving things. For example, our mind perceives the world through a four-dimensional model: 3 spatial dimensions, and one (unidirectional) time dimension. But is this the only way the world around us can be perceived?
It is clear that any two representations of a piece of information are equivalent as long as there is a one-to-one mapping between them. For example, it does not matter whether we store a position in polar or Cartesian coordinates, because we have a one-to-one map from one to the other.
So, imagine that we meet an alien species. Would they necessarily have a unit of distance? Could it be that, instead of (x, y, z, t), they perceive (tx, ty, tz, t^3)? Their units of measurement would then have distance and time entangled together. They might say, “walk for 125 cube-seconds” (equivalent to us saying “walk for 5 seconds”). Our statement “the car is 10 kilometres away and the time now is 5 seconds” would translate to “the car is 50 km-seconds away”. Is there a logical reason why every species should perceive in the same units that we do? Maybe not!
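The map (x, y, z, t) → (tx, ty, tz, t³) is one-to-one for t > 0, so by the earlier argument the alien representation carries exactly the same information as ours. A minimal sketch of the map and its inverse, using the car example from above:

```python
def to_alien(x, y, z, t):
    """Our (x, y, z, t) -> hypothetical alien (t*x, t*y, t*z, t**3)."""
    return (t * x, t * y, t * z, t ** 3)

def from_alien(u, v, w, s):
    """Inverse map (valid for t > 0): recover t as the cube root of s,
    then divide the entangled coordinates by t."""
    t = s ** (1.0 / 3.0)
    return (u / t, v / t, w / t, t)

# The car example: 10 km away (along x), 5 seconds elapsed.
alien = to_alien(10.0, 0.0, 0.0, 5.0)   # (50.0, 0.0, 0.0, 125.0)
ours = from_alien(*alien)                # recovers (10, 0, 0, 5) up to rounding
```

The round trip recovers our coordinates (up to floating-point rounding in the cube root), which is all that equivalence of representations requires.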
This needn’t be restricted just to distance and time. A species might perceive taste and colour together, or even distance confounded with emotional state. “That’s red-sweet, my friend, but it’s happy-far!”
What’s the Next Big Thing in technology going to be? By big, I mean something revolutionary – like the World Wide Web, or at least (a little lower on the rungs) like social networking.
I think it’s going to be the integration of electronic devices into the human body. We already have scientists working on early forms of such integration.
I think this trend will continue, and within a few decades human-computer hybrids will be widespread.
What then? The scary thing about this is that it will mean the rich are suddenly fundamentally superior. The biological randomness that levels the playing field somewhat, say by making a poor person smart or strong, will be lost. Those who are born with the most money can be the smartest and the strongest. Which means they’ll make even more money. Which will let them buy even more hardware to become even stronger and smarter. And so on.
Will those who are poor at the start of this race be doomed?
In Darwinia, an inexplicable event abruptly replaces the whole of Europe with a counterpart from an alternate history, a wilderness that has experienced a parallel evolution and is complete with its own strange flora and fauna. The adventures of Guilford Law, an American photographer who accompanies a pseudo-scientific expedition into the heart of the new continent, make up most of the book.
I read Darwinia right after The Forge of God (Greg Bear), and the free flow of Wilson’s prose was a relief after Bear’s strained efforts. Wilson is eloquent and well-informed, able to spin an interesting story and engross the reader in even the simple details of the plant and animal life of Darwinia, as the new continent comes to be named. A variety of interesting philosophical questions are posed and discussed throughout the book, but it is not clear that Wilson has considered the questions he poses carefully enough. (Example: one of Wilson’s characters is quick to criticize the Hindu/Buddhist notion of renunciation, evidently without much conception of what it means.) The explanation for the change is interesting as well.
This book, however, has one glaring flaw: Wilson’s apparent indecisiveness about what the novel was going to be. It starts off as a jungle adventure story, like an expedition into the Amazon forest; in this phase it is hardly science fiction, more a rollicking adventure tale. Midway, it shifts gears completely, turning into a real science fiction story, and this kills the built-up mood of the first half. Suddenly, a colonial-era adventure tale becomes a galaxy-scale war between artificial intelligences and digitized naturals, fought within galaxy-sized defenders against entropy called noospheres. The story is still interesting, but it loses the gripping richness of the first half.
Nevertheless, the book is worth reading. It didn’t cease to be enjoyable at any point.
The Forge of God by Greg Bear is an interesting story about the invasion of Earth by two alien species — by proxy: the aliens send robots (intelligent, self-replicating von Neumann machines) either to eat the world or to protect selected humans.
While the premise is interesting and Bear has his moments, the problem with this book is that most of it is filler covering a lack of plot. Many writers have used this format successfully, notably Arthur C. Clarke, but doing so requires a talent for evocative, inventive prose, and Bear’s writing doesn’t quite match up. His idea of characterization is to write a couple of paragraphs describing each character’s clothes and job immediately after the character is introduced. He writes out his characters’ wistful thoughts in painstaking detail, but they don’t convey the right flavour, and his strained attempts at evocative phrases (such as the eponymous “Forge of God”) feel mechanical. In short, Bear tried to build a novel out of a very simple idea, no plot, and uninteresting prose.
So, are his ideas novel? As I read the book, it seemed that many of the ideas it contains were already well-used in the science and science-fiction communities; von Neumann machines, for instance, have been known for a long time. Bear creates mysteries at various points in the book, but each mystery simply peters out, its solution gratuitously provided in the narrative rather than resolved by the characters.
In the end, the idea behind the novel is not bad, and even the dogged, mechanical prose succeeds in telling a moderately interesting story. One just wishes it were developed better.