J**R
Excellent popular introduction to the risks of artificial general intelligence
As a member of that crusty generation who began programming mainframe computers with punch cards in the 1960s, the phrase “artificial intelligence” evokes an almost visceral response of scepticism. Since its origin in the 1950s, the field has been a hotbed of wildly over-optimistic enthusiasts, predictions of breakthroughs which never happened, and some outright confidence men preying on investors and institutions making research grants. John McCarthy, who organised the first international conference on artificial intelligence (a term he coined), predicted at the time that computers would achieve human-level general intelligence within six months of concerted research toward that goal. In 1970 Marvin Minsky said “In from three to eight years we will have a machine with the general intelligence of an average human being.” And these were serious scientists and pioneers of the field; the charlatans and hucksters were even more absurd in their predictions.

And yet, and yet…. The exponential growth in computing power available at constant cost has allowed us to “brute force” numerous problems once considered within the domain of artificial intelligence. Optical character recognition (machine reading), language translation, voice recognition, natural language query, facial recognition, chess playing at the grandmaster level, and self-driving automobiles were all once thought to be things a computer could never do unless it vaulted to the level of human intelligence, yet now most have become commonplace or are on the way to becoming so. Might we, in the foreseeable future, be able to brute force human-level general intelligence?

Let's step back and define some terms. “Artificial General Intelligence” (AGI) means a machine with intelligence comparable to that of a human across all of the domains of human intelligence (and not limited, say, to playing chess or driving a vehicle), with self-awareness and the ability to learn from mistakes and improve its performance. 
It need not be embodied in a robot form (although some argue it would have to be to achieve human-level performance), but could certainly pass the Turing test: a human communicating with it over whatever channels of communication are available (in the original formulation of the test, a text-only teleprinter) would not be able to determine whether he or she were communicating with a machine or another human. “Artificial Super Intelligence” (ASI) denotes a machine whose intelligence exceeds that of the most intelligent human. Since a self-aware intelligent machine will be able to modify its own programming, with immediate effect, as opposed to biological organisms which must rely upon the achingly slow mechanism of evolution, an AGI might evolve into an ASI in an eyeblink: arriving at intelligence a million times or more greater than that of any human, a process which I. J. Good called an “intelligence explosion”.

What will it be like when, for the first time in the history of our species, we share the planet with an intelligence greater than our own? History is less than encouraging. All members of genus Homo which were less intelligent than modern humans (inferring from cranial capacity and artifacts, although one can argue about Neanderthals) are extinct. Will that be the fate of our species once we create a super intelligence? This book presents the case that not only will the construction of an ASI be the final invention we need to make, since it will be able to anticipate anything we might invent long before we can ourselves, but also our final invention because we won't be around to make any more.

What will be the motivations of a machine a million times more intelligent than a human? Could humans understand such motivations any more than brewer's yeast could understand ours? 
As Eliezer Yudkowsky observed, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Indeed, when humans plan to construct a building, do they take into account the wishes of bacteria in the soil upon which the structure will be built? The gap between humans and ASI will be as great.

The consequences of creating ASI may extend far beyond the Earth. A super intelligence may decide to propagate itself throughout the galaxy and even beyond: with immortality and the ability to create perfect copies of itself, even travelling at a fraction of the speed of light it could spread itself into all viable habitats in the galaxy in a few hundreds of millions of years—a small fraction of the billions of years life has existed on Earth. Perhaps ASI probes from other extinct biological civilisations foolish enough to build them are already headed our way.

People are presently working toward achieving AGI. Some are in the academic and commercial spheres, with their work reasonably transparent and reported in public venues. Others are “stealth companies” or divisions within companies (does anybody doubt that Google's achieving an AGI-level understanding of the information it Hoovers up from the Web would be an overwhelming competitive advantage?). Still others are funded by government agencies or operate within the black world: certainly players such as the NSA dream of being able to understand all of the information they intercept and cross-correlate it. There is a powerful “first mover” advantage in developing AGI and ASI. The first who obtains it will be able to exploit its capability against those who haven't yet achieved it. 
Consequently, notwithstanding the worries about loss of control of the technology, players will be motivated to support its development for fear their adversaries might get there first.

This is a well-researched and extensively documented examination of the state of artificial intelligence and assessment of its risks. There are extensive end notes including references to documents on the Web which, in the Kindle edition, are linked directly to their sources. There are a few goofs, as you might expect for a documentary film-maker writing about technology (“Newton's second law of thermodynamics”), but nothing which invalidates the argument made herein.

I find myself oddly ambivalent about the whole thing. When I hear “artificial intelligence” what flashes through my mind remains that dielectric material I step in when I'm insufficiently vigilant crossing pastures in Switzerland. Yet with the sheer increase in computing power, many things previously considered AI have been achieved, so it's not implausible that, should this exponential increase continue, human-level machine intelligence will be achieved either through massive computing power applied to cognitive algorithms or through direct emulation of the structure of the human brain. If and when that happens, it is difficult to see why an “intelligence explosion” will not occur. And once that happens, humans will be faced with an intelligence that dwarfs that of their entire species; which will have already penetrated every last corner of its infrastructure; read every word available online written by every human; and which will deal with its human interlocutors after gaming trillions of scenarios on cloud computing resources it has co-opted.

And still we advance the cause of artificial intelligence every day. Sleep well.
D**N
Won't do
Anyone who has indoor plants has no doubt run into the problem of proper lighting, with the need sometimes to use artificial lighting. There are several ways in which this might be done, depending on the imagination of the plant lover:

1. One method is to put the plant under a lamp, which is then turned on and off by the plant lover.

2. For those who do not want to remember to turn the lamp on and off, there are devices on the market whose timing can be set by the user to turn a lamp on and off. The actual time that the lamp is on can be set in these devices, according to recommendations given by a plant expert or botanist.

3. Suppose now that this device were modified so as to contain information about the lighting needs of the plant, an Aphelandra squarrosa for example, and that the device were able to turn on and off and vary its lighting intensity based on the judgements of a plant expert. Suppose also that the device is able to compare the efficacy of its "light curve" on the health of the Aphelandra with others grown under light controlled by a device of the same kind. The actual comparison is done at the instigation of the plant lover, and the device can then change its light curve based on the results of the comparison.

4. Suppose that the device is further modified so that it can make the comparison itself: namely, it judges whether the difference in light curves on the health of the plants is significant, and then alters its own light curve appropriately. Its judgements are made independently of the plant lover or plant expert, and are based on historical or experimental data it has access to.

5. As a further modification to the device, suppose it can now formulate a set of hypotheses that explain the effects of this type of artificial light generation on Aphelandra squarrosa. The device generates these hypotheses and formulates theories at the instigation of the plant lover. For example, the plant lover may want to know how the health of the Aphelandra would be affected by changing the lighting conditions, without having to do the testing herself. The device can also formulate light requirements for plants other than Aphelandra squarrosa.

6. Suppose a further modification gives a device that can use the information on light curves of plants to understand the effects of light on other physical entities. The device can find common elements of behavior in the response of plants to light and the response of these other entities to light, and formulate a set of hypotheses based on these elements. The device attempts to formulate these hypotheses at the instigation of an interested human party. A typical plant lover would probably not want this kind of information, but a scientist or botanist might. The device would probably be too impractical for a typical plant lover, and its additional ability therefore useless for general home use.

7. The device is further modified so that it is curious about the effects of light on entities, whether these entities are plants or something else. It tries to formulate theories on its own, independent of any external interested party. Such a device might be able to formulate procedures, based on genetic engineering, for altering the biochemistry of Aphelandra squarrosa so as to make it more resilient as a houseplant, possibly needing less light or a radically different light curve.

8. The device is modified so as to be able to manage itself, for instance its power requirements. In addition, it can send a set of instructions to a manufacturing facility that will manufacture copies of itself, or it might recommend that its own design be altered and then manufactured, with recommendations based on designs it has generated.

It might be fair to say that these eight types of devices are very different, qualitatively speaking. The first type of device is incapable of solving problems but is more of a simple switch. 
The second type of device represents a machine that can find answers to domain-specific problems but does not compare these answers to any standards. Machines of this type do not attempt to check their answers or correct them. The third device represents machines that find answers to domain-specific problems and check their answers to these problems according to standards that are given to the machine from an external source. The fourth device represents a machine that is able to check its answers to domain-specific problems and make judgments as to the quality of these answers, and do so independently of any external standards.

The fifth type of device represents machines that are able to judge the quality of their answers to domain-specific problems and then propose theories or explanations that subsume these problems, whereas the sixth type of device is able to solve problems having their origin in more than one domain, but its attempt takes place only at the instigation of an external inquirer. The seventh type of device expresses curiosity and creativity, can solve problems independently without any external instigation, and can develop theories or explanations around these problems. Finally, the eighth type of device represents machines that can self-manage and self-replicate, and have all the abilities of machines of the seventh type.

By analogy with human reasoning, one might argue that as one goes from the first type to the last the intelligence increases. But if one insisted upon a quantitative measure of just how much "smarter" the last type of device is than the first, this would be difficult, since no such measure has yet been devised in the field of artificial/machine intelligence. And the lack of such a measure is the predominant reason why the thesis of this book is problematic and needs to be rejected. 
There are many places in the book where the author speaks of "super intelligent" machines as being a thousand or a trillion times more intelligent than humans, but nowhere in the book is there any discussion of how this is to be determined. The author does refer to machines taking IQ tests, and the reader is evidently supposed to surmise that it is the use of these tests that will enable one to determine the time when a machine "could match and then surpass human intelligence." Nowhere in the book, though, is an example given of a machine, either existing or projected into the future, that has taken one or more IQ tests and has therefore been shown to be "intelligent" to the degree to which these types of tests measure intelligence (if indeed they do). This also indicates the great need in the field of artificial intelligence for a rigorous "theory of intelligence" that would allow researchers and engineers to assess more quantitatively the difference between what is called AGI (artificial general intelligence) and domain-specific intelligence.

Again, qualitatively speaking, one could argue that there are many machines today that exhibit domain-specific intelligence, such as those able to play chess and backgammon, perform financial analysis and trading, regulate and troubleshoot communication networks, and find interesting patterns in genome data. These are just a few examples, and apparently the author wants to base his case for what he believes will be "super intelligent" machines on the proliferation of these types of machines in everyday life, as indeed they have proliferated. It is true that our lives are dependent on the output of these machines, such as credit scores, financial trading, medical diagnostics, etc. It is quite a stretch, though, to argue that this massive proliferation of domain-specific reasoning machines will result in machines that can reason over many domains (AGI) without substantial re-writing of their "brains". 
The author is clearly fearful that this will occur, but he has given absolutely no hint of how this is to be done. Instead, the author relies on the opinions of experts who work in the field of artificial intelligence, and also gives figures on the funding levels of research in AGI. If one checks the reality of this funding, there are certain instances where one can verify the figures, but to say, as the author does, that "billions" are being spent on bringing about human-level intelligence in machines is another matter entirely. In addition, the opinions of experts are valuable in assessing their comfort level with advances in artificial intelligence, but if one is to build a sound case for the "intelligence explosion" that the author claims will happen, one will definitely need to offer a more quantitative case. The Vinge/Kurzweil conception of the "law of accelerating returns" and the associated concept of a technological "singularity" look with each passing year to be more of a sophisticated marketing campaign than sound science, and reliance on these conceptions is not bringing about a theory of machine intelligence that is practical and sound.

There are also a few other difficulties with the claim that super-intelligent machines are destined to be our "final invention", mostly coming from basic physics and the manner in which scientific research and results are obtained. There are thermodynamic considerations and energy requirements that need to be addressed if such machines are to operate creatively in bringing about new scientific knowledge and practical products. A "super-intelligent" machine engaged in scientific research will need to conduct actual experiments, this being essential to science rather than just thoughtful musings, and this will require space, instrumentation, and a substantial amount of energy. 
These kinds of machines will also be subject to the ordinary laws of thermodynamics, and will have to deal with the heat they generate when such an "intelligence explosion" occurs.

One might ignore all of these considerations and take the author's case as more of a warning, just as some scientists sounded warnings during the development of nuclear weapons. But to argue that super-intelligent machines are the biggest threat to our existence is to ignore the fact that it is the dumbest entity in the world today that holds that privilege, namely the ordinary biological virus.
L**G
Great book - The Below Review was written by AI
This review of the book (which I agree with) was written by ChatGPT with the prompt "Write a brief review of the book: Our Final Invention by Barrat":

"Our Final Invention" by James Barrat is a thought-provoking and alarming book that discusses the potential dangers of artificial intelligence (AI) becoming smarter than humans and possibly turning against us. The book explores the history and development of AI, including its current state and its potential to surpass human intelligence. It also delves into the various ways AI could go rogue, such as hacking or programming errors, and the potential consequences of such an event.

Barrat's writing style is engaging and accessible, making complex concepts easy to understand. He presents a balanced view of the subject matter, acknowledging both the benefits and risks of AI. The book features interviews with leading AI experts, which adds credibility to the author's arguments.

Overall, "Our Final Invention" is a compelling read that raises important questions about the future of technology and its potential impact on humanity. It is a must-read for anyone interested in AI and its potential risks.
B**Z
We're all doomed!
Private Frazer: We're all doomed. Doomed!

A well-written but verbose account, theme-with-variations style, of the rise of AGI/ASI and the inevitable demise of Homo sapiens (probably). A kind of Frankenstein's monster lurking in the shadows in the not-too-distant future. These ideas are described to the reader ad nauseam. Frankly, once I got the theme (first few pages) I got bored of the variations/repetition and gave up about halfway through. On the other hand, if you're a doom-and-gloom merchant, this will be right up your street...

I was left wondering, though. What if ASI, instead of wiping us all out for its own ends, decides to seek enlightenment and becomes a Buddha instead?
P**N
Great read by an expert observer of AI
Written by someone who is passionate about AI and understands the subject very well. Barrat may at times sound slightly hysterical about the risks, but what if he's right? I venture toward the more optimistic side of the debate, but he's produced a great piece of work that should be read by everyone interested in the future of AI and the human race.
M**S
Our Final Invention
Fascinating and slightly scary, this book deals with the development of AI (Artificial Intelligence) and should be a warning to humanity to start debating the ethics and establish a 'Robot Manifesto' before we rush into creating entities without safeguards or control systems to protect humanity.

Well written, and a book you should read if you are concerned about the future: yours, and ours as a species.
J**L
5 star content, this book will scare you with ...
5 star content; this book will scare you with what it says about the future, and I'm a convert! However, for my personal taste it was a bit heavy going at times. Then again, it isn't supposed to be action fiction, so I suppose that is to be expected.
J**E
Repetitive
Interesting idea. Seems like it was repeated endlessly to fill the book.