Stop Calling Everything AI, Machine-Learning Pioneer Says

THE INSTITUTE
Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.
He notes that the imitation of human thinking is not the sole goal of machine learning—the engineering field that underlies recent progress in AI—or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.
“People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don’t have that, but people are talking as if we do.”
Jordan should know the difference, after all. The IEEE Fellow is one of the world’s leading authorities on machine learning. In 2016 he was ranked as the most influential computer scientist by a program that analyzed research publications, Science reported. Jordan helped transform unsupervised machine learning, which can find structure in data without preexisting labels, from a collection of unrelated algorithms to an intellectually coherent field, the Engineering and Technology History Wiki explains. Unsupervised learning plays an important role in scientific applications where there is an absence of established theory that can provide labeled training data.
Jordan’s contributions have earned him many awards including this year’s Ulf Grenander Prize in Stochastic Theory and Modeling from the American Mathematical Society. Last year he received the IEEE John von Neumann Medal for his contributions to machine learning and data science.
In recent years, he has been on a mission to help scientists, engineers, and others understand the full scope of machine learning. He says he believes that developments in machine learning reflect the emergence of a new field of engineering. He draws parallels to the emergence of chemical engineering in the early 1900s from foundations in chemistry and fluid mechanics, noting that machine learning builds on decades of progress in computer science, statistics, and control theory. Moreover, he says, it is the first engineering field that is humancentric, focused on the interface between people and technology.
“While the science-fiction discussions about AI and super intelligence are fun, they are a distraction,” he says. “There’s not been enough focus on the real problem, which is building planetary-scale machine learning–based systems that actually work, deliver value to humans, and do not amplify inequities.”
As a child of the ’60s, Jordan has been interested in philosophical and cultural perspectives on how the mind works. He was inspired to study psychology and statistics after reading British logician Bertrand Russell’s autobiography. Russell explored thought as a logical mathematical process.
“Thinking about thought as a logical process and realizing that computers had arisen from software and hardware implementations of logic, I saw a parallel to the mind and the brain,” Jordan says. “It felt like philosophy could transition from vague discussions about the mind and brain to something more concrete, algorithmic, and logical. That attracted me.”
Jordan studied psychology at Louisiana State University, in Baton Rouge, where he earned a bachelor’s degree in 1978 in the subject. He earned a master’s degree in mathematics in 1980 from Arizona State University, in Tempe, and in 1985 a doctorate in cognitive science from the University of California, San Diego.
When he entered college, the field of machine learning didn’t exist. It had just begun to emerge when he graduated.
“While I was intrigued by machine learning,” he says, “I already felt at the time that the deeper principles needed to understand learning were to be found in statistics, information theory, and control theory, so I didn’t label myself as a machine-learning researcher. But I ended up embracing machine learning because there were interesting people in it, and creative work was being done.”
In 2003 he and his students developed latent Dirichlet allocation, a probabilistic framework for learning about the topical structure of documents and other data collections in an unsupervised manner, according to the Wiki. The technique lets the computer, not the user, discover patterns and information on its own from documents. The framework is one of the most popular topic modeling methods used to discover hidden themes and classify documents into categories.
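To see what “unsupervised” means in practice, here is a minimal sketch using scikit-learn’s LatentDirichletAllocation; the tiny corpus, the choice of two topics, and all other parameters are illustrative, not anything from Jordan’s own work.

```python
# A toy corpus; LDA should separate a "medical" theme from a "rocketry"
# theme without ever being told that those labels exist.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the patient was treated with a new drug",
    "the clinical trial showed the drug reduced symptoms",
    "the rocket engine completed a static fire test",
    "engineers reviewed telemetry from the rocket launch",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)   # bag-of-words counts

# Model each document as a mixture of 2 latent topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [vocab[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", top_words)
```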
Jordan’s current projects incorporate ideas from economics into his earlier blending of computer science and statistics. He argues that the goal of learning systems is to make decisions, or to support human decision-making, and decision-makers rarely operate in isolation. They interact with other decision-makers, each of whom might have different needs and values, and the overall interaction needs to be informed by economic principles. Jordan is developing “a research agenda in which agents learn about their preferences from real-world experimentation, where they blend exploration and exploitation as they collect data to learn from, and where market mechanisms can structure the learning process—providing incentives for learners to gather certain kinds of data and make certain kinds of coordinated decisions. The beneficiary of such research will be real-world systems that bring producers and consumers together in learning-based markets that are attentive to social welfare.”
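The blend of exploration and exploitation that Jordan mentions can be illustrated with the simplest possible decision-making learner, an epsilon-greedy bandit. This is only a toy sketch with invented payoff numbers, not the market-based machinery of his research agenda.

```python
# Epsilon-greedy bandit: spend most decisions exploiting the action that
# looks best so far, but keep exploring occasionally to refine estimates.
import random

true_payoffs = [0.3, 0.5, 0.7]   # hidden average reward of each action
estimates = [0.0] * 3
counts = [0] * 3
epsilon = 0.1                    # fraction of decisions spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print("estimated payoffs:", [round(e, 2) for e in estimates])
```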
In 2019 Jordan wrote “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” published in the Harvard Data Science Review. He explains in the article that the term AI is misunderstood not only by the public but also by technologists. Back in the 1950s, when the term was coined, he writes, people aspired to build computing machines that possessed human-level intelligence. That aspiration still exists, he says, but what has happened in the intervening decades is something different. Computers have not become intelligent per se, but they have provided capabilities that augment human intelligence, he writes. Moreover, they have excelled at low-level pattern-recognition capabilities that could be performed in principle by humans but at great cost. Machine learning–based systems are able to detect fraud in financial transactions at massive scale, for example, thereby catalyzing electronic commerce. They are essential in the modeling and control of supply chains in manufacturing and health care. They also help insurance agents, doctors, educators, and filmmakers.
Despite such developments being referred to as “AI technology,” he writes, the underlying systems do not involve high-level reasoning or thought. The systems do not form the kinds of semantic representations and inferences that humans are capable of. They do not formulate and pursue long-term goals.
“For the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations,” he writes. “We will need well-thought-out interactions of humans and computers to solve our most pressing problems. We need to understand that the intelligent behavior of large-scale systems arises as much from the interactions among agents as from the intelligence of individual agents.”
Moreover, he emphasizes, human happiness should not be an afterthought when developing technology. “We have a real opportunity to conceive of something historically new: a humancentric engineering discipline,” he writes.
Jordan’s perspective includes a revitalized discussion of engineering’s role in public policy and academic research. He points out that when people talk about social science, it sounds appealing, but the term social engineering sounds unappealing. The same holds true for genome science versus genome engineering.
“I think that we’ve allowed the term engineering to become diminished in the intellectual sphere,” he says. The term science is used instead of engineering when people wish to refer to visionary research. Phrases such as “just engineering” don’t help.
“I think that it’s important to recall that for all of the wonderful things science has done for the human species, it really is engineering—civil, electrical, chemical, and other engineering fields—that has most directly and profoundly increased human happiness.”
Jordan says he values IEEE particularly for its investment in building mechanisms whereby communities can connect with each other through conferences and other forums.
He also appreciates IEEE’s thoughtful publishing policies. Many of his papers are available in the IEEE Xplore Digital Library.
“I think commercial publishing companies have built a business model that is now ineffectual and is actually blocking the flow of information,” he says. Through the open-access journal IEEE Access, he says, the organization is “allowing—and helping with—the flow of information.”
Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they’re involved in. She has a bachelor’s degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master’s degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.
Jordan’s observations about AI are very perceptive, and his comments on public perceptions of engineering are also important. Perhaps the public is not as concerned about “science” because it’s intended to gain knowledge, but they worry about “engineering” because it means doing something that they may not want done.
Calling stuff AI is a marketing ploy that will never stop as long as it can fool people into believing what they are trying to sell.

Artificial Intelligence is Genuine Stupidity; we need IA – intelligence amplification – to help humans.

And ML has no learning involved at all. Again, it is all marketing hype to fool people into thinking it is better than it is. It should properly be called training.

The problem is that it can only do what it is trained to do, including all the implicit errors built into the dirty data used, as well as the missing data that was not used.

As Heinlein said: Training is for Seals.

ASSuming that correlation is causation ensures that so-called AI will always fail BIG, and worse than the many very small ‘successes’ it may have, which reinforce the false belief in AI.

A half century ago, I was in (some) AI at Purdue. At that time, it was mostly about pattern recognition and adaptive systems. Today, it seems to still be about pattern recognition and adaptive systems.
Lattice confinement fusion eliminates massive magnets and powerful lasers
Physicists first suspected more than a century ago that the fusing of hydrogen into helium powers the sun. It took researchers many years to unravel the secrets by which lighter elements are smashed together into heavier ones inside stars, releasing energy in the process. And scientists and engineers have continued to study the sun’s fusion process in hopes of one day using nuclear fusion to generate heat or electricity. But the prospect of meeting our energy needs this way remains elusive.
The extraction of energy from nuclear fission, by contrast, happened relatively quickly. Fission in uranium was discovered in 1938, in Germany, and it was only four years until the first nuclear “pile” was constructed in Chicago, in 1942.
There are currently about 440 fission reactors operating worldwide, which together can generate about 400 gigawatts of power with zero carbon emissions. Yet these fission plants, for all their value, have considerable downsides. The enriched uranium fuel they use must be kept secure. Devastating accidents, like the one at Fukushima in Japan, can leave areas uninhabitable. Fission waste by-products need to be disposed of safely, and they remain radioactive for thousands of years. Consequently, governments, universities, and companies have long looked to fusion to remedy these ills.
Among those interested parties is NASA. The space agency has significant energy needs for deep-space travel, including probes and crewed missions to the moon and Mars. For more than 60 years, photovoltaic cells, fuel cells, or radioisotope thermoelectric generators (RTGs) have provided power to spacecraft. RTGs, which rely on the heat produced when nonfissile plutonium-238 decays, have demonstrated excellent longevity—both Voyager probes use such generators and remain operational nearly 45 years after their launch, for example. But these generators convert heat to electricity at roughly 7.5 percent efficiency. And modern spacecraft need more power than an RTG of reasonable size can provide.
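To get a feel for that efficiency bottleneck, here is a back-of-envelope sketch. The 7.5 percent figure comes from the article; the plutonium-238 specific power of roughly 0.56 watts per gram is a standard published value, and the power targets are illustrative.

```python
# Why RTGs run out of headroom: at ~7.5% heat-to-electricity conversion,
# electrical demands translate into large amounts of decay heat and Pu-238.
EFFICIENCY = 0.075           # RTG thermal-to-electric conversion
PU238_WATTS_PER_GRAM = 0.56  # decay heat of plutonium-238

for p_electric_w in (100, 1_000, 10_000):
    p_thermal_w = p_electric_w / EFFICIENCY
    pu_mass_kg = p_thermal_w / PU238_WATTS_PER_GRAM / 1000
    print(f"{p_electric_w:>6} W electric -> {p_thermal_w/1000:6.1f} kW thermal, "
          f"~{pu_mass_kg:.0f} kg of Pu-238")
```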
One promising alternative is lattice confinement fusion (LCF), a type of fusion in which the nuclear fuel is bound in a metal lattice. The confinement encourages positively charged nuclei to fuse because the high electron density of the conductive metal reduces the likelihood that two nuclei will repel each other as they get closer together.
The deuterated erbium (chemical symbol ErD3) is placed into thumb-size vials, as shown in this set of samples from a 20 June 2018 experiment. Here, the vials are arrayed pre-experiment, with wipes on top of the metal to keep the metal in position during the experiment. The metal has begun to crack and break apart, indicating it is fully saturated. NASA
The vials are placed upside down to align the metal with the gamma ray beam. Gamma rays have turned the clear glass amber. NASA
We and other scientists and engineers at NASA Glenn Research Center, in Cleveland, are investigating whether this approach could one day provide enough power to operate small robotic probes on the surface of Mars, for example. LCF would eliminate the need for fissile materials such as enriched uranium, which can be costly to obtain and difficult to handle safely. LCF promises to be less expensive, smaller, and safer than other strategies for harnessing nuclear fusion. And as the technology matures, it could also find uses here on Earth, such as for small power plants for individual buildings, which would reduce fossil-fuel dependency and increase grid resiliency.
Physicists have long thought that fusion should be able to provide clean nuclear power. After all, the sun generates power this way. But the sun has a tremendous size advantage. At nearly 1.4 million kilometers in diameter, with a plasma core 150 times as dense as liquid water and heated to 15 million °C, the sun uses heat and gravity to force particles together and keep its fusion furnace stoked.
On Earth, we lack the ability to produce energy this way. A fusion reactor needs to reach a critical level of fuel-particle density, confinement time, and plasma temperature (called the Lawson criterion, after its creator, John Lawson) to achieve a net-positive energy output. And so far, nobody has done that.
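The criterion can be summarized as a single “triple product” that the plasma must exceed. A minimal sketch with illustrative, roughly ITER-class numbers; the threshold is the widely quoted deuterium-tritium ignition figure, not an official specification.

```python
# Back-of-envelope check against the triple-product form of the
# Lawson criterion for deuterium-tritium fuel.
LAWSON_DT = 3e21   # keV * s / m^3, commonly quoted ignition threshold

def triple_product(density_per_m3, temperature_keV, confinement_s):
    return density_per_m3 * temperature_keV * confinement_s

ntt = triple_product(1e20, 15, 3)
print(f"n*T*tau = {ntt:.1e} keV*s/m^3, "
      f"{'meets' if ntt >= LAWSON_DT else 'falls short of'} the D-T threshold")
```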
In lattice confinement fusion (LCF), a beam of gamma rays is directed at a sample of erbium [shown here] or titanium saturated with deuterons. Occasionally, gamma rays of sufficient energy will break apart a deuteron in the metal lattice into its constituent proton and neutron.
The neutron collides with another deuteron in the lattice, imparting some of its own momentum to the deuteron. The electron-screened deuteron is now energetic enough to overcome the Coulomb barrier, which would typically repel it from another deuteron.
When the energetic deuteron fuses with another deuteron in the lattice, it can produce a helium-3 nucleus (helion) and give off useful energy. A leftover neutron could provide the push for another energetic deuteron elsewhere.
Alternatively, the fusing of the two deuterons could result in a hydrogen-3 nucleus (triton) and a leftover proton. This reaction also produces useful energy.
Another possible reaction in lattice confinement fusion would happen if an erbium atom instead rips apart the energetic deuteron and absorbs the proton. The extra proton changes the erbium atom to thulium and releases energy.
If the erbium atom absorbs the neutron, it becomes a new isotope of erbium. This is an Oppenheimer-Phillips (OP) stripping reaction. The proton from the broken-apart deuteron heats the lattice.
Fusion reactors commonly utilize two different hydrogen isotopes: deuterium (one proton and one neutron) and tritium (one proton and two neutrons). These are fused into helium nuclei (two protons and two neutrons)—also called alpha particles—with an unbound neutron left over.
Existing fusion reactors rely on the resulting alpha particles—and the energy released in the process of their creation—to further heat the plasma. The plasma will then drive more nuclear reactions with the end goal of providing a net power gain. But there are limits. Even in the hottest plasmas that reactors can create, alpha particles will mostly skip past additional deuterium nuclei without transferring much energy. For a fusion reactor to be successful, it needs to create as many direct hits between alpha particles and deuterium nuclei as possible.
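The energy bookkeeping of these reactions, including the two deuteron-deuteron channels in the step-by-step description above, can be checked against standard atomic mass-excess tables. A short sketch, with mass excesses rounded from the published Atomic Mass Evaluation values:

```python
# Energy released (Q-value) = total mass excess in minus total mass
# excess out. Values in MeV, rounded from the Atomic Mass Evaluation.
MASS_EXCESS = {
    "n":    8.071,  # neutron
    "p":    7.289,  # proton
    "d":   13.136,  # deuteron
    "t":   14.950,  # triton (hydrogen-3)
    "he3": 14.931,  # helion (helium-3)
    "he4":  2.425,  # alpha particle (helium-4)
}

def q_value(reactants, products):
    return (sum(MASS_EXCESS[s] for s in reactants)
            - sum(MASS_EXCESS[s] for s in products))

print(f"d + d -> he3 + n : {q_value(['d','d'], ['he3','n']):5.2f} MeV")
print(f"d + d -> t  + p  : {q_value(['d','d'], ['t','p']):5.2f} MeV")
print(f"d + t -> he4 + n : {q_value(['d','t'], ['he4','n']):5.2f} MeV")
```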
In the 1950s, scientists created various magnetic-confinement fusion devices, the most well known of which were Andrei Sakharov’s tokamak and Lyman Spitzer’s stellarator. Setting aside differences in design particulars, each attempts the near-impossible: Heat a gas enough for it to become a plasma and magnetically squeeze it enough to ignite fusion—all without letting the plasma escape.
Inertial-confinement fusion devices followed in the 1970s. They used lasers and ion beams either to compress the surface of a target in a direct-drive implosion or to energize an interior target container in an indirect-drive implosion. Unlike magnetically confined reactions, which can last for seconds or even minutes (and perhaps one day, indefinitely), inertial-confinement fusion reactions last less than a microsecond before the target disassembles, thus ending the reaction.
Both types of devices can create fusion, but so far they are incapable of generating enough energy to offset what’s needed to initiate and maintain the nuclear reactions. In other words, more energy goes in than comes out. Hybrid approaches, collectively called magneto-inertial fusion, face the same issues.

Proton: Positively charged protons (along with neutrons) make up atomic nuclei. One component of lattice confinement fusion (LCF) may occur when a proton is absorbed by an erbium atom in a deuteron stripping reaction.

Neutron: Neutrally charged neutrons (along with protons) make up atomic nuclei. In fusion reactions, they impart energy to other particles such as deuterons. They also can be absorbed in Oppenheimer-Phillips reactions.

Erbium & Titanium: Erbium and titanium are the metals of choice for LCF. Relatively colossal compared with the other particles involved, they hold the deuterons and screen them from one another.

Deuterium: Deuterium is hydrogen with one proton and one neutron in its nucleus (hydrogen with just the proton is protium). Deuterium’s nucleus, called a deuteron, is crucial to LCF.

Deuteron: The nucleus of a deuterium atom. Deuterons are vital to LCF—the actual fusion instances occur when an energetic deuteron smashes into another in the lattice. They can also be broken apart in stripping reactions.

Hydrogen-3 (Tritium): One possible resulting particle from deuteron-deuteron fusion, alongside a leftover proton. Tritium has one proton and two neutrons in its nucleus, which is also called a triton.

Helium-3: One possible resulting particle from deuteron-deuteron fusion, alongside a leftover neutron. Helium-3 has two protons and one neutron in its nucleus, which is also called a helion.

Alpha particle: The core of a normal helium atom (two protons and two neutrons). Alpha particles are a commonplace result of typical fusion reactors, which often smash deuterium and tritium particles together. They can also emerge from LCF reactions.

Gamma ray: Extremely energetic photons that are used to kick off the fusion reactions in a metal lattice by breaking apart deuterons.
Current fusion reactors also require copious amounts of tritium as one part of their fuel mixture. The most reliable source of tritium is a fission reactor, which somewhat defeats the purpose of using fusion.
The fundamental problem of these techniques is that the atomic nuclei in the reactor need to be energetic enough—meaning hot enough—to overcome the Coulomb barrier, the natural tendency for the positively charged nuclei to repel one another. Because of the Coulomb barrier, fusing atomic nuclei have a very small fusion cross section, meaning the probability that two particles will fuse is low. You can increase the cross section by raising the plasma temperature to 100 million °C, but that requires increasingly heroic efforts to confine the plasma. As it stands, after billions of dollars of investment and decades of research, these approaches, which we’ll call “hot fusion,” still have a long way to go.
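A rough sketch of the scales involved, treating the deuterons as point charges; the 3-femtometer “touching” distance is a conventional illustrative choice, and quantum tunneling, which is what actually lets fusion proceed at these temperatures, is omitted.

```python
# Coulomb barrier between two charge-1 nuclei versus the mean thermal
# energy of a 100-million-degree plasma.
COULOMB_E2 = 1.44      # e^2 / (4*pi*epsilon_0), in MeV * femtometers
BOLTZMANN = 8.617e-11  # MeV per kelvin

r_fm = 3.0                               # rough range of the nuclear force
barrier_keV = COULOMB_E2 / r_fm * 1000   # ~480 keV

temperature_K = 100e6
thermal_keV = 1.5 * BOLTZMANN * temperature_K * 1000  # mean KE = (3/2)kT

print(f"Coulomb barrier : ~{barrier_keV:.0f} keV")
print(f"thermal energy  : ~{thermal_keV:.0f} keV")    # ~13 keV -- far below
```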
The barriers to hot fusion here on Earth are indeed tremendous. As you can imagine, they’d be even more overwhelming on a spacecraft, which can’t carry a tokamak or stellarator onboard. Fission reactors are being considered as an alternative—NASA successfully tested the Kilopower fission reactor at the Nevada National Security Site in 2018 using a uranium-235 core about the size of a paper towel roll. The Kilopower reactor could produce up to 10 kilowatts of electric power. The downside is that it required highly enriched uranium, which would have brought additional launch safety and security concerns. This fuel also costs a lot.
But fusion could still work, even if the conventional hot-fusion approaches are nonstarters. LCF technology could be compact enough, light enough, and simple enough to serve for spacecraft.
How does LCF work? Remember that we earlier mentioned deuterium, the isotope of hydrogen with one proton and one neutron in its nucleus. Deuterided metals—erbium and titanium, in our experiments—have been “saturated” with either deuterium or deuterium atoms stripped of their electrons (deuterons). This is possible because the metal naturally exists in a regularly spaced lattice structure, which creates equally regular slots in between the metal atoms for deuterons to nest.
In a tokamak or a stellarator, the hot plasma is limited to a density of 10^14 deuterons per cubic centimeter. Inertial-confinement fusion devices can momentarily reach densities of 10^26 deuterons per cubic centimeter. It turns out that metals like erbium can indefinitely hold deuterons at a density of nearly 10^23 per cubic centimeter—far higher than the density that can be attained in a magnetic-confinement device, and only three orders of magnitude below that attained in an inertial-confinement device. Crucially, these metals can hold that many ions at room temperature.
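The same comparison in code form, using only the densities quoted above:

```python
# Deuteron densities, per cubic centimeter, as quoted in the text.
TOKAMAK = 1e14    # magnetically confined plasma
ICF_PEAK = 1e26   # inertial confinement, momentary
LATTICE = 1e23    # erbium lattice, sustained at room temperature

print(f"lattice / tokamak : {LATTICE / TOKAMAK:.0e}  (a billion times denser)")
print(f"lattice / ICF peak: {LATTICE / ICF_PEAK:.0e}  (three orders below)")
```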
The deuteron-saturated metal forms a plasma with neutral charge. The metal lattice confines and electron-screens the deuterons, keeping each of them from “seeing” adjacent deuterons (which are all positively charged). This screening increases the chances of more direct hits, which further promotes the fusion reaction. Without the electron screening, two deuterons would be much more likely to repel each other.
Using a metal lattice that has screened a dense, cold plasma of deuterons, we can jump-start the fusion process using what is called a Dynamitron electron-beam accelerator. The electron beam hits a tantalum target and produces gamma rays, which then irradiate thumb-size vials containing titanium deuteride or erbium deuteride.
When a gamma ray of sufficient energy—about 2.2 megaelectron volts (MeV)—strikes one of the deuterons in the metal lattice, the deuteron breaks apart into its constituent proton and neutron. The released neutron may collide with another deuteron, accelerating it much as a pool cue accelerates a ball when striking it. This second, energetic deuteron then goes through one of two processes: screened fusion or a stripping reaction.
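The 2.2 MeV threshold is not arbitrary: it is the deuteron’s binding energy, recoverable directly from the rest masses of the particles involved (CODATA values, rounded).

```python
# Minimum photon energy to photodissociate a deuteron = its binding
# energy = (mass of proton + mass of neutron - mass of deuteron) * c^2.
M_PROTON = 938.272     # MeV/c^2
M_NEUTRON = 939.565    # MeV/c^2
M_DEUTERON = 1875.613  # MeV/c^2

binding_MeV = M_PROTON + M_NEUTRON - M_DEUTERON
print(f"deuteron binding energy ~ {binding_MeV:.3f} MeV")  # ~2.224 MeV
```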
In screened fusion, which we have observed in our experiments, the energetic deuteron fuses with another deuteron in the lattice. The fusion reaction will result in either a helium-3 nucleus and a leftover neutron or a hydrogen-3 nucleus and a leftover proton. These fusion products may fuse with other deuterons, creating an alpha particle, or with another helium-3 or hydrogen-3 nucleus. Each of these nuclear reactions releases energy, helping to drive more instances of fusion.
In a stripping reaction, an atom like the titanium or erbium in our experiments strips the proton or neutron from the deuteron and captures that proton or neutron. Erbium, titanium, and other heavier atoms preferentially absorb the neutron because the proton is repulsed by the positively charged nucleus (called an Oppenheimer-Phillips reaction). It is theoretically possible, although we haven’t observed it, that the electron screening might allow the proton to be captured, transforming erbium into thulium or titanium into vanadium. Both kinds of stripping reactions would produce useful energy.
To be sure that we were actually producing fusion in our vials of erbium deuteride and titanium deuteride, we used neutron spectroscopy. This technique detects the neutrons that result from fusion reactions. When deuteron-deuteron fusion produces a helium-3 nucleus and a neutron, that neutron has an energy of 2.45 MeV. So when we detected 2.45 MeV neutrons, we knew fusion had occurred. That evidence is what allowed us to publish our initial results in Physical Review C.
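The 2.45 MeV signature follows from two-body kinematics: for reactants that are nearly at rest, momentum conservation splits the reaction energy in inverse proportion to mass, so the light neutron carries most of it. A quick check:

```python
# Neutron energy from d + d -> he3 + n, reactants approximately at rest.
Q_MEV = 3.27          # energy released by the reaction
M_NEUTRON = 939.565   # MeV/c^2
M_HELION = 2808.391   # MeV/c^2 (helium-3 nucleus)

e_neutron = Q_MEV * M_HELION / (M_HELION + M_NEUTRON)
print(f"neutron energy ~ {e_neutron:.2f} MeV")   # ~2.45 MeV
```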
Electron screening makes it seem as though the deuterons are fusing at a temperature of 11 million °C. In reality, the metal lattice remains much cooler than that, although it heats up somewhat from room temperature as the deuterons fuse.
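One way to read that figure, on the assumption that “fusing at 11 million °C” means the screened deuterons behave as if they carried the mean thermal energy of a plasma at that temperature:

```python
# Mean kinetic energy of a particle in a plasma at ~11 million kelvin.
BOLTZMANN = 8.617e-11  # MeV per kelvin
T_KELVIN = 11e6        # ~11 million degrees C; the 273-degree offset
                       # is negligible at this scale

energy_keV = 1.5 * BOLTZMANN * T_KELVIN * 1000
print(f"effective deuteron energy ~ {energy_keV:.1f} keV")   # ~1.4 keV
```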
Rich Martin [left], a research engineer, and coauthor Bruce Steinetz, principal investigator for the LCF project’s precursor experiment, examine samples after a run. NASA
Overall, in LCF, most of the heating occurs in regions just tens of micrometers across. This is far more efficient than in magnetic- or inertial-confinement fusion reactors, which heat up the entire fuel amount to very high temperatures. LCF isn’t cold fusion—it still requires energetic deuterons and can use neutrons to heat them. However, LCF also removes many of the technological and engineering barriers that have prevented other fusion schemes from being successful.
Although the neutron recoil technique we’ve been using is the most efficient means to transfer energy to cold deuterons, producing neutrons from a Dynamitron is energy intensive. There are other, lower-energy methods of producing neutrons, including using an isotopic neutron source, such as americium-beryllium or californium-252, to initiate the reactions. We also need to make the reaction self-sustaining, which may be possible using neutron reflectors to bounce neutrons back into the lattice—carbon and beryllium are examples of common neutron reflectors. Another option is to couple a fusion neutron source with fission fuel to take advantage of the best of both worlds. Regardless, further development of the process is required to increase the efficiency of these lattice-confined nuclear reactions.
We’ve also triggered nuclear reactions by pumping deuterium gas through a thin wall of a palladium-silver alloy tubing, and by electrolytically loading palladium with deuterium. In the latter experiment, we’ve detected fast neutrons. The electrolytic setup is now using the same neutron-spectroscopy detection method we mentioned above to measure the energy of those neutrons. The energy measurements we get will inform us about the kinds of nuclear reaction that produce them.
We’re not alone in these endeavors. Researchers at Lawrence Berkeley National Laboratory, in California, with funding from Google Research, achieved favorable results with a similar electron-screened fusion setup. Researchers at the U.S. Naval Surface Warfare Center, Indian Head Division, in Maryland have likewise gotten promising initial results using an electrochemical approach to LCF. There are also upcoming conferences: the American Nuclear Society’s Nuclear and Emerging Technologies for Space conference in Cleveland in May and the International Conference on Cold Fusion 24, focused on solid-state energy, in Mountain View, Calif., in July.
Any practical application of LCF will require efficient, self-sustaining reactions. Our work represents just the first step toward realizing that goal. If the reaction rates can be significantly boosted, LCF may open an entirely new door for generating clean nuclear energy, both for space missions and for the many people who could use it here on Earth.
