Optical neural network at 50zJ per op? Nope, but it’s still a good idea

Ars Technica, Scientific Method (2019-05-20)

(Image credit: BeeBright/Getty Images)

Artificial intelligence (AI) has experienced a revival of pretty large proportions in the last decade. We've gone from AI being mostly useless to letting it ruin our lives in obscure and opaque ways. We’ve even given AI the task of crashing our cars for us.

AI experts will tell us that we just need bigger neural networks and the cars will probably stop crashing. You can get there by adding more graphics cards to an AI, but the power consumption becomes excessive. The ideal solution would be a neural network that can process and shovel data around at near-zero energy cost, which may be where we are headed with optical neural networks.

To give you an idea of the scale of energy we're talking about here, a good GPU uses 20 picojoules (1 pJ is 10⁻¹² J) for each multiply-and-accumulate (MAC) operation. A purpose-built integrated circuit can reduce that to about 1 pJ. But if a team of researchers is correct, an optical neural network might reduce that number to an incredible 50 zeptojoules (1 zJ is 10⁻²¹ J).
