https://techxplore.com/news/2023-08-ibm ... erned.html
by Peter Grad , Tech Xplore
Deep neural networks are driving much of the exciting progress stemming from generative AI. But their architecture relies on a configuration that acts as a built-in speed bump, preventing them from reaching maximal efficiency.
Because they are built with separate units for memory and processing, neural networks place heavy demands on system resources to shuttle data between the two components, resulting in slower speeds and reduced efficiency.
IBM Research came up with a better idea by turning to the perfect model for a more efficient digital brain: the human brain.
In a paper, "A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference," published in Nature Electronics on Aug. 10, IBM researchers described a new approach for a state-of-the-art mixed-signal AI chip that promises to improve efficiency and reduce battery drain in AI applications.
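The core idea behind in-memory compute is that the matrix-vector multiplications dominating neural network inference can be carried out directly where the weights are stored, rather than moving them back and forth to a separate processor. The following is a minimal illustrative sketch of that idea, not IBM's actual chip: weights are treated as analog device conductances, and the `noise_std` parameter is an assumed value standing in for programming and read noise in real phase-change memory cells.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_mvm(weights, x, noise_std=0.02):
    """Simulate one analog matrix-vector multiply on a memory crossbar.

    Weights are "stored" as device conductances; applying the input
    vector as voltages yields currents that sum along each output
    line, i.e. a matrix-vector product computed in place, without
    moving the weights to a separate processor. Device imperfections
    are modeled here as simple multiplicative Gaussian jitter
    (an illustrative assumption, not a measured PCM noise model).
    """
    noisy_weights = weights * (1.0 + rng.normal(0.0, noise_std, weights.shape))
    return noisy_weights @ x

# A small example layer: 4 outputs, 8 inputs
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)

exact = W @ x                # digital reference result
approx = analog_mvm(W, x)    # in-memory (noisy analog) result

rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.3f}")
```

The trade-off this sketch makes visible is the one mixed-signal designs must manage: the analog result is cheap to compute but slightly noisy, so inference accuracy has to tolerate small deviations from the exact digital answer.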