This book gives a detailed overview of Guessing Random Additive Noise Decoding (GRAND), a universal Maximum Likelihood (ML) decoding technique introduced for short-length and high-rate linear block codes. Interest in short channel codes and the corresponding ML decoding algorithms has recently been reignited in both industry and academia due to the emergence of applications with strict reliability and ultra-low latency requirements. These applications include Machine-to-Machine (M2M) communication, Augmented and Virtual Reality (AR/VR), Intelligent Transportation Systems (ITS), the Internet of Things (IoT), and Ultra-Reliable and Low Latency Communications (URLLC), an important use case of the 5G-NR standard.
GRAND features both soft-input and hard-input variants. Moreover, there are traditional GRAND variants that can be applied to any communication channel, as well as specialized GRAND variants developed for specific channels. This book presents a detailed overview of these GRAND variants and their hardware architectures.
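To illustrate the guess-and-check principle underlying GRAND, the following minimal Python sketch shows a hard-input decoder that tests error patterns in increasing Hamming-weight order (the most likely order on a binary symmetric channel) and accepts the first candidate that satisfies all parity checks. This sketch is for illustration only and is not taken from the book; the function name, the weight-ordered guessing schedule, and the abandonment threshold `max_weight` are assumptions.

```python
import itertools
import numpy as np

def grand_hard_decode(y, H, max_weight=3):
    """Illustrative hard-input GRAND sketch (not the book's reference design):
    guess error patterns e in increasing Hamming-weight order and return the
    first candidate y XOR e whose syndrome with respect to H is zero."""
    n = len(y)
    for w in range(max_weight + 1):                    # weight-0 guess first, i.e. y itself
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1                     # guessed noise pattern
            c = (np.asarray(y) + e) % 2                # remove the guessed noise from y
            if not np.any(H.dot(c) % 2):               # zero syndrome -> c is a codeword
                return c
    return None                                        # abandon after exhausting the guesses
```

As a usage example, with the parity-check matrix of a (7,4) Hamming code and a received word containing a single bit flip, the decoder returns the transmitted codeword after at most a handful of weight-0 and weight-1 guesses. Practical hardware realizations discussed in the book generate candidate error patterns far more efficiently than this brute-force enumeration.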
The book is structured into four parts. Part 1 introduces linear block codes and the GRAND algorithm. Part 2 discusses the hardware architectures for traditional GRAND variants that can be applied to any underlying communication channel. Part 3 describes the hardware architectures for specialized GRAND variants developed for specific communication channels. Lastly, Part 4 provides an overview of recently proposed GRAND variants and their unique applications. This book is ideal for researchers and engineers looking to implement high-throughput and energy-efficient hardware for GRAND, as well as seasoned academics and graduate students interested in VLSI hardware architectures. Additionally, it can serve as reading material in graduate courses covering modern error-correcting codes and Maximum Likelihood decoding for short codes.