# The Data Conversion Bottleneck in Analog Computing Accelerators

J. T. Meech, V. Tsoutsouras, and P. Stanley-Marbell, *First Workshop on Machine Learning with New Compute Paradigms at NeurIPS 2023*.

## Abstract

Most modern computing tasks have digital electronic input and output data. Due to these constraints imposed by real-world use cases of computer systems, any analog computing accelerator, whether analog electronic or optical, must perform a digital-to-analog conversion on its input data and a subsequent analog-to-digital conversion on its output data. The energy and latency costs incurred by data conversion place performance limits on analog computing accelerators. To avoid this overhead, analog hardware would have to replace the full functionality of traditional digital electronic computer hardware. This is not currently possible for optical computing accelerators due to limitations in gain, input-output isolation, and information storage in optical hardware. This article presents a case study that profiles 27 benchmarks for an analog optical Fourier transform and convolution accelerator which we designed and built. The case study shows that an ideal optical Fourier transform and convolution accelerator can produce an average speedup of 9.4 times and a median speedup of 1.9 times for the set of benchmarks. The optical Fourier transform and convolution accelerator only produces significant speedups for pure Fourier transform (45.3 times) and convolution (159.4 times) applications.
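The gap between the average and median speedups follows the usual Amdahl's law pattern: when data conversion and other digital work are not accelerated, the overall speedup is bounded by the fraction of runtime spent in the accelerated kernel. A minimal sketch of this bound (the fractions below are illustrative, not the paper's measured values):

```python
def amdahl_speedup(accelerated_fraction, accelerator_speedup):
    """Overall speedup when only a fraction of runtime is accelerated.

    accelerated_fraction: share of original runtime offloaded (0..1),
    e.g. the FFT/convolution portion of a benchmark.
    accelerator_speedup: speedup of that portion alone.
    """
    serial_fraction = 1.0 - accelerated_fraction
    return 1.0 / (serial_fraction + accelerated_fraction / accelerator_speedup)

# Even an infinitely fast optical FFT cannot beat 1 / (1 - f):
# if half the runtime is data conversion and other digital work,
# the overall speedup saturates at 2x.
print(amdahl_speedup(0.5, float("inf")))  # 2.0
```

This is why, in the case study, only benchmarks dominated by Fourier transforms or convolutions see large end-to-end gains.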

## Cite as:

James T. Meech, Vasileios Tsoutsouras, and Phillip Stanley-Marbell, “The Data Conversion Bottleneck in Analog Computing Accelerators” in the First Workshop on Machine Learning with New Compute Paradigms at NeurIPS 2023 (MLNPCP 2023)

## BibTeX:

```bibtex
@misc{meech2023data,
  title={The Data Conversion Bottleneck in Analog Computing Accelerators},
  author={James T. Meech and Vasileios Tsoutsouras and Phillip Stanley-Marbell},
  year={2023},
  eprint={2308.01719},
  archivePrefix={arXiv},
  primaryClass={cs.AR}
}
```