====== Chris Young ======
{{wiki:Chris young.jpg}}
B.S., Electrical Engineering; B.A., Economics, The University of Texas at Austin, 2010
M.S., Electrical Engineering, Stanford University, 2012
Admitted to Ph.D. Candidacy: 2013
**Research: CMOS Image Sensor Design for Low-Power Object Detection**
Machine learning (ML) and big data open the door to innovative devices with a degree of intelligence not previously realized. However, because ML-based algorithms have intense computational and memory requirements, their application space is often limited by power consumption. By applying ML techniques at the sensor level, we can dramatically reduce system power consumption and enable “always-on” capabilities. In this work, we consider imaging for object detection. Applications include advanced driver assistance systems (ADAS), wake-up mechanisms, home automation, and augmented reality, to name a few.
Our goal is to create a CMOS vision sensor with power consumption comparable to state-of-the-art always-on image sensors [[1]], while maximizing system detection accuracy and maintaining the ability to operate in real-world conditions. Many previous works have demonstrated vision sensors [[2]], [[3]]; however, they are somewhat “brute-force” mappings of a detector’s feature extraction step onto analog hardware, and they often give little consideration to the energy of potential backend classifier algorithms or to functionality in real-world conditions. To address this, we build upon a system previously proposed in our group [[4]] that uses histogram-of-oriented-gradients (HOG) features but modifies the gradient calculation. The gradient is normally defined as a difference of pixel intensities; instead, we propose to calculate gradients as ratios of pixel intensities. We have shown empirically that these ratios can be quantized to as few as 2 bits with no loss in detection accuracy compared to an 8-bit imager, ultimately resulting in a 20x reduction in the data carried by the resulting HOG feature vectors.
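As a rough illustration of the ratio-gradient idea, the behavioral sketch below computes gradients as ratios of neighboring pixel intensities and quantizes them to 2 bits. The threshold values, NumPy implementation, and function names are illustrative assumptions, not the exact parameters or method from [[4]].
<code python>
import numpy as np

def ratio_gradients(img, eps=1e-6):
    """Compute horizontal/vertical gradients as ratios of neighboring
    pixel intensities instead of differences (behavioral sketch only)."""
    img = img.astype(np.float64) + eps           # avoid division by zero
    gx = img[:, 1:] / img[:, :-1]                # horizontal intensity ratio
    gy = img[1:, :] / img[:-1, :]                # vertical intensity ratio
    return gx, gy

def quantize_2bit(ratio, thresholds=(0.5, 1.0, 2.0)):
    """Map each ratio to one of four levels (2 bits). These thresholds
    are placeholders, not the values used in the actual design."""
    return np.digitize(ratio, thresholds).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(240, 320))   # QVGA-sized test frame
    gx, gy = ratio_gradients(frame)
    qx, qy = quantize_2bit(gx), quantize_2bit(gy)
    print(qx.dtype, qx.min(), qx.max())             # uint8 codes 0..3
</code>
The 2-bit codes would then be histogrammed per cell to form the HOG-style feature vectors described above.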
While many imagers with embedded computation have creative pixel designs and readout schemes, they often suffer from weaknesses in area, dynamic range, or practicality under non-contrived test conditions. Extensive simulations with our proposed detection system show that, while we can drastically reduce the number of bits produced from a scene, we must still image that scene with a dynamic range similar to that of standard imagers. Consequently, we have chosen a standard 4-T pixel array with a pinned photodiode, since this architecture is continually optimized in specialized processes to maximize imaging metrics.
Our ratio-gradient calculation circuitry is designed at the column level and consists of two main blocks: a 3-row analog memory that performs the pixel correlated double sampling (CDS) and stores the resulting values in a circular row buffer, and 2-bit column-parallel ratio-to-digital converters (RDCs) similar to SAR ADCs. Thanks to their compact size and the detection algorithm’s tolerance of non-linear but high-density MOS capacitors, we can use these RDCs in place of the more typical, yet inefficient, single-slope ADC. Because multi-scale detection requires image pyramids, the analog memory can also bin pixels, effectively creating lower-resolution gradient maps over consecutive frames. A complete system would likely integrate the detector on the same chip as the imager, but for this work we are currently designing a QVGA (320x240) proof-of-concept imager that generates these ratio-gradients in a 0.13 µm CIS process with 5 µm pixels, with tape-out planned for Q4 of 2017. Our target power consumption is <200 µW at 30 fps.
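The sketch below is a behavioral model of the binning mode: averaging non-overlapping pixel blocks to produce lower-resolution frames for an image pyramid. The block size, number of levels, and averaging scheme are assumptions for illustration and do not describe the analog memory circuit itself.
<code python>
import numpy as np

def bin_pixels(frame, factor=2):
    """Average non-overlapping factor x factor pixel blocks, mimicking
    the analog memory's binning mode (behavioral model, not the circuit)."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def image_pyramid(frame, levels=3, factor=2):
    """Build a simple pyramid by binning successively, as would happen
    over consecutive frames in the imager."""
    pyramid = [frame.astype(np.float64)]
    for _ in range(levels - 1):
        pyramid.append(bin_pixels(pyramid[-1], factor))
    return pyramid

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(240, 320)).astype(np.float64)
    for level, img in enumerate(image_pyramid(frame)):
        print(f"level {level}: {img.shape}")     # (240,320), (120,160), (60,80)
</code>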
{{wiki:Young comparison plot 072017.png?500|Young comparison plot 072017.png}} {{wiki:Young sys blk diag 072017.png?500|Young sys blk diag 072017.png}}

[[1]] J. Choi, J. Shin, D. Kang and D. S. Park, "6.3 A 45.5μW 15fps always-on CMOS image sensor for mobile and wearable devices," 2015 IEEE International Solid-State Circuits Conference - (ISSCC) Digest of Technical Papers, San Francisco, CA, 2015.
[[2]] J. Choi, J. Cho, S. Park, and E. Yoon, “A 3.4 μW Object-Adaptive CMOS Image Sensor with Embedded Feature Extraction Algorithm for Motion-Triggered Object-of-Interest Imaging,” IEEE J. Solid-State Circuits, vol. 49, no. 1, pp. 289–300, Jan. 2014.
[[3]] A. Berkovich, M. Lecca, L. Gasparini, P. A. Abshire and M. Gottardi, "A 30 μW 30 fps 110 × 110 Pixels Vision Sensor Embedding Local Binary Patterns," IEEE J. Solid-State Circuits, vol. 50, no. 9, Sept. 2015.
[[4]] A. Omid-Zohoor, C. Young, D. Ta, and B. Murmann, "Towards Always-On Mobile Object Detection: Energy vs. Performance Tradeoffs for Embedded HOG Feature Extraction," IEEE Transactions on Circuits and Systems for Video Technology.

**Email: ** [[mailto:cjyoung@stanford.edu|cjyoung AT stanford DOT edu]]