
Events in January 2019

Lecture by Dr. Ratnesh Kumar “Vehicle Re-identification for Smart Cities: A New Baseline Using Triplet Embedding”

Date: January 31st, 2019

DESCRIPTION:

“Vehicle Re-identification for Smart Cities: A New Baseline Using Triplet Embedding”

Dr. Ratnesh Kumar, Deep Learning Architect, Nvidia Corporation, San Jose, CA

Event Organized By:

Circuits and Systems Society (CASS) of the IEEE Santa Clara Valley Section

Registration Link:

Click here to register.

PROGRAM:

6:00 – 6:30 PM Networking & Refreshments
6:30 – 7:45 PM Talk
7:45 – 8:00 PM Q&A/Adjourn

Watch the lecture live on Zoom from home or anywhere in the world! Register now and you will be sent the connection details one day before the event.

Abstract:

With the proliferation of surveillance cameras enabling smart and safer cities, there is an ever-increasing need to re-identify vehicles across cameras. Typical challenges in smart-city scenarios include variations in viewpoint, illumination, and self-occlusion. In this talk we will discuss an exhaustive evaluation of deep embedding losses applied to vehicle re-identification, and demonstrate that following best practices for learning embeddings outperforms most previous approaches to the problem.
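
To give a concrete sense of the triplet embeddings in the title (a minimal sketch in plain Python/NumPy, not code from the talk; the margin value here is an arbitrary illustrative choice), a triplet loss pulls two views of the same vehicle together in embedding space while pushing a different vehicle at least a margin farther away:

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.3):
        # anchor, positive: embeddings of the same vehicle seen by two cameras;
        # negative: embedding of a different vehicle. All are 1-D float arrays.
        d_pos = np.sum((anchor - positive) ** 2)   # squared distance to the match
        d_neg = np.sum((anchor - negative) ** 2)   # squared distance to the impostor
        # Hinge: the loss is zero once the impostor is at least `margin` farther away.
        return max(d_pos - d_neg + margin, 0.0)

In training, such a loss is minimized over many mined triplets; the margin, distance metric, and triplet-mining strategy are the kinds of design choices an evaluation like this compares.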

Bio:

Ratnesh Kumar has been a Deep Learning Architect at Nvidia in San Jose, CA since January 2017. He obtained his PhD from the STARS team at Inria, France, in December 2014; his doctoral research focused on long-term video segmentation using optical flow and multiple-object tracking. He subsequently worked as a postdoc at Mitsubishi Electric Research Labs (MERL) in Cambridge, MA, on detecting actions in streaming videos. He holds a Bachelor of Engineering from Manipal University, India, and a Master of Science from the University of Florida, Gainesville, USA. At Nvidia, his focus is on leveraging deep learning on GPU-accelerated platforms to solve problems in video analytics, from object detection to re-identification and action detection, with low latency and high data throughput. He has co-authored nine scientific publications in conferences and journals and has several patents pending. He was a plenary speaker at the IEEE International Conference on Image Processing Applications and Systems (IPAS) 2018 and served as an organizing member of the AI-CITIES 2018 challenge for smart cities.

Venue:

Cypress Semiconductor Corporation, Main Auditorium in Building 6, 198 Champion Ct, San Jose, CA 95134

Convenient VTA light rail access from Mountain View and downtown San Jose.

Live Broadcast:

Lecture will be broadcast live on Zoom. Registrants will be sent the conference details one day before the event.

Admission Fee:

Non-IEEE: $5

Students (non-IEEE): $3

IEEE Members (not members of CASS or SSCS): $3

IEEE CASS and SSCS Members: Free

Open to all to attend.

Online registration is recommended to guarantee seating.


“Mixed-Signal Processing For Machine Learning”, Dr. Daniel Bankman, Stanford University

Date: January 31st, 2019

Time: 6:00 – 8:00 PM

Location: Texas Instruments Building E Conference Center, 2900 Semiconductor Drive, Santa Clara, CA 95051

Directions: TI-BldgE-Auditorium.pdf

Registration Link (mandatory): Link

Registration Fee: FREE, Donation Requested

IEEE SSCS/CAS/SPS/CS members: FREE
IEEE members – $2 donation
Non-members – $5

Abstract:

Recent advancements in machine learning algorithms, hardware, and datasets have led to the successful deployment of deep neural networks (DNNs) in various cloud-based services. Today, new applications are emerging where sensor bandwidth is higher than network bandwidth, and where network latency is not tolerable. By pushing DNN processing closer to the sensor, we can avoid throwing data away and improve the user experience. In the long term, it is foreseeable that such DNN processors will run on harvested energy, eliminating the cost and overhead of wired power connections and battery replacement. A significant challenge arises from the fact that DNNs are both memory and compute intensive, requiring millions of parameters and billions of arithmetic operations to perform a single inference. In this talk, I will present circuit and architecture techniques that leverage the noise tolerance and parallel structure of DNNs to bring inference systems closer to the energy-efficiency limits of CMOS technology.

In the low SNR regime where DNNs operate, thermally-limited analog signal processing circuits are more energy-efficient than digital. However, the massive scale of DNNs favors circuits compatible with dense digital memory. Mixed-signal processing allows us to integrate analog efficiency with digital scalability, but close attention must be paid to energy consumed at the analog-digital interface and in memory access. Binarized neural networks minimize this overhead, and hence operate closer to the analog energy limit. I will present a mixed-signal binary convolutional neural network processor implemented in 28 nm CMOS, featuring a weight-stationary, parallel-processing architecture that amortizes memory access across many computations, and a switched-capacitor neuron array that consumes an order of magnitude lower energy than synthesized digital arithmetic at the same application-level accuracy. I will provide an apples-to-apples comparison of the mixed-signal, hand-designed digital, and synthesized digital implementations of this architecture. I will conclude with future research directions centered around the idea of training neural networks where the transfer function of a neuron models the behavior of an energy-efficient, physically realizable circuit.
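
As a rough illustration of why binarization minimizes this overhead (a sketch under our own assumptions, not the processor described above): with weights and activations constrained to ±1, each multiply-accumulate in a dot product collapses to an XNOR plus a population count.

    import numpy as np

    def binarize(x):
        # Quantize real values to {-1, +1} with the sign function.
        return np.where(x >= 0, 1, -1).astype(np.int8)

    def binary_dot(w, a):
        # For {-1, +1} vectors, dot(w, a) = (#matches) - (#mismatches)
        # = 2 * popcount(XNOR(w, a)) - n, so no multiplies are needed.
        matches = int(np.sum(w == a))
        return 2 * matches - len(w)

    rng = np.random.default_rng(0)
    w = binarize(rng.standard_normal(1024))
    a = binarize(rng.standard_normal(1024))
    assert binary_dot(w, a) == int(w.astype(np.int32) @ a.astype(np.int32))

A switched-capacitor neuron array can accumulate an analogous sum in the charge domain, which is the connection to the mixed-signal design discussed in the talk.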

Bio:

Daniel Bankman is a PhD candidate in the Department of Electrical Engineering at Stanford University, advised by Prof. Boris Murmann. His research focuses on mixed-signal processing circuits, hardware architectures, and neural architectures capable of bringing machine learning closer to the energy limits of scaled semiconductor technology. During his PhD, he demonstrated that switched-capacitor circuits can significantly lower the energy consumption of binarized neural networks while preserving the same application-level accuracy as digital static CMOS arithmetic. Daniel received the S.B. degree in electrical engineering from MIT in 2012 and the M.S. degree from Stanford in 2015. He has held internship positions at Analog Devices Lyric Labs and Intel AI Research, and served as the instructor for EE315 Analog-Digital Interface Circuits at Stanford in 2018.

