
Archive for the ‘Invited Talks’ Category

IEEE SP chapter talk: 14 June 2019 – Physics-Based Vision and Learning

Thursday, June 13th, 2019

 

The Department of Electrical Engineering (EE), IISc, and the IEEE SPS Bangalore Chapter invite you to the following seminar:

Title: “Physics-Based Vision and Learning”
(Joint Work with Yunhao Ba and Guangyuan Zhao)

Speaker: Dr. Achuta Kadambi


Host faculty: Dr. Chandra Sekhar Seelamantula

Time & Venue: 4:00 PM, Friday, 14th June 2019, at the MMCR (first floor), Electrical Engineering Department, IISc.

Abstract: Today, deep learning is the de facto approach to solving many computer vision problems. However, in adopting deep learning, one may overlook a subtlety: the physics of how light interacts with matter. By exploiting these previously overlooked subtleties, we will describe how we can rethink the longstanding problem of 3D reconstruction. Using the lessons learned from this prior work, we will then discuss the future symbiosis between physics and machine learning, and how this fusion can transform many application areas in imaging.

Biography: Achuta Kadambi is an Assistant Professor of Electrical and Computer Engineering at UCLA, where he directs the Visual Machines Group. The group blends the physics of light with artificial intelligence to give the gift of sight to robots. Achuta received his BS from UC Berkeley and his PhD from MIT, completing an interdepartmental doctorate between the MIT Media Lab and MIT EECS. Please see his group web page for research specifics: http://visual.ee.ucla.edu

Talk: August 27, 2015: Effective exploitation of long-term correlations for audio coding and networking

Wednesday, August 26th, 2015

The Department of Electrical Engineering
and
IEEE Signal Processing Society, Bangalore Chapter

cordially invite you to a lecture on

Title: Effective exploitation of long-term correlations for audio coding
and networking

Speaker: Dr. Tejaswi Nanjundaswamy, Postdoctoral Fellow, UC Santa Barbara

Venue: Multimedia Classroom (MMCR), EE Department

Date and Time: August 27, 2015; 4:30 PM to 5:30 PM.

Abstract: A wide range of multimedia applications such as internet radio and television, online media streaming, gaming, and high-fidelity teleconferencing rely heavily on efficient transmission of audio signals over networks. The two main challenges for such transmission are delay-constrained compression and dealing with loss of content due to noisy channels. Constraints on delay mean that the algorithms can only operate on small block sizes (or frame lengths). Thus the key to addressing these challenges is efficiently exploiting inter-frame redundancies due to long-term correlations. Well-known audio coders are effective at eliminating redundancies within a block of data, but the only known inter-frame redundancy removal technique, a long-term prediction (LTP) filter, is too simplistic: it is suboptimal for the commonly occurring polyphonic audio signals, which contain a mixture of several periodic components, and also for speech and vocal content, which is quasi-periodic with small variations in pitch period. Moreover, the typically employed parameter estimation technique is mismatched to the ultimate perceptual distortion criterion of audio coding. Similarly, in loss concealment, none of the existing techniques are designed to overcome the main challenge posed by the polyphonic nature of most music signals. This talk covers contributions towards addressing all of these shortcomings: novel, more sophisticated filter structures suitable for a wide variety of audio signals, parameter estimation that takes into account the perceptual distortion criterion of audio compression, and loss concealment that utilizes all the available information.

Biography of the speaker: Tejaswi Nanjundaswamy received his B.E. degree in electronics and communications engineering from the National Institute of Technology Karnataka, India, in 2004, and the M.S. and Ph.D. degrees in electrical and computer engineering from UCSB, in 2009 and 2013, respectively. He is currently a post-doctoral researcher at the Signal Compression Lab at UCSB, where he focuses on audio/video compression, processing, and related technologies. He worked at Ittiam Systems, Bangalore, India, from 2004 to 2008 as a Senior Engineer in the Audio group. He won the Student Technical Paper Award at the Audio Engineering Society’s 129th Convention. He was also a Best Paper Award finalist at the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) 2011 and a co-author of a Top 10% Award-winning paper at the IEEE International Conference on Image Processing (ICIP) 2015.

Talk: May 29, 2015: Bayesian Models for Computational Rhythm Analysis in Indian Art Music

Saturday, May 16th, 2015

Department of Electrical Communication Engineering and
Department of Electrical Engineering,
Indian Institute of Science, Bangalore

invite you to a talk by
Ajay Srinivasamurthy
Music Technology Group, Universitat Pompeu Fabra
Barcelona, Spain
on

Bayesian Models for Computational Rhythm Analysis in Indian Art Music

Time & Date: 11 AM, May 29, 2015.
Venue: Golden Jubilee Hall, Department of Electrical Communication Engineering (ECE)

Abstract
Rhythm in Indian art music (Carnatic and Hindustani music) is organized in the framework of the tāla (or tāl). A tāla consists of time cycles that provide a broad structure for the repetition of musical phrases, motifs, and improvisations. Detecting events such as the beats and the sama (downbeats) within a tāla cycle and tracking them through a piece of audio, referred to as meter inference, is an important computational rhythm analysis task in Indian art music. Useful in itself, it additionally provides a basis for further computational analyses such as structural analysis and the extraction of rhythmic patterns. The talk mainly focuses on my recent work with Bayesian models for meter inference in Carnatic music. Efficient approximate inference in these models is presented to overcome the limitations of exact inference. Further, I will discuss extensions of these models that generalize to other music styles and different metrical structures.
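
For readers who want a feel for what meter inference involves computationally, here is a toy Python/NumPy sketch: a Viterbi search over a discretized position-in-cycle state driven by an onset-strength envelope. It assumes a known, constant tempo and a fixed number of beats per cycle, and is only loosely inspired by bar-pointer-style models; it is not the Bayesian model presented in the talk, which infers tempo and handles far richer rhythmic structure.

    import numpy as np

    def viterbi_cycle_tracker(onset_env, frames_per_beat, beats_per_cycle=8):
        """Toy tracker of position within a rhythmic cycle (illustrative only).

        onset_env       : 1-D onset-strength value per analysis frame (assumed given).
        frames_per_beat : assumed known, constant tempo in frames per beat.
        beats_per_cycle : number of beats in the cycle (e.g. 8 for adi tala).
        Returns frame indices where the estimated cycle position wraps, i.e.
        candidate sama locations (the downbeat phase is ambiguous here, since
        onsets alone do not say which beat is the sama).
        """
        n_pos = int(round(frames_per_beat * beats_per_cycle))  # one state per cycle frame
        T = len(onset_env)

        # Observation score: beat-aligned positions are rewarded by onset strength.
        beat_pos = (np.arange(n_pos) % int(round(frames_per_beat))) == 0
        obs = np.where(beat_pos[None, :], onset_env[:, None], 0.1 * onset_env.mean())
        log_obs = np.log(obs + 1e-6)

        # Transition: position advances by ~1 bin per frame, with small jitter.
        steps, log_trans = np.array([0, 1, 2]), np.log([0.1, 0.8, 0.1])

        delta = np.full((T, n_pos), -np.inf)
        psi = np.zeros((T, n_pos), dtype=int)
        delta[0] = log_obs[0]
        for t in range(1, T):
            for step, lt in zip(steps, log_trans):
                cand = np.roll(delta[t - 1], step) + lt   # previous position = current - step
                better = cand > delta[t]
                delta[t][better] = cand[better]
                psi[t][better] = step
            delta[t] += log_obs[t]

        # Back-trace the best path; report frames whose position wraps through 0.
        path = np.zeros(T, dtype=int)
        path[-1] = int(np.argmax(delta[-1]))
        for t in range(T - 1, 0, -1):
            path[t - 1] = (path[t] - psi[t, path[t]]) % n_pos
        return [t for t in range(1, T) if path[t] < path[t - 1]]

    # Toy usage: an impulse every 10 frames, 8 beats per cycle -> wraps every ~80 frames.
    env = np.tile(np.r_[1.0, np.zeros(9)], 80) + 0.05
    print(viterbi_cycle_tracker(env, frames_per_beat=10)[:3])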

Biography of the Speaker
Ajay is a PhD student at the Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain. He is part of the CompMusic project led by Prof. Xavier Serra, where he works on rhythm-related research problems in Indian art music and Beijing opera. He is currently developing signal processing and machine learning approaches for the analysis and characterization of rhythmic structures and patterns from audio recordings. Prior to joining UPF, he was a research assistant at the Georgia Tech Center for Music Technology, Atlanta, USA, and worked at the mobile music startup Smule Inc. He holds a master's degree in Signal Processing from the Indian Institute of Science, Bangalore, India, and a B.Tech. in Electronics and Communication Engineering from the National Institute of Technology Karnataka, Surathkal, India.

Co-sponsors of the event:
IEEE Bangalore section
IEEE SP Society Bangalore Chapter

Special Lecture: 10 March 2015: Our recent work related to the design of filter banks with uncertainty considerations

Monday, March 9th, 2015

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY, BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2015 – SPECIAL LECTURE – 2

———————————————————————————————-

Dear All:

Speaker: Prof. V. M. Gadre, IIT Bombay, Mumbai, India
Title: Our recent work related to the design of filter banks with uncertainty considerations

Date/time: Tuesday, 10th March 2015, 3:00 pm (coffee/tea at 2:45 pm)
Venue: Golden Jubilee Hall (GJH), ECE Department

Abstract: Two-band filter banks have typically been designed with passband and stopband considerations, and there is scant work on designing them with an uncertainty criterion. In this talk, we present some of the work we have done on designing two-band filter banks with the uncertainty principle as a criterion.
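
To make the notion of an uncertainty criterion concrete, the sketch below (Python/NumPy, illustrative assumptions only) evaluates a discrete time-bandwidth product for candidate lowpass prototypes of a two-band bank; the exact definition used in the speaker's work may differ, and this is not their design procedure.

    import numpy as np

    def uncertainty_product(h, n_fft=4096):
        """One common discrete analogue of the time-bandwidth product of an FIR filter.

        Time spread: variance of |h[n]|^2 about its centroid.
        Frequency spread: variance of |H(e^jw)|^2 about its centroid, w in [-pi, pi).
        Definitions vary across papers; this is only meant to illustrate the kind
        of criterion the talk refers to.
        """
        h = np.asarray(h, dtype=float)
        n = np.arange(len(h))
        p_t = h**2 / np.sum(h**2)                      # time "density"
        t0 = np.sum(n * p_t)
        var_t = np.sum((n - t0)**2 * p_t)

        H = np.fft.fftshift(np.fft.fft(h, n_fft))
        w = np.linspace(-np.pi, np.pi, n_fft, endpoint=False)
        p_w = np.abs(H)**2 / np.sum(np.abs(H)**2)      # frequency "density"
        w0 = np.sum(w * p_w)
        var_w = np.sum((w - w0)**2 * p_w)
        return var_t * var_w

    # Compare two classical lowpass prototypes for a two-band (two-channel) bank.
    haar = np.array([1.0, 1.0]) / np.sqrt(2)                  # Haar lowpass
    db2 = np.array([0.48296, 0.83652, 0.22414, -0.12941])     # Daubechies-2 lowpass
    print(uncertainty_product(haar), uncertainty_product(db2))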

Bio: Prof. Vikram Gadre received his Ph.D. in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 1994 and has been with IIT Bombay since then. He is currently the head of the Centre for Distance Engineering Education. His research areas are communication and signal processing. He has taught 16 courses, including signals and systems and control systems, and received the ‘Excellence in Teaching’ award from IIT Bombay in 1999, 2004, and 2009. He has guided sponsored research projects for organizations such as Tata Infotech and Texas Instruments, and has supervised 9 Ph.D., 65 M.Tech. and Dual Degree, and 45 B.Tech. projects. He has authored two books and has published 60 conference papers, 23 journal papers, and chapters in edited monographs.

———————————————————————————————-

Talks: 18 December 2014

Sunday, February 15th, 2015

IEEE Signal Processing Society, Bangalore Chapter, IEEE Bangalore Section,
and the Supercomputer Education and Research Centre, Indian Institute of Science

Invite you to the following talks:

1) Title: Beyond Mindless Labeling: *Really* Leveraging Humans to Build
Intelligent Machines

by Dr. Devi Parikh, Assistant Prof., Virginia Tech.
https://filebox.ece.vt.edu/~parikh/

Abstract:

The human ability to understand natural images far exceeds that of machines today. One reason for this gap is an artificially restrictive learning setup: humans today teach machines via Morse code (e.g., providing binary labels on images, such as “this is a horse” or “this is not”), and machines are typically silent. These systems have the potential to be significantly more accurate if they tap into the vast common-sense knowledge humans have about the visual world. I will talk about our work on enriching the communication between humans and machines by exploiting mid-level visual properties or attributes. I will also talk about the more difficult problem of directly learning common-sense knowledge simply by observing the structure of the visual world around us. Unfortunately, this requires automatic and accurate detection of objects, their attributes, poses, etc. in images, leading to a chicken-and-egg problem. I will argue that the solution is to give up on photorealism. Specifically, I will talk about our work on exploiting human-generated abstract visual scenes to learn common-sense knowledge and study high-level vision problems.

———————————————————————-

2) Title: Hedging Against Uncertainty in Machine Perception via Multiple
Diverse Predictions

by Dr. Dhruv Batra, Assistant Prof., Virginia Tech.
https://filebox.ece.vt.edu/~dbatra/

Abstract:

What does a young child or a high-school student with no knowledge of
probability do when faced with a problem whose answer they are uncertain
of? They make guesses.

Modern machine perception algorithms (for object detection, pose
estimation, or semantic scene understanding), despite dealing with
tremendous amounts of ambiguity, do not.

In this talk, I will describe a line of work in my lab where we have been developing machine perception models that output not just a single best solution, but rather a /diverse/ set of plausible guesses. I will discuss inference in graphical models, connections to submodular maximization over a “doubly-exponential” space, and how and why this achieves state-of-the-art performance on the challenging PASCAL VOC 2012 segmentation dataset.
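
The flavour of multiple diverse predictions can be conveyed with a very small Python/NumPy sketch: for a model with only unary scores, successive solutions are obtained by penalizing agreement with the solutions already returned. The penalty weight and structure here are illustrative; the talk's work applies this idea to full graphical models with MAP inference, which is not shown.

    import numpy as np

    def diverse_m_best(unaries, m=3, lam=0.5):
        """Toy diverse-M-best sketch for a unary-only model (illustrative only).

        unaries : (n_nodes, n_labels) array of scores, higher is better.
        Each successive solution maximizes the unaries minus a penalty `lam` for
        agreeing with any previously returned solution (a Hamming-style diversity
        term). With unaries only this is solved exactly by a per-node argmax;
        with pairwise terms one would instead run MAP inference on the augmented
        graphical model.
        """
        scores = unaries.astype(float).copy()
        solutions = []
        for _ in range(m):
            labeling = scores.argmax(axis=1)
            solutions.append(labeling)
            # Penalize the labels just used so later guesses are pushed elsewhere.
            scores[np.arange(len(labeling)), labeling] -= lam
        return solutions

    # Example: 4 "pixels", 3 labels, 3 diverse guesses.
    rng = np.random.default_rng(0)
    print(diverse_m_best(rng.random((4, 3)), m=3, lam=0.3))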

—————————————————————-
ALL ARE WELCOME

SPECIAL LECTURE: 17 October 2014: Design of Sparse Antenna Arrays

Wednesday, February 11th, 2015

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY, BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2014 – SPECIAL LECTURE – 5

———————————————————————————————-
Dear All,

Title: Design of Sparse Antenna Arrays

Speaker: Prof. Sanjit Mitra, Univ. of California, Santa Barbara, and Univ.
of Southern California

Date: 17 October 2014 (Friday)

Time: 11:00am (tea/coffee at 10:45am)

Venue: Golden Jubilee Hall, ECE Dept., IISc Bangalore

Abstract: In this talk we present a general method for the factorization of a polynomial with unity coefficients into a product of sparse polynomials with fewer non-zero coefficients, all of unity value. The factorization method is then used to design sparse antenna arrays with uniform and linearly tapered effective aperture functions.
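
A classical special case illustrates the kind of factorization the abstract describes: a uniform (all-unity) aperture polynomial of length 16 factors into four very sparse unity-coefficient polynomials, so four sparse sub-arrays cascaded together produce the full uniform effective aperture. The short Python/NumPy check below verifies this by convolution; it is only an example of the flavour of the result, not the general design method presented in the talk.

    import numpy as np
    from functools import reduce

    # 1 + z + ... + z^15 = (1 + z)(1 + z^2)(1 + z^4)(1 + z^8)
    factors = [
        np.array([1, 1]),                       # 1 + z
        np.array([1, 0, 1]),                    # 1 + z^2
        np.array([1, 0, 0, 0, 1]),              # 1 + z^4
        np.array([1, 0, 0, 0, 0, 0, 0, 0, 1]),  # 1 + z^8
    ]
    effective = reduce(np.convolve, factors)    # cascade = polynomial product
    print(effective)                            # sixteen ones: uniform aperture
    print(sum(np.count_nonzero(f) for f in factors), "active elements instead of", effective.size)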

Bio: Sanjit K. Mitra is a Research Professor in the Department of
Electrical & Computer Engineering, University of California, Santa Barbara
and Professor Emeritus, Ming Hsieh Department of Electrical Engineering,
University of Southern California, Los Angeles.

He has held visiting appointments in Australia, Austria, Brazil, Croatia,
Finland, Germany, India, Japan, Norway, Singapore, Turkey, and the United
Kingdom.

Dr. Mitra has published over 700 papers in the areas of analog and digital
signal processing, and image processing. He has also authored and
co-authored twelve books, and holds five patents. He has presented 29
keynote and/or plenary lectures at conferences held in the United States
and 16 countries abroad. Dr. Mitra has served IEEE in various capacities
including service as the President of the IEEE Circuits & Systems Society
in 1986.

Dr. Mitra is the recipient of the 1973 F.E. Terman Award and the 1985 AT&T
Foundation Award of the American Society of Engineering Education; the
1989 Education Award, and the 2000 Mac Van Valkenburg Society Award of the
IEEE Circuits & Systems Society; the Distinguished Senior U.S. Scientist
Award from the Alexander von Humboldt Foundation of Germany in 1989; the
1996 Technical Achievement Award, the 2001 Society Award and the 2006
Education Award of the IEEE Signal Processing Society; the IEEE Millennium
Medal in 2000; the McGraw-Hill/Jacob Millman Award of the IEEE Education
Society in 2001; the 2002 Technical Achievement Award and the 2009
Athanasios Papoulis Award of the European Association for Signal
Processing; the 2005 SPIE Technology Achievement Award of the
International Society for Optical Engineers; the University Medal of the
Slovak Technical University, Bratislava, Slovakia in 2005; the 2006 IEEE
James H. Mulligan, Jr. Education Medal; and the 2013 IEEE Gustav Robert
Kirchhoff Award. He is the corecipient of the 2000 Blumlein-Browne-Willans
Premium of the Institution of Electrical Engineers (London) and the 2001
IEEE Transactions on Circuits & Systems for Video Technology Best Paper
Award. He has been awarded Honorary Doctorate degrees from the Tampere
University of Technology, Finland, the Technical University of Bucharest,
Romania, and the Technical University of Iasi, Romania.

He is a member of the U.S. National Academy of Engineering, a member of
the Norwegian Academy of Technological Sciences, an Academician of the
Academy of Finland, a foreign member of the Finnish Academy of Sciences
and Arts, a corresponding member of the Croatian Academy of Sciences and
Arts, and the Academy of Engineering, Mexico, and a Foreign Fellow of the
National Academy of Sciences, India and the Indian National Academy of
Engineering. Dr. Mitra is a Fellow of the IEEE, AAAS, and SPIE.

All are invited

SPECIAL LECTURE: May 29th, 2014: Frequency modulated continuous wave radars: A gentle introduction to angle resolution and its challenges

Wednesday, February 11th, 2015

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY, BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2014 – SPECIAL LECTURE – 2

———————————————————————————————-

Title:  Frequency modulated continuous wave radars: A gentle introduction to angle resolution and its challenges
 
Date: Thursday, May 29th, 2014
Time: 4pm
Tea/coffee: 3.45pm
Venue: Golden Jubilee Hall, ECE Department, IISc Bangalore
Speaker: Sandeep Rao, Texas Instruments India
 
Abstract :
77 GHz FMCW radars have been used in automotive and niche industrial applications for a while now. However, as technology improves and the level of integration increases, there is tremendous potential for these radars to be used in a wide variety of other applications. As the use of radar becomes more widespread, its design becomes more cost-sensitive. One of the key design aspects that contributes to the cost of an FMCW radar is the number of antennas; hence the motivation to get the maximum performance out of a small number of antennas. The first part of this talk will be a survey of some relevant aspects of angle estimation. We will highlight the importance of multi-resolution techniques and discuss some related algorithms (MUSIC, ML-based). We will also discuss techniques for improving angle resolution through appropriate array design. In this context we will survey the concepts of active beamforming, passive beamforming, and synthetic aperture radar. The second part of the talk will focus on angle estimation specifically in the context of low-cost FMCW radar systems. The interdependence between angle resolution, range resolution, and velocity resolution will be discussed. We will discuss the open challenges in achieving acceptable angle resolution with a minimum number of non-ideal antennas. These challenges span multiple areas: antenna design, angle-resolution algorithms, blind and non-blind calibration techniques, detection algorithms, and sensor fusion. Application-specific challenges will also be discussed.
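
As background for the angle-estimation discussion, the sketch below (Python/NumPy, toy parameters) shows the most basic approach for a uniform linear array: an FFT across the antenna snapshot, whose peak maps to an angle of arrival. The MUSIC and ML methods mentioned in the abstract refine this considerably; no TI-specific design choices are implied here.

    import numpy as np

    def fft_angle_spectrum(snapshot, d_over_lambda=0.5, n_fft=256):
        """Coarse angle spectrum for a uniform linear array (illustrative only).

        snapshot : one complex sample per receive antenna for a single
                   range-Doppler bin (assumed already extracted).
        A target at angle theta gives a phase ramp exp(j*2*pi*(d/lambda)*sin(theta)*k)
        across antennas k, so an FFT across the array yields an angle spectrum.
        """
        spec = np.fft.fftshift(np.fft.fft(snapshot, n_fft))
        f = np.fft.fftshift(np.fft.fftfreq(n_fft))         # cycles per antenna
        sin_theta = f / d_over_lambda
        valid = np.abs(sin_theta) <= 1.0
        angles = np.degrees(np.arcsin(sin_theta[valid]))
        return angles, 20 * np.log10(np.abs(spec[valid]) + 1e-12)

    # One target at +20 degrees seen by a 4-element, half-wavelength-spaced array.
    k = np.arange(4)
    snap = np.exp(1j * 2 * np.pi * 0.5 * np.sin(np.radians(20)) * k)
    angles, power_db = fft_angle_spectrum(snap)
    print(angles[np.argmax(power_db)])   # peak lands near +20 degrees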

ABOUT THE SPEAKER:
Sandeep Rao is with the systems group in the wireless division at Texas Instruments and currently looks at system and algorithmic aspects of automotive radars. Earlier he led the GPS positioning algorithm development effort at TI. Prior to TI, he was with Hughes Network Systems, where he was involved in the development of modem algorithms for satellite communications. He has a master's degree from the University of Maryland and a bachelor's degree from the Indian Institute of Technology Madras.

All are invited

SPECIAL LECTURE: May 26th, 2014: Frequency modulated continuous wave radars: an introduction and mm-wave chip design challenges

Wednesday, February 11th, 2015

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY, BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2014 – SPECIAL LECTURE – 1

———————————————————————————————-

Title:  Frequency modulated continuous wave radars: an introduction and mm-wave chip design challenges
Date: Monday, May 26th, 2014
Time: 4pm
Tea/coffee: 3.45pm
Venue: Golden Jubilee Hall, ECE Department, IISc Bangalore
Speaker: Karthik Subburaj, Texas Instruments India
 
Abstract :
Frequency modulated continuous wave (FMCW) radar systems are popular in automotive and industrial applications for measuring the relative distance, velocity, and direction of surrounding objects. In the first part of this talk, the signal processing fundamentals of a few radar methods are introduced, starting from Doppler radar and leading up to modern FMCW radars. In the latter part of the talk, various challenges in implementing them using integrated circuits and in improving performance with respect to range, accuracy, and false object detection are discussed. The challenges include RF and analog non-idealities such as noise and non-linearity, as well as the simultaneous operation of transmit and receive antennas; solving them is crucial to realizing a high-performance radar system. The contents of this talk are a mix of RF system design and basic signal processing.
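
To make the FMCW principle concrete, here is a minimal Python/NumPy simulation with assumed toy parameters (not any TI device): a point target at 30 m produces a beat tone after mixing, and a range FFT recovers the distance from that beat frequency.

    import numpy as np

    c, B, Tc, fs = 3e8, 1e9, 50e-6, 10e6     # speed of light, chirp bandwidth/duration, ADC rate
    R = 30.0                                  # true target range in metres

    # After mixing the received chirp with the transmitted one, a point target at
    # range R appears as a beat tone at f_b = 2*B*R / (c*Tc).
    f_beat = 2 * B * R / (c * Tc)
    t = np.arange(int(fs * Tc)) / fs
    beat = np.cos(2 * np.pi * f_beat * t)

    # A range FFT converts the beat frequency back into a range estimate.
    n_fft = 4096
    spectrum = np.abs(np.fft.rfft(beat * np.hanning(len(beat)), n_fft))
    f_hat = np.fft.rfftfreq(n_fft, 1 / fs)[np.argmax(spectrum)]
    print(f_hat * c * Tc / (2 * B))           # close to 30 m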

ABOUT THE SPEAKER :
Karthik Subburaj is with the systems group in the wireless division at Texas Instruments, currently focusing on signal processing algorithms and RF system budgeting for radar systems. He has earlier worked on PHY-layer signal processing and digital design for GNSS receivers, FM transceivers, timing recovery in high-speed serial interfaces, and digital PLLs. He holds an M.S. degree from the University of Florida, Gainesville, and a B.E. from the College of Engineering, Guindy.

All are invited

Talk: January 9, 2015: Segregating Complex Sound Sources Through Temporal Coherence

Wednesday, February 11th, 2015

Dept. of Electrical Engineering, IISc, invites you to the following talk
related to using auditory processing insights for segregating sound
sources.
============================
Title: Segregating Complex Sound Sources Through Temporal Coherence
Speaker: Prof. Shihab A. Shamma,
Univ. of Maryland, USA and Ecole Normale Superieure, Paris
Time & Date: 4:00 pm; Friday January 9, 2015
Venue: Multimedia Classroom (PE 217), Department of Electrical
Engineering, IISc.

Abstract: A new approach for the segregation of monaural sound mixtures
is presented based on the principle of temporal coherence and using
auditory cortical representations. Temporal coherence is the notion that
perceived sources emit coherently modulated features that evoke
highly coincident neural response patterns. By clustering the feature
channels with coincident responses and reconstructing their input, one can
segregate the underlying source from the simultaneously interfering
signals that are uncorrelated with it. The proposed algorithm requires no
prior information or training on the sources. It can, however, gracefully
incorporate cognitive functions and influences such as memories of a
target source or attention to a specific set of its attributes so as to
segregate it from its background. Aside from its unusual structure and
computational innovations, the proposed algorithmic model provides
testable hypotheses of the physiological mechanisms of this ubiquitous and
remarkable perceptual ability, and of its psychophysical manifestations in
navigating complex sensory environments.
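
A very reduced Python/NumPy caricature of the temporal-coherence idea is given below: feature channels whose envelopes are strongly correlated with an "attended" anchor channel are grouped together, and masking the remaining channels before resynthesis would crudely segregate that source. The feature representation, anchor choice, and threshold are illustrative assumptions; the cortical model and clustering machinery described in the talk are far richer.

    import numpy as np

    def coherence_group(channel_envelopes, anchor, threshold=0.6):
        """Toy temporal-coherence grouping (illustrative only).

        channel_envelopes : (n_channels, n_frames) envelopes of feature channels
                            (e.g. filterbank outputs), assumed given.
        anchor            : index of a channel taken to belong to the target
                            source (e.g. selected by attention to its pitch).
        Returns indices of channels whose envelopes co-vary with the anchor.
        """
        env = np.asarray(channel_envelopes, dtype=float)
        env = env - env.mean(axis=1, keepdims=True)
        norms = np.linalg.norm(env, axis=1) + 1e-12
        corr = env @ env[anchor] / (norms * norms[anchor])   # correlation with anchor
        return np.where(corr >= threshold)[0]

    # Example: channels 0 and 2 share a common 4 Hz modulation; channel 1 is noise.
    t = np.linspace(0, 2, 400)
    common = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
    envs = np.stack([common, np.random.default_rng(1).random(400), 0.8 * common])
    print(coherence_group(envs, anchor=0))   # -> [0 2]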

Biography of the Speaker:
Shihab Shamma received his B.S. degree in 1976 from Imperial College, in
London, U.K. He received his M.S. and Ph.D. degrees in Electrical
Engineering from Stanford University in 1977 and 1980, respectively. Dr.
Shamma received his M.A. in Slavic Languages and Literature in 1980 from
the same institution. Dr. Shamma has been a member of the University of
Maryland faculty since 1984 when he started as an Assistant Professor in
the Electrical Engineering Department. He became an Associate Professor in
1989. He has been associated with the Systems Research Center since its
inception in 1985, and received a joint appointment in 1990. Dr. Shamma
also holds a joint appointment with the University of Maryland Institute
for Advanced Computer Studies. He has been a Fellow of the Acoustical Society of America since 2004. He received the ISR Outstanding Systems Engineering Faculty Award from the University of Maryland in 2007. His research interests include the representation of the acoustic signal at various levels in the mammalian auditory system and range from theoretical models of
auditory processing in early and central auditory stages, to
neuro-physiological investigations of the auditory cortex, to
psycho-acoustical experiments of human perception of acoustic spectral
profiles.

Dr. Shamma’s research deals with issues in computational neuroscience,
neuromorphic engineering, and the development of micro sensor systems for
experimental research and neural prostheses. Primary focus has been on
studying the computational principles underlying the processing and
recognition of complex sounds (speech and music) in the auditory system,
and the relationship between auditory and visual processing. Signal
processing algorithms inspired by data from neuro-physiological and psycho
acoustical experiments are being developed and applied in a variety of
systems such as speech and voice recognition and diagnostics in industrial
manufacturing. Other research interests included (at various times) the
development of photo-lithographic micro-electrode arrays for recording and
stimulation of neural signals, VLSI implementations of auditory
processing algorithms, and development of robotic systems for the
detection and tracking of multiple sound sources.
===================================
Co-sponsor:
IEEE Signal Processing Society
Bangalore chapter
===================================

Talk: September 8, 2014 : Time-Frequency Modulation Analysis of Autoregressive Spectrogram Models

Wednesday, February 11th, 2015

IEEE Signal Processing Society, Bangalore Chapter, IEEE Bangalore Section and
Department of Electrical Engineering, IISc

invite you to a talk by
Dr. Sriram Ganapathy
IBM T.J. Watson Research Center,
Yorktown Heights, USA

on
Time-Frequency Modulation Analysis of Autoregressive Spectrogram Models

Time & Date: 4:00pm; Monday September 8, 2014
Venue: Multimedia Classroom (PE 217), Department of Electrical Engineering, IISc.

Abstract
Even with several advancements in the practical application of speech technology, the performance of speech systems remains fragile at high levels of noise and other environmental distortions. On the other hand, various studies of the human auditory system have shown good resilience to high levels of noise and degradation. This information-shielding property of the auditory system may be largely attributed to the signal-peak-preserving functions performed by the cochlea and the spectro-temporal modulation filtering performed by the cortical stages. In this talk, I will discuss how these steps can be efficiently emulated in the front-end signal processing of automatic speech systems. A spectrographic representation that emphasizes the high-energy regions is derived using time-frequency autoregressive modeling, followed by a modulation filtering step that preserves only the key spectro-temporal modulations. The talk will conclude with experimental results from several interesting speech applications.
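
The two processing steps named in the abstract can be caricatured in a short Python sketch: frame-wise autoregressive (linear-prediction) spectral envelopes, followed by smoothing of the log-envelope trajectory along time as a crude stand-in for modulation filtering. The frame sizes, model order, and smoothing are illustrative assumptions, and this is plain frame-wise LPC, not the two-dimensional time-frequency autoregressive model of the talk.

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.ndimage import uniform_filter1d

    def lpc_envelope(frame, order=12, n_fft=512):
        """AR (linear-prediction) spectral envelope of one windowed frame."""
        r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
        a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])   # AR coefficients
        A = np.fft.rfft(np.concatenate(([1.0], -a)), n_fft)          # 1 - sum a_k z^-k
        gain = max(r[0] - np.dot(a, r[1:order + 1]), 1e-12)          # prediction error power
        return np.sqrt(gain) / (np.abs(A) + 1e-12)

    def modulation_filtered_spectrogram(x, frame_len=400, hop=160, mod_smooth=5):
        """Stack per-frame AR envelopes, then low-pass the log-envelopes along time."""
        frames = [x[i:i + frame_len] * np.hamming(frame_len)
                  for i in range(0, len(x) - frame_len, hop)]
        logspec = np.log(np.stack([lpc_envelope(f) for f in frames]) + 1e-12)
        return uniform_filter1d(logspec, size=mod_smooth, axis=0)

    # Toy usage on a synthetic two-formant-like signal with a little noise.
    fs = 16000
    t = np.arange(fs) / fs
    x = (np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
         + 0.01 * np.random.default_rng(0).standard_normal(fs))
    print(modulation_filtered_spectrogram(x).shape)   # (n_frames, n_freq_bins)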

Biography of the Speaker
Sriram Ganapathy is a research staff member at the IBM T.J. Watson Research Center, Yorktown Heights, USA, where he works on signal analysis methods for radio communication speech in highly degraded environments. He received his Doctor of Philosophy from the Center for Language and Speech Processing, Johns Hopkins University, in 2011, with Prof. Hynek Hermansky. He obtained his Bachelor of Technology from the College of Engineering, Trivandrum, India, in 2004 and his Master of Engineering from the Indian Institute of Science, Bangalore, in 2006. He was also part of the Idiap Research Institute in Martigny, Switzerland, from 2006 to 2008, contributing to various speech and audio projects. His research interests include signal processing, machine learning, and robust methodologies for speech and speaker recognition. He has over 50 publications in leading international journals and conferences in speech and audio processing, along with several patents.