September 8th, 2019

Dept. of Electrical Communication Engineering and


IEEE Signal Processing Society Bangalore Chapter


invite you to the following seminar:


Title: Large Scale Data Analytics for Airborne Imagery

Speaker: Prof. Gaurav Sharma, Univ. of Rochester

Time and Date: 11 AM, September 10, 2019

Venue: Golden Jubilee Hall, ECE



 The widespread availability of high-resolution aerial imagery covering wide geographical areas is spurring a revolution in large-scale visual data analytics. Specifically, modern aerial wide area motion imagery (WAMI) platforms capture large, high-resolution images at rates of 1-3 frames per second. The resulting image sequences, each of which spans several square miles of ground area, represent rich spatio-temporal datasets that are key enablers for new applications. The effectiveness of such analytics can be enhanced by combining WAMI with alternative sources of rich geo-spatial information, such as road maps or prior georegistered images. We present results from our recent research in this area covering three topics. First, we describe a novel method for pixel-accurate, real-time registration of vector roadmaps to WAMI imagery based on moving vehicles in the scene. Next, we present a framework for tracking vehicles in WAMI across multiple frames that uses the registered roadmap and a new probabilistic model to better estimate associations across frames with a computationally tractable algorithm. Finally, in the third part, we highlight how combining structure from motion with our proposed registration approach yields 3D georegistration for use in applications such as change detection. We present results on multiple WAMI datasets, including nighttime infrared WAMI imagery, highlighting the effectiveness of the proposed methods through both visual and numerical comparisons.

Speaker Biography

 Gaurav Sharma is a professor in the Electrical and Computer Engineering Department and a Distinguished Researcher in the Center of Excellence in Data Science (CoE) at the Goergen Institute for Data Science at the University of Rochester. He received the PhD degree in Electrical and Computer Engineering from North Carolina State University, Raleigh, in 1996. From 1993 through 2003, he was with the Xerox Innovation Group in Webster, NY, most recently as Principal Scientist and Project Leader. His research interests include data analytics, cyber-physical systems, signal and image processing, computer vision, and media security; in these areas he holds 52 patents and has authored over 200 journal and conference publications. He currently serves as the Editor-in-Chief of the IEEE Transactions on Image Processing. From 2011 through 2015, he served as the Editor-in-Chief of the Journal of Electronic Imaging and, in the past, has served as an associate editor for the Journal of Electronic Imaging, the IEEE Transactions on Image Processing, and the IEEE Transactions on Information Forensics and Security. He is a member of the IEEE Publication Services and Products Board (PSPB) and chaired the IEEE Conference Publications Committee in 2017-18. He is the editor of the Digital Color Imaging Handbook, published by CRC Press in 2003. Dr. Sharma is a fellow of the IEEE, a fellow of SPIE, a fellow of the Society for Imaging Science and Technology (IS&T), and has been elected to Sigma Xi, Phi Kappa Phi, and Pi Mu Epsilon. In recognition of his research contributions, he received an IEEE Region 1 technical innovation award in 2008.

September 3rd, 2019


Department of ECE, Indian Institute of Science

IEEE Bangalore Section

IEEE Signal Processing Society Bangalore Chapter

welcome you to a 

Special Lecture


Title:  Brain Inspired Automated Concept and Object Learning: Vision, Text, and Beyond

Speakers: Vwani Roychowdhury (UCLA) and 

Thomas Kailath (Stanford) 

Venue: ECE Golden Jubilee Seminar Hall

             Department of  ECE, IISc

Day/ Date: Friday, 6 September 2019

Time: 3-5 pm

High Tea at 5pm


Brains are endowed with innate models that can learn effective informational and reasoning prototypes of the various objects and concepts in the real world around us. A distinctive hallmark of the brain, for example, is its ability to automatically discover and model objects, at multi-scale resolutions, from repeated exposures to unlabeled contextual data, and then to robustly detect the learned objects under various non-ideal circumstances, such as partial occlusion and different view angles. Replication of such capabilities in a machine would require three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data. The first part of this talk will focus on our recent work, which leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes—brain-inspired flexible, scale- and shift-invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) composed of parts, their different configurations and views, and their spatial relationships. We apply our framework to various datasets and show that our approach is computationally scalable and can construct accurate and operational part-aware object models far more efficiently than much of the recent computer vision literature. We also present efficient algorithms for detection and localization of objects and their partial views in new scenes. The second part of the talk will focus on processing large-scale textual data, wherein our algorithms can create semantic concept-level maps from unstructured datasets. 
Finally, we will conclude with the outline of a general framework for contextual unsupervised learning that can remove many of the scalability and robustness limitations of existing supervised frameworks, which require large amounts of labeled training data and mostly act as impressive memorization engines. 

Short Bio:

Vwani Roychowdhury is a Professor of Electrical and Computer Engineering at UCLA and received his BTech and PhD degrees in Electrical Engineering from IIT Kanpur and Stanford University, respectively. Prof. Roychowdhury's expertise lies in combining tools from a number of disciplines, including computer science, engineering, information theory, mathematics, and physics, to solve fundamental problems across fields. His research interests have spanned a diverse set of topics related to combinatorics and theoretical computer science, artificial neural networks, nanoelectronics and device modeling, quantum computing, quantum information and cryptography, the physics of information processing and computation, bioinformatics, and, more recently, brain-inspired machine learning and brain modeling. He has published more than 250 journal and conference papers and coauthored several books. He has mentored more than 25 PhD students and 20 post-doctoral fellows and is always seeking collaborations with problem solvers and seekers. He also cofounded four Silicon Valley startups; one of these, founded in Jan. 2017, pioneered the unsupervised distillation of Concept Graphs from billions of documents, raised upwards of US$45 million in investment, and was acquired in February 2017. 

Short Bio of Prof. Kailath:

Thomas Kailath received a B.E. (Telecom) degree in 1956 from the College of Engineering, Pune, India, and S.M. (1959) and Sc.D. (1961) degrees in electrical engineering from the Massachusetts Institute of Technology. He then worked at the Jet Propulsion Laboratory in Pasadena, CA, before being appointed to Stanford University as Associate Professor of Electrical Engineering in 1963. He was promoted to Professor in 1968 and appointed as the first holder of the Hitachi America Professorship in Engineering in 1988. He assumed emeritus status in 2001 but remains active in his research and writing. He has also held shorter-term appointments at several institutions around the world. 

His research and teaching have ranged over several fields of engineering and mathematics: information theory, communications, linear systems, estimation and control, signal processing, semiconductor manufacturing, probability and statistics, and matrix and operator theory. He has also co-founded and served as a director of several high-technology companies. He has mentored an outstanding array of over a hundred doctoral and postdoctoral scholars. Their joint efforts have led to over 300 journal papers, a dozen patents and several books and monographs, including the major textbooks: Linear Systems (1980) and Linear Estimation (2000). 

He received the IEEE Medal of Honor in 2007 for “exceptional contributions to the development of powerful algorithms for communications, control, computing and signal processing.” Among other major honors are the Shannon Award of the IEEE Information Theory Society; the IEEE Education Medal and the IEEE Signal Processing Medal; the 2009 BBVA Foundation Prize for Information and Communication Technologies; the Padma Bhushan, India's third-highest civilian award; election to the U.S. National Academy of Engineering, the U.S. National Academy of Sciences, and the American Academy of Arts and Sciences; and foreign membership of the Royal Society of London, the Royal Spanish Academy of Engineering, the Indian National Academy of Engineering, the Indian National Science Academy, the National Academy of Sciences, India, the Indian Academy of Sciences, and TWAS (The World Academy of Sciences). 

In November 2014, he received the 2012 US National Medal of Science from President Obama “for transformative contributions to the fields of information and system science, for distinctive and sustained mentoring of young scholars, and for translation of scientific ideas into entrepreneurial ventures that have had a significant impact on industry.”

July 31st, 2019

IEEE Signal Processing Society Bangalore Chapter invites you to the following event:




Presented by     Harsha Kikkeri

                             CEO, Holosuit Pte Ltd, Mysore

Venue:               ECE Dept, Golden Jubilee Hall, Indian Institute of Science

Date/Time:        Friday, 2nd August 2019, 2-6 pm


Program :

1:30-2:00 pm –  On-the-spot registration (if available)

2:00-3:00 pm –  Talk: How to Build a Humanoid Robot and Control It

3:00-3:15 pm –  Q & A

3:15-3:45 pm –  XPrize Humanoid $10 Million Challenge presentation

3:45-4:15 pm –  Q & A

4:15-4:30 pm –  Tea break

4:30-6:00 pm –  Workshop on robots and virtual reality, and discussion

Please register here


Abstract: How to build a Humanoid Robot and Control it?

Building humanoid robots is the holy grail of robotics. From Honda's ASIMO to Boston Dynamics' Atlas, there have been various attempts to build general-purpose humanoid robots. This talk will explore the challenges involved in building an end-to-end humanoid robot and will showcase solutions to some of them. In particular, it will focus on building the hardware in a human form factor with the required accuracy, without making it cost-prohibitive, while still being able to control it. It will also cover the simulation and learning involved in mapping and navigating unpredictable terrain, grasping and manipulating objects with hands, bipedal motion and control, environmental understanding, task planning, and learn-by-demonstration capabilities, as well as the underlying open-source tools and hardware sensors/actuators required for a humanoid robot. A workshop afterwards will showcase some robots.


Harsha Kikkeri

Harsha Kikkeri is the CEO of Holosuit Pte Ltd, where he is building HoloSuit, an AI-enabled full-body analytics platform that acts as a virtual trainer for your body. He has over 18 years of experience working on IoT, augmented/virtual reality, and aerial and ground robots, with expertise in drones, sensor fusion, and machine learning. He did pioneering research at Microsoft Robotics in the USA, building robots that could learn by demonstration. He has won numerous leadership awards, including a Gold Star from Microsoft, an Excellence Award from Infosys, and the Bharat Petroleum Scholarship, and has won numerous chess tournaments. He holds a Master's in Electrical Engineering from Syracuse University, NY, and a BE in Electronics from SJCE, Mysore, India. He holds 44 international patents from the US, Europe, China, Japan, and other countries. He is a TEDx speaker. He had the honor of working under Dr. T. V. Sreenivas at IISc, which led him into the field of digital signal processing and eventually into robotics.

June 13th, 2019


Department of Electrical Engineering (EE), IISc and IEEE SPS Bangalore Chapter invite you to the following seminar:

Title: “Physics-Based Vision and Learning”
(Joint work with Yunhao Ba and Guangyuan Zhao)

Speaker: Dr. Achuta Kadambi


Host faculty : Dr. Chandra Sekhar Seelamantula

Time & Venue: Friday, 14th June 2019, 4:00 PM, MMCR (first floor), Electrical Engineering Department, IISc.

Abstract: Today, deep learning is the de facto approach to solving many computer vision problems. However, in adopting deep learning, one may overlook a subtlety: the physics of how light interacts with matter. By exploiting these previously overlooked subtleties, we will describe how we can rethink the longstanding problem of 3D reconstruction. Using the lessons learned from this prior work, we will then discuss the future symbiosis between physics and machine learning, and how this fusion can transform many application areas in imaging.

Biography: Achuta Kadambi is an Assistant Professor of Electrical and Computer Engineering at UCLA, where he directs the Visual Machines Group. The group blends the physics of light with artificial intelligence to give the gift of sight to robots. Achuta received his BS from UC Berkeley and his PhD from MIT, completing an interdepartmental doctorate between the MIT Media Lab and MIT EECS. Please see his group web page for research specifics.

March 8th, 2019

Department of Electrical Communication Engineering (ECE), IISc


IEEE SPS Bangalore Chapter

invite you to the following seminar:


Title: Coherence Diversity in Multi-User Networks

Speaker:  Prof. Aria Nosratinia
Electrical Engineering Department
University of Texas at Dallas

Date/Time: 18 March 2019, 4 PM

Venue: Golden Jubilee Seminar Hall, Dept. of ECE

Although links in a wireless network may easily experience different
coherence conditions, the literature in communication and information
theory has mostly concentrated on coherence intervals of equal length
throughout the network. This talk explores new and exciting developments
in the field of non-uniform fading dynamics, where the disparity of
fading intervals can lead to new gains in multi-user networks that are
distinct from previously known phenomena. Product superposition, a new
tool developed to address non-uniform dynamics, will be introduced.
We begin by studying the application of this tool in the 2-user
broadcast channel. The results will be extended to the multi-user
broadcast channel. Disparity in either coherence bandwidth, or both
coherence time & bandwidth, will be discussed. Time permitting, the
interplay with non-uniform or stale CSI, and the interactions of
product superposition with retrospective interference alignment will
be discussed.

Speaker’s Bio:
Aria Nosratinia is Erik Jonsson Distinguished Professor and
associate head of the Electrical Engineering Department at the
University of Texas at Dallas. He received his Ph.D. in Electrical
and Computer Engineering from the University of Illinois at
Urbana-Champaign in 1996. He has held visiting appointments at
Princeton University, Rice University, and UCLA.  His interests
lie in the broad area of information theory and signal processing,
with applications in wireless communication. Dr. Nosratinia is a
fellow of IEEE for contributions to multimedia and wireless
communications. He has served as editor and area editor for the
IEEE Transactions on Wireless Communications, and editor for the
IEEE Transactions on Information Theory, IEEE Transactions on
Image Processing, IEEE Signal Processing Letters, IEEE Wireless
Communications, and Journal of Circuits, Systems, and Computers.
He has received the National Science Foundation career award, and
the outstanding service award from the IEEE Signal Processing
Society, Dallas Chapter. He has served on the organizing committees
and technical program committees for a number of conferences, most
recently as the general co-chair of ITW 2018. He was named a highly
cited researcher by Clarivate Analytics (formerly Thomson Reuters).

February 20th, 2019


Department of Computational and Data Sciences and IEEE SP Bangalore Chapter invite you for the following seminar

SPEAKER  : Dr. Mayank Vatsa and Dr. Richa Singh
TITLE         : Role and Strengths of Adversarial Perturbations in Deep Learning
Date/Time   : February 21, 2019 (Thursday) 11:00 AM
Venue          : 102 CDS Seminar Hall.


Models based on deep neural network architectures have high expressive power and learning capacity. Due to several advancements, deep learning based models have shown very high accuracies on challenging databases, including face databases. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities.

Adversarial attacks on automated classification systems have been an area of interest for a long time. In 2002, Ratha et al. proposed eleven points of attack on a biometric/face recognition system. For instance, an adversary can operate at the input/image level or the decision level and cause incorrect face recognition results. Research on adversarial learning for attacking face recognition systems has three key components: (i) creating adversarial images, (ii) detecting whether an image has been adversarially altered, and (iii) mitigating the effect of the adversarial perturbation process. These adversaries create different kinds of effects on the input, and detecting them requires a combination of hand-crafted as well as learned features; for instance, some of the existing attacks can be detected using principal components, while some hand-crafted attacks can be detected using well-defined image processing operations. It is therefore important to detect adversarial perturbations and mitigate their effects using an ensemble of defense algorithms. While the majority of research on adversarial perturbations focuses on attacking deep learning models, in this talk we will also discuss how adversarial perturbations can be used to build Trusted-AI systems. Along two threads in this direction, we will discuss privacy-preserving applications for faces as well as a novel concept of Data Fine-tuning.
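As a toy illustration of component (i) above, the sketch below implements the well-known fast gradient sign method (FGSM) against a tiny logistic-regression "model"; the weights, input, and epsilon are invented for the example and are not the speakers' algorithms or data.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression model:
    nudge each input coordinate by eps in the direction that
    increases the cross-entropy loss for the true label y."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y = 1)
    grad_x = (p - y) * w                    # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)

# Toy input: the perturbation lowers the model's confidence in y = 1.
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([1.0, 0.0]), 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
print(x_adv)  # [0.9 0.1] -- the logit drops from 1.0 to 0.8
```

The same sign-of-gradient idea carries over to deep networks, where the gradient is obtained by backpropagation instead of the closed form used here.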


Mayank Vatsa received the M.S. and Ph.D. degrees in computer science from West Virginia University, USA, in 2005 and 2008, respectively. He is currently the Head of the Infosys Center for Artificial Intelligence, an Associate Professor at IIIT-Delhi, India, and an Adjunct Associate Professor at West Virginia University, USA. He has co-edited the book Deep Learning in Biometrics and co-authored over 250 research papers. His areas of interest are biometrics, image processing, machine learning, computer vision, and information fusion. He is a Senior Member of IEEE and ACM. He was a recipient of the A. R. Krishnaswamy Faculty Research Fellowship at IIIT-Delhi, the FAST Award project by DST, India, and several best paper and best poster awards at international conferences. He is also the recipient of the prestigious Swarnajayanti Fellowship from the Government of India. He is an Area Editor of Information Fusion (Elsevier), General Co-Chair of IJCB 2020, and was PC Co-Chair of ICB 2013 and IJCB 2014. He has served as the Vice President (Publications) of the IEEE Biometrics Council, where he started the IEEE Transactions on Biometrics, Behavior, and Identity Science.


Richa Singh received the Ph.D. degree in computer science from West Virginia University, Morgantown, USA, in 2008. She is currently an Associate Dean of Alumni and Communications, an Associate Professor at IIIT-Delhi, India, and an Adjunct Associate Professor at West Virginia University. She has co-edited the book Deep Learning in Biometrics and has delivered tutorials on deep learning and domain adaptation at ICCV 2017, AFGR 2017, and IJCNN 2017. Her areas of interest are pattern recognition, machine learning, and biometrics. She is a Fellow of IAPR and a Senior Member of IEEE and ACM. She was a recipient of the Kusum and Mohandas Pai Faculty Research Fellowship at IIIT-Delhi, the FAST Award by the Department of Science and Technology, India, and several best paper and best poster awards at international conferences. She has also served as Program Co-Chair of BTAS 2016 and IWBF 2018, and as General Co-Chair of ISBA 2017. She is currently serving as Program Co-Chair of AFGR 2019 and IJCB 2020, and as the Vice President (Publications) of the IEEE Biometrics Council. She is an Editorial Board Member of Information Fusion (Elsevier) and an Associate Editor of Pattern Recognition, Computer Vision and Image Understanding, and the EURASIP Journal on Image and Video Processing (Springer).

February 13th, 2019


Department of Computational and Data Sciences and IEEE SP Bangalore Chapter invite you for the following seminar

SPEAKER  : Dr. Shirin Dora, Post Doc

TITLE         : Multisensory Integration in the Brain

Date/Time   : February 14, 2019 (Thursday) 04:00 PM

Venue          : 102 CDS Seminar Hall.


Multisensory integration is a phenomenon by which the brain infers coherent and robust representations from incoming sensory information in different modalities. It plays a significant role in perception as well as in cognitive functions from memory to decision-making. Because of its inherently multimodal nature, it is hard to study in experiments, and many open questions exist regarding its underlying neural mechanisms. In my research, I approach this problem from two different perspectives, using computational models in conjunction with experimental data. In the first method, I focus on understanding the neurobiological mechanisms that might support multisensory integration. I will show that deep neural networks trained using predictive coding can account for neuronal properties like selectivity and sparsity along the visual cortical hierarchy. In the second method, the focus shifts to identifying the underlying structures necessary for multisensory integration, rather than the intermediate mechanisms that yield these structures.


Shirin Dora completed his PhD in machine learning at Nanyang Technological University in Singapore. His research focused on developing biologically plausible learning approaches for spiking neural networks. During his PhD, he developed a keen interest in the mechanisms of perception and cognition in the brain. This led him to pursue post-doctoral research in computational neuroscience in the cognitive and systems neuroscience group at the University of Amsterdam. In his postdoctoral research, he collaborates with experimentalists in building models of perception and multisensory integration in the brain.



                                                               ALL ARE WELCOME

November 28th, 2018

The IEEE Signal Processing Society, Bangalore Chapter


Department of Electrical Engineering, Indian Institute of Science


cordially invite you to an IEEE Distinguished Lecture on

Computational Imaging with Few Photons, Electrons, or Ions


Speaker: Prof. Vivek Goyal, Boston University


Date and Time: December 3, 2018, 11 AM to 12 noon (coffee/tea: 10.45 AM)


Venue: Multimedia Classroom, Electrical Engineering Department, IISc.



LIDAR systems use single-photon detectors to enable long-range reflectivity and depth imaging. By exploiting an inhomogeneous Poisson process observation model and the typical structure of natural scenes, first-photon imaging demonstrates the possibility of accurate LIDAR with only 1 detected photon per pixel, where half of the detections are due to (uninformative) ambient light. I will explain the simple ideas behind first-photon imaging. Then I will touch upon related subsequent works that mitigate the limitations of detector arrays, withstand 25 times more ambient light, allow for unknown ambient light levels, and capture multiple depths per pixel. The philosophy of modeling at the level of individual particles is also at the root of current work in focused ion beam microscopy.
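As a toy illustration of the particle-level modeling idea, and not the speaker's actual estimator, the sketch below simulates per-pulse photon detection as a geometric (pulses-until-first-photon) process and inverts that model to estimate reflectivity; the signal, background, and reflectivity values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulses_until_first_photon(a, S=1.0, B=0.1):
    """Each laser pulse yields a detection with probability
    1 - exp(-(a*S + B)): Poisson photon counts from signal a*S
    plus background B, thresholded at 'at least one photon'.
    Returns the pulse index of the first detection."""
    p = 1.0 - np.exp(-(a * S + B))
    return rng.geometric(p)

def reflectivity_estimate(n_mean, S=1.0, B=0.1):
    """Invert the geometric model: a mean of n_mean pulses to the
    first photon implies per-pulse detection probability 1/n_mean,
    hence total rate -log(1 - 1/n_mean); subtract the background."""
    return max((-np.log(1.0 - 1.0 / n_mean) - B) / S, 0.0)

# Averaging many first-photon waiting times recovers the reflectivity.
true_a = 0.5
n = [pulses_until_first_photon(true_a) for _ in range(5000)]
print(reflectivity_estimate(np.mean(n)))  # close to 0.5
```

The point of the toy model is the same as in the talk: even a single detected photon per pixel carries information about reflectivity once the detection process is modeled explicitly.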


Speaker biography:

Vivek Goyal received the M.S. and Ph.D. degrees in electrical engineering from the University of California, Berkeley, where he received the Eliahu Jury Award for outstanding achievement in systems, communications, control, or signal processing. He was a Member of Technical Staff at Bell Laboratories, a Senior Research Engineer at Digital Fountain, and the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering at MIT. He was an adviser to 3dim Tech, winner of the 2013 MIT $100K Entrepreneurship Competition Launch Contest Grand Prize, and consequently was with Nest Labs from 2014 to 2016. He is now an Associate Professor of Electrical and Computer Engineering at Boston University.


Dr. Goyal is a Fellow of the IEEE. He was awarded the 2002 IEEE Signal Processing Society (SPS) Magazine Award, the 2017 IEEE SPS Best Paper Award, an NSF CAREER Award, and the Best Paper Award at the 2014 IEEE International Conference on Image Processing. Work he supervised won student best paper awards at the IEEE Data Compression Conference in 2006 and 2011, the IEEE Sensor Array and Multichannel Signal Processing Workshop in 2012, and the IEEE International Conference on Image Processing in 2018, as well as five MIT thesis awards. He currently serves on the Editorial Board of Foundations and Trends in Signal Processing, the IEEE SPS Computational Imaging SIG, and the IEEE SPS Industry DSP TC. He previously served on the Scientific Advisory Board of the Banff International Research Station for Mathematical Innovation and Discovery, as Technical Program Committee Co-chair of Sampling Theory and Applications 2015, and as Conference Co-chair of the SPIE Wavelets and Sparsity conference series from 2006 to 2016. He is a co-author of Foundations of Signal Processing (Cambridge University Press, 2014).

November 1st, 2018


Department of Electrical Engineering
IEEE Signal Processing Society Bangalore Chapter

invite you to a talk by

Dr. Emtiyaz Khan, RIKEN Center for Advanced Intelligence Project, Tokyo


Fast and Scalable Estimation of Uncertainty using Bayesian Deep Learning

November 2, 2018; 2.30 PM, Multimedia Classroom, Electrical Engineering department, IISc


Uncertainty estimation is essential for designing robust and reliable systems, but it usually requires more effort to implement and execute than maximum-likelihood methods. In this talk, I will summarize some of our recent work that enables fast and scalable estimation of uncertainty with deep models such as Bayesian neural networks. The main feature of our methods is that they are extremely easy to implement within existing deep-learning software. I will also summarize some of the current challenges faced by the Bayesian deep-learning community and how real-world applications can be useful for our research.
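To illustrate what "estimating uncertainty" means in the Bayesian setting, the sketch below uses classical conjugate Bayesian linear regression, a much simpler model than the Bayesian neural networks of the talk; the prior and noise precisions and the data are invented for the example.

```python
import numpy as np

def posterior(Phi, y, alpha=1.0, beta=25.0):
    """Bayesian linear regression: prior w ~ N(0, I/alpha),
    Gaussian observation noise with precision beta. Returns the
    posterior mean m and covariance S of the weights."""
    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    m = beta * S @ Phi.T @ y
    return m, S

def predict(phi, m, S, beta=25.0):
    """Predictive mean and variance at feature vector phi:
    noise variance 1/beta plus weight-uncertainty term."""
    return phi @ m, 1.0 / beta + phi @ S @ phi

# Fit y = 2x on x in [0, 1]; uncertainty grows away from the data.
x = np.linspace(0.0, 1.0, 9)
Phi = np.stack([np.ones_like(x), x], axis=1)  # features [1, x]
m, S = posterior(Phi, 2.0 * x)
_, var_near = predict(np.array([1.0, 0.5]), m, S)  # inside the data
_, var_far = predict(np.array([1.0, 5.0]), m, S)   # far extrapolation
print(var_far > var_near)  # True: extrapolation is more uncertain
```

A maximum-likelihood fit would return the same point predictions but no analogue of `var_near`/`var_far`; making such variances cheap to obtain for deep networks is the theme of the talk.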

Joint work with Wu Lin (UBC), Didrik Nielsen (RIKEN), Voot Tangkaratt (RIKEN), Yarin Gal (UOxford), Akash Srivastva (UEdinburgh), Zuozhu Liu (SUTD).

About the speaker:
Dr. Emtiyaz Khan is a team leader (equivalent to Full Professor) at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference (ABI) Team. Since April 2018, he has been a visiting professor in the EE department at the Tokyo University of Agriculture and Technology (TUAT) and a part-time lecturer at Waseda University.
