IEEE
November 22nd, 2017

The IEEE Signal Processing Society, Bangalore Chapter and

The Department of Electrical Communication Engineering, IISc

invite you to the following lecture.

Name of the speaker: Dr. Arun Kumar

Title of the Talk: Acoustic Vector Sensors and related Signal Processing for Air and Underwater Applications

Date and Time: 4th Dec, 2017 at 3PM

Venue: Golden Jubilee Seminar Hall of ECE Department

Abstract

Acoustic emissions from radiating sources or targets, in either an air or underwater medium, can be used to detect, localize, and track them passively. The classical method of estimating the Direction of Arrival (DOA) of a radiating acoustic source is to use a spatially distributed array of pressure sensors, i.e., hydrophones or microphones. The DOA can also be estimated from a collocated measurement of the particle velocity and the pressure of the signal, which gives the acoustic intensity vector. An Acoustic Vector Sensor (AVS) consists of three orthogonally oriented velocity sensors and a pressure hydrophone or microphone, all spatially collocated in a point-like geometry. The collocated measurement, viz. the particle velocity (a vector quantity) together with the pressure, results in a 4-dimensional vector recorded as a function of time. This measurement can give improved DOA estimation even with a single AVS or smaller AVS arrays, compared with pressure-only sensor arrays, which require comparatively large apertures. By making additional use of the velocity information, systems built around AVSs have numerous advantages over pressure sensor arrays.
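To make the intensity-based idea concrete, here is a minimal Python sketch of DOA estimation from a single AVS, assuming an ideal single-tone plane wave in a lossless medium and noise-free, perfectly collocated channels; all parameter values are illustrative, and real systems must of course contend with noise, interference, and sensor imperfections.

# Minimal sketch: DOA estimation from a single acoustic vector sensor (AVS).
# Assumes an ideal plane wave and noise-free, perfectly collocated channels;
# parameters are illustrative only.
import numpy as np

rho, c = 1.225, 343.0          # air density (kg/m^3) and sound speed (m/s)
fs, f0, T = 48000, 500.0, 0.1  # sample rate, tone frequency, duration (s)
az_true, el_true = np.deg2rad(60.0), np.deg2rad(20.0)  # true source DOA

# Unit vector pointing from the sensor toward the source.
u = np.array([np.cos(el_true) * np.cos(az_true),
              np.cos(el_true) * np.sin(az_true),
              np.sin(el_true)])

t = np.arange(int(fs * T)) / fs
p = np.cos(2 * np.pi * f0 * t)            # pressure channel
# For a plane wave, particle velocity has magnitude p/(rho*c) and points
# along the propagation direction, i.e. along -u (away from the source).
v = (-u[:, None] * p) / (rho * c)         # 3 velocity channels (vx, vy, vz)

# Time-averaged acoustic intensity points along the propagation direction,
# so the direction toward the source is its negative.
I = (p * v).mean(axis=1)
d = -I / np.linalg.norm(I)

az_hat = np.degrees(np.arctan2(d[1], d[0]))
el_hat = np.degrees(np.arcsin(d[2]))
print(f"estimated DOA: azimuth {az_hat:.1f} deg, elevation {el_hat:.1f} deg")

Running this recovers the 60/20 degree direction exactly, since the simulated 4-channel measurement is noise-free.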

In the talk, the use of AVS will be motivated, followed by a presentation of the salient features of an AVS, and two methods for measuring the particle velocity. The practical issues related to the two methods of constructing AVS will then be discussed. The work done by our research group at IIT Delhi for the design, development and evaluation of air and underwater acoustic vector sensors and their related signal processing algorithms will also be presented.

Short Biography of Prof. Arun Kumar

Arun Kumar received the B.Tech, M.Tech, and PhD degrees in Electrical Engineering from IIT Kanpur. He was a Visiting Researcher at the University of California, Santa Barbara, USA, for 2 years. Since 1997, he has been with the Centre for Applied Research in Electronics, IIT Delhi, where he is a Professor. He was Head of the Centre for 4 years. He also served as Head of the Instrument Design and Development Centre of IIT Delhi. His research interests are in digital signal processing, human and machine speech communication technologies, underwater and air acoustics, acoustic imaging, acoustic vector sensors, and multi-sensor data fusion for mobile and wearable devices.

Arun Kumar is an inventor on 8 US patent applications (2 granted and 6 pending). He has published over 100 papers in refereed journals and conferences, and has supervised 55 funded R&D projects. These have led to 20 technology and know-how transfers. Several of these technologies, such as the High Speed VLF Communication Modem for communication with submerged submarines, are deployed in the field and in practical use.

He has served on several National level committees. He is Editor of the IETE Journal of Research. He is also co-founder and non-executive Director of two technology companies working in the areas of speech technologies and DSP products.

November 20th, 2017

Dear All,

IEEE Signal Processing Society, Bangalore Chapter and

Department of Computational and Data Sciences, Indian Institute of Science

Invite you to the following talk:

Speaker: Prof. MANUEL J. MARÍN-JIMÉNEZ

University of Granada, Spain

Title: Towards automatic learning of gait signatures for people identification in video

Time & Date: 11:30 am, Tuesday, Nov 28, 2017

Venue: CDS Seminar Hall (Room No: 102), IISc.

Abstract:

This talk targets people identification in video based on the way they walk (i.e., gait). While classical methods typically derive gait signatures from sequences of binary silhouettes, in this talk we present the use of convolutional neural networks (CNNs) for learning high-level descriptors from low-level motion features (i.e., optical flow components). We carry out a thorough experimental evaluation of the proposed CNN architecture on the challenging TUM-GAID dataset. The experimental results indicate that using spatiotemporal cuboids of optical flow as input to the CNN yields state-of-the-art results on the gait task, at an image resolution eight times lower than in previously reported results (i.e., 80×60 pixels).
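As a rough illustration of this input representation, the sketch below (PyTorch) builds a small CNN whose input channels stack the 2-channel optical flow (dx, dy) of L consecutive 80×60 frames into one cuboid. The layer sizes, L, and the number of subjects are made-up placeholders, not the architecture evaluated in the talk.

# Minimal sketch (PyTorch): a CNN over a spatiotemporal optical-flow cuboid.
# All layer sizes and counts are illustrative placeholders.
import torch
import torch.nn as nn

L, H, W, n_ids = 25, 60, 80, 155   # frames per cuboid, resolution, #subjects

net = nn.Sequential(
    nn.Conv2d(2 * L, 96, kernel_size=7), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(96, 192, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(192, 512, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(512, n_ids),          # identity logits (softmax over subjects)
)

x = torch.randn(4, 2 * L, H, W)     # a batch of 4 flow cuboids
print(net(x).shape)                 # -> torch.Size([4, 155])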

In the second part of this talk, we argue that, although gait is mainly used for identification, additional tasks such as gender recognition or age estimation may be addressed based on gait as well. In such cases, traditional approaches treat those tasks as independent, defining separate task-specific features and models for them. Our approach shows that by jointly training more than one gait-based task, the identification task converges faster than when it is trained independently, and the recognition performance of multitask models is equal or superior to that of more complex single-task ones. Our model is a multitask CNN that receives as input a fixed-length sequence of optical flow channels and outputs several biometric features (identity, gender and age).

Finally, we will show preliminary results on multimodal feature fusion, based on CNNs, for improving recognition. In particular, the input sources are gray level pixels, depth maps and optical flow. The experiments show interesting and promising results to continue this line.

Speaker Bio:

MANUEL J. MARÍN-JIMÉNEZ received the BSc, MSc and PhD degrees from the University of Granada, Spain. He has worked, as a visiting student, at the Computer Vision Center of Barcelona (Spain), VisLab-ISR/IST of Lisbon (Portugal) and the Visual Geometry Group of Oxford (UK); and, as a visiting researcher, at INRIA-Grenoble (Perception team) and the Human Sensing Lab (CMU). He is the coauthor of more than 50 technical papers at international venues and serves as a reviewer for top computer vision and pattern recognition journals. Currently, he works as an associate professor (tenured position) at the University of Cordoba (Spain). His research interests include object detection, human-centric video understanding, visual SLAM and machine learning.

May 30th, 2017

IEEE Signal Processing Society, Bangalore Chapter, IEEE Bangalore Section,
Microsoft Research India, Bangalore,
and
Department of Electrical Engineering, Indian Institute of Science

invite you to a seminar by

Prof. Alan Black,
Language Technologies Institute,
Carnegie Mellon University
USA

on
“Data Driven Conversational Dialog”

Date : Friday, June 02, 2017.
Time : 4:00 PM

Venue : Multimedia Classroom (Room: C241), 1st floor,
Department of Electrical Engineering

Abstract: Historically, successful spoken dialog systems were hand-crafted sets of explicit rules that defined a set of paths through potential turns between a user and a machine. Although often very successful, these are expensive to develop and require substantial work to expand to new domains. Recently there have been attempts to use databases of existing conversations to learn dialog structure, thus making the build process easier. There are some successes here, but there are also significant problems. Finding the right data is hard, or may even be impossible; indeed, finding the “right” data has become a research goal in itself. This talk will present the current techniques in statistical and neural conversational models for dialog systems, their successes and their limitations, as well as potential research directions for addressing these shortcomings.
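As a toy illustration of the data-driven idea (not a method from the talk), the following sketch builds the simplest possible retrieval-based conversational model: given a user turn, it returns the stored response whose context turn is most similar under TF-IDF cosine similarity. The miniature conversation database is invented for the example.

# Toy retrieval-based dialog: answer with the response whose recorded
# context turn best matches the user's turn. Data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [
    ("when does the next bus leave", "the 61C leaves at 5:40 pm"),
    ("where does the 61C stop", "it stops at Fifth and Craig"),
    ("thanks for the help", "you are welcome, goodbye"),
]
contexts = [c for c, _ in pairs]

vec = TfidfVectorizer().fit(contexts)
C = vec.transform(contexts)

def respond(user_turn):
    # Rank stored context turns by cosine similarity to the user turn.
    sims = cosine_similarity(vec.transform([user_turn]), C)
    return pairs[sims.argmax()][1]

print(respond("what time is the next bus"))  # -> "the 61C leaves at 5:40 pm"

Such retrieval models sidestep explicit rules, but they also expose the data problem the talk highlights: the system can only say what its database happens to contain.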

Biography of the speaker: Alan W Black is a Professor in the Language Technologies Institute at
Carnegie Mellon University. He was born in Edinburgh, Scotland, and did his bachelor's in Coventry, England, and his master's and doctorate at the University of Edinburgh. Before joining the faculty at CMU in 1999, he worked in the Centre for Speech Technology Research at the University of Edinburgh, and before that at ATR in Japan. He is one of the principal authors of the free software Festival Speech Synthesis System, the FestVox voice building tools and CMU Flite, a small-footprint speech synthesis engine that is the basis for many research and commercial systems around the world. He also works on spoken dialog systems, including the LetsGo Bus Information project and mobile speech-to-speech translation systems, and has recently been working on using speech processing techniques for unwritten languages. Prof. Black was an elected member of the ISCA board (2007-2015). He has over 200 refereed publications and is one of the highest cited authors in his field.

===============
Co-sponsor:
IEEE Signal Processing Society,
Bangalore Chapter
===============

November 21st, 2016

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY, BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2016 – SPECIAL LECTURE – 16

———————————————————————————————-

Title: Bandlimited Field Estimation from Samples Recorded by Location-Unaware Sensors
Speaker: Prof. Animesh Kumar, IIT Bombay
Venue: ECE Golden Jubilee Hall, IISc
Date and time: Thursday, 24 Nov. 2016, 4pm (Tea/Coffee at 3.45pm)

Abstract:
Remote sensing with a distributed array of stationary sensors or a mobile sensor has been of great interest. With the advent of the Internet of Things (IoT) for sensing applications in smart cities, the topic at large will become even more interesting. This talk will introduce and propose solutions to a fundamental question: can a spatial field be estimated from samples taken at unknown sampling locations?

In this talk, we will discuss recent works where a spatially bandlimited field over a finite support is sampled at unknown locations, and the ensuing set of samples has to be used to estimate the spatially bandlimited field generating them. It is assumed that the unknown sampling locations are obtained as statistical realizations of a random process. The statistics of the sampling locations are then leveraged to estimate the bandlimited field in question. Two models of sampling locations will be explored: (i) a scattering scenario where sensors are deployed uniformly at random in an interval of interest; and (ii) a mobile sampling scenario where a location-unaware mobile sensor records spatial field values on a renewal process with unknown distribution. In this unknown-sampling-location setup, a universal estimate for the field will be developed and its mean-squared error (distortion) will be analyzed as a function of the average number of field samples collected (i.e., oversampling). In both these sampling models, the effect of additive measurement noise will also be examined.
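The following toy sketch illustrates the flavor of such a universal estimate under the first model, with the simplifying assumption that the samples are available in spatial order: the i-th ordered sample is treated as if it sat at the grid point i/n, and the Fourier coefficients of the field are estimated by a Riemann sum. All parameters are illustrative, and this is not the exact estimator from the talk.

# Toy sketch: estimate a bandlimited field from samples taken at unknown,
# i.i.d. uniform locations by pretending the ordered samples are equispaced.
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 2000                      # field bandwidth (harmonics), # samples

# Ground-truth bandlimited field on [0,1): f(x) = sum_k a_k exp(2j*pi*k*x),
# with conjugate-symmetric coefficients so the field is real-valued.
a = rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)
a = (a + a[::-1].conj()) / 2
ks = np.arange(-K, K + 1)
f = lambda x: (a[None, :] * np.exp(2j * np.pi * np.outer(x, ks))).sum(axis=1)

x = np.sort(rng.uniform(0, 1, n))   # unknown locations, but in spatial order
y = f(x).real                       # the recorded samples

# Universal-style estimate: treat the i-th ordered sample as if taken at
# grid point i/n and estimate Fourier coefficients by a Riemann sum.
grid = np.arange(n) / n
a_hat = np.exp(-2j * np.pi * np.outer(ks, grid)) @ y / n

xx = np.linspace(0, 1, 512, endpoint=False)
f_hat = (a_hat[None, :] * np.exp(2j * np.pi * np.outer(xx, ks))).sum(axis=1)
mse = np.mean((f(xx).real - f_hat.real) ** 2)
print(f"distortion (MSE) with n={n}: {mse:.4f}")

Increasing n (oversampling) drives the distortion down, which is exactly the trade-off the talk analyzes.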

Bio:
Animesh Kumar is an Associate Professor in the Department of Electrical Engineering at the Indian Institute of Technology Bombay. He obtained his Ph.D. degree in Electrical Engineering and Computer Sciences from the University of California, Berkeley. He is an Affiliate Member of the IEEE Signal Processing Society SPTM Technical Committee. His current research interests include sampling theory and quantization, statistical and distributed signal processing, and TV white space.
———————————————————————————————-

October 13th, 2016

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY, BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2016 – SPECIAL LECTURE – 8

———————————————————————————————-

Title: Inference through Sparse Sensing
Speaker: Prof. Geert Leus, Delft University of Technology
Venue: ECE Golden Jubilee Hall, IISc
Date and time: 18 Oct. 2016, 3pm (Tea/Coffee at 2.45pm)

Abstract:
Ubiquitous sensors generate prohibitively large data sets. Large volumes of such data are nowadays generated by a variety of applications such as imaging platforms, mobile devices, surveillance cameras, social networks, and power networks, to name a few. In this era of data deluge, it is of paramount importance to gather only the data that is informative for a specific task, in order to limit the required sensing cost as well as the related costs of storing, processing, or communicating the data. The main goal of this talk is therefore to present topics that transform classical sensing methods, often based on Nyquist-rate sampling, into more structured low-cost sparse sensing mechanisms designed for specific inference tasks, such as estimation, filtering, and detection. More specifically, we present fundamental tools to achieve the lowest sensing cost with a guaranteed performance for the task at hand. Applications can be found in the areas of radar, multi-antenna communications, remote sensing, and medical imaging.
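As one concrete example of the kind of tool involved (a classical textbook method, not necessarily the specific designs presented in the talk), the sketch below greedily selects K of M candidate sensors for a linear Gaussian estimation task so as to maximize the log-determinant of the Fisher information matrix, a standard proxy for estimation performance.

# Greedy sensor selection for a linear Gaussian model y_i = a_i^T x + noise:
# pick the K rows maximizing log det of the Fisher information matrix (FIM).
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 100, 8, 15                # candidate sensors, unknowns, budget
A = rng.standard_normal((M, N))     # candidate measurement vectors (rows)

chosen, F = [], 1e-6 * np.eye(N)    # tiny prior keeps F invertible
for _ in range(K):
    idx = [i for i in range(M) if i not in chosen]
    # Matrix determinant lemma: the rank-one update a_i a_i^T increases
    # log det(F) by log(1 + a_i^T F^{-1} a_i).
    gains = [np.log1p(A[i] @ np.linalg.solve(F, A[i])) for i in idx]
    best = idx[int(np.argmax(gains))]
    chosen.append(best)
    F = F + np.outer(A[best], A[best])

print("selected sensors:", sorted(chosen))
print("log det FIM:", np.linalg.slogdet(F)[1])

The log-det objective is submodular, which is why such greedy selection comes with performance guarantees relative to the (combinatorial) optimum.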

Bio:
Geert Leus received the M.Sc. and Ph.D. degrees in Applied Sciences from the Katholieke Universiteit Leuven, Belgium, in June 1996 and May 2000, respectively. Currently, Geert Leus is an “Antoni van Leeuwenhoek” Full Professor at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology, The Netherlands. His research interests are in the broad area of signal processing. Geert Leus received a 2002 IEEE Signal Processing Society Young Author Best Paper Award and a 2005 IEEE Signal Processing Society Best Paper Award. He is a Fellow of the IEEE and a Fellow of EURASIP. Geert Leus was the Chair of the IEEE Signal Processing for Communications and Networking Technical Committee, and an Associate Editor for the IEEE Transactions on Signal Processing, the IEEE Transactions on Wireless Communications, the IEEE Signal Processing Letters, and the EURASIP Journal on Advances in Signal Processing. Currently, he is a Member-at-Large of the Board of Governors of the IEEE Signal Processing Society and a member of the IEEE Sensor Array and Multichannel Technical Committee. He also serves as the Editor-in-Chief of the EURASIP Journal on Advances in Signal Processing.

ALL ARE WELCOME
———————————————————————————————-

October 8th, 2016

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY BANGALORE CHAPTER

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2016 – IEEE DISTINGUISHED LECTURE

———————————————————————————————-

Dear All:

Title: Learning Sparsifying Transforms for Signal, Image, and Video Processing

Speaker: Prof. Yoram Bresler, Dept. of ECE, University of Illinois at Urbana-Champaign

Time: 1600-1700 hrs (coffee/tea at 3.45pm)

Date: Monday, 17 Oct. 2016

Venue: Golden Jubilee Seminar Hall, Dept. of ECE, Indian Institute of Science Bangalore

Abstract:
The sparsity of signals and images in a certain transform domain or dictionary has been exploited in many applications in signal and image processing, including compression, denoising, and notably in compressed sensing, which enables accurate reconstruction from undersampled data. These various applications used sparsifying transforms such as the DCT, wavelets, curvelets, and finite differences, all of which had a fixed, analytical, data-independent form.

Recently, sparse representations that are directly adapted to the data have become popular, especially in applications such as image and video denoising and inpainting. While synthesis dictionary learning has enjoyed great popularity and analysis dictionary learning too has been explored, these methods involve a repeated sparse coding step, which is NP-hard, and heuristics for its approximation are computationally expensive. In this talk we describe our work on an alternative approach: sparsifying transform learning, in which a sparsifying transform is learned from data. The method provides efficient computational algorithms with exact closed-form solutions for the alternating optimization steps, and with theoretical convergence guarantees. The method scales better than dictionary learning with problem size and dimension, and in practice provides orders-of-magnitude speed improvements and better image quality in image processing applications. Variations on the method include the learning of a union of transforms, and online versions.
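To give a feel for the closed-form alternation, here is a toy sketch of sparsifying transform learning in the simplified orthonormal-transform setting, where the sparse coding step is exact hard thresholding and the transform update is an orthogonal Procrustes solution. It is an illustration only; the formulations in the talk (well-conditioned, union-of-transforms, online) are richer.

# Toy orthonormal sparsifying transform learning: alternate closed-form
# sparse coding (hard thresholding) and transform update (Procrustes).
import numpy as np

rng = np.random.default_rng(2)
n, N, s = 16, 500, 3                # patch size, # training vectors, sparsity
X = rng.standard_normal((n, N))     # training data (e.g., image patches)

def sparsify(R, s):
    # Keep the s largest-magnitude entries in each column, zero the rest.
    thresh = -np.sort(-np.abs(R), axis=0)[s - 1]
    return np.where(np.abs(R) >= thresh, R, 0.0)

W = np.linalg.qr(rng.standard_normal((n, n)))[0]  # init: random orthonormal
for _ in range(30):
    Z = sparsify(W @ X, s)                        # sparse coding step
    # Transform update: minimize ||W X - Z||_F over orthonormal W, i.e.
    # orthogonal Procrustes via the SVD of X Z^T.
    U, _, Vt = np.linalg.svd(X @ Z.T)
    W = Vt.T @ U.T

R = W @ X
err = np.linalg.norm(R - sparsify(R, s)) / np.linalg.norm(X)
print(f"relative sparsification residual after 30 iterations: {err:.3f}")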

We describe applications to image representation, image and video denoising, and inverse problems in imaging, demonstrating improvements in performance and computation over state-of-the-art methods.

Bio:
Yoram Bresler received the B.Sc. (cum laude) and M.Sc. degrees from the Technion, Israel Institute of Technology, in 1974 and 1981 respectively, and the Ph.D. degree from Stanford University, in 1986, all in Electrical Engineering. In 1987 he joined the University of Illinois at Urbana-Champaign, where he currently holds the position of GEBI Founder Professor of Electrical and Computer Engineering, and Professor in the Department of Bioengineering and at the Coordinated Science Laboratory. He is also President and Chief Technology Officer at InstaRecon, Inc., a startup he co-founded to commercialize breakthrough technology for tomographic reconstruction developed in his academic research. His current research interests include multi-dimensional and statistical signal processing and their applications to inverse problems in imaging, in particular compressed sensing and computed tomography, and machine learning in signal processing.

Dr. Bresler has served on the editorial boards of a number of journals, including the IEEE Transactions on Signal Processing, the IEEE Journal on Selected Topics in Signal Processing, Machine Vision and Applications, and the SIAM Journal on Imaging Science, and on various committees of the IEEE. Dr. Bresler is a Fellow of the IEEE and of the AIMBE. He received two Best Journal Paper Awards from the IEEE Signal Processing Society, and a paper he coauthored with one of his students received the Young Author Award from the same society in 2002. He is the recipient of a 1991 NSF Presidential Young Investigator Award, the Technion (Israel Institute of Technology) Fellowship in 1995, and the Xerox Senior Award for Faculty Research in 1998. He was named a University of Illinois Scholar in 1999, appointed as an Associate at the Center for Advanced Study of the University in 2001-02, and a Faculty Fellow at the National Center for Supercomputing Applications (NCSA) in 2006. In 2016 he was appointed an IEEE Signal Processing Society Distinguished Lecturer.

———————————————————————————————-

ALL ARE WELCOME
———————————————————————————————-

June 29th, 2016

The IEEE Signal Processing Society, Bangalore Chapter and Department of
Electrical Engineering Indian Institute of Science invite you to the
following talk:

Title : Enhancing clinical voice assessment with smartphone-based ambulatory voice monitoring

Speaker: Dr Daryush Mehta
Assistant Biomedical Engineer,
Center for Laryngeal Surgery and Voice Rehabilitation,
Massachusetts General Hospital (MGH), Boston, USA
Time: 11:00 hrs, Friday, 1 July 2016
Venue: Multimedia Classroom, Department of Electrical Engineering, IISc.

Abstract :
An estimated 30% of the adult U.S. population suffers from a voice disorder at some point in their lives, often experiencing significant communication disabilities with far-reaching social, professional, and personal consequences. Most voice disorders are chronic or recurring conditions and result from inefficient and/or abusive patterns of vocal behavior, termed vocal hyperfunction. Thus, an ongoing clinical research goal is the prevention, diagnosis, and treatment of vocal hyperfunction through noninvasive, long-term monitoring of an individual’s daily voice use. During this talk, I will present my work investigating vocal hyperfunction in voice patients and matched healthy controls using smartphone-based ambulatory voice monitoring. Voice use and vocal function measures were derived from neck-surface acceleration recordings using vocal dose theory and novel impedance-based acoustic modeling to yield glottal airflow estimates. Results indicate that the clinical treatment of vocal hyperfunction would be improved by the ability to unobtrusively monitor and quantify detrimental voice use and simultaneously provide real-time biofeedback, thereby facilitating the learning of healthier vocal behaviors. Future research aims to enhance clinical voice assessment through integrating innovations in wearable sensor technology and laryngeal endoscopic imaging.
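As a small illustration of one ingredient, vocal dose theory, the sketch below computes two standard dose measures (time dose: total phonation time; cycle dose: total number of vocal-fold vibratory cycles) from per-frame voicing decisions and f0 estimates. The frame data here are invented, and this is far simpler than the ambulatory pipeline described in the talk.

# Toy vocal dose computation from per-frame f0 estimates (0 = unvoiced),
# e.g. as produced by a pitch tracker on neck-surface acceleration data.
import numpy as np

frame_len = 0.05  # seconds per analysis frame (illustrative)
f0 = np.array([0, 0, 210, 215, 220, 0, 198, 205, 0, 0, 190, 195, 200, 0])

voiced = f0 > 0
time_dose = voiced.sum() * frame_len          # seconds of phonation
cycle_dose = (f0[voiced] * frame_len).sum()   # total vibratory cycles

print(f"time dose:  {time_dose:.2f} s")
print(f"cycle dose: {cycle_dose:.0f} cycles")
print(f"phonation ratio: {voiced.mean():.0%} of frames voiced")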

Biography:
Daryush Mehta is Assistant Biomedical Engineer at the Center for Laryngeal Surgery and Voice Rehabilitation at the Massachusetts General Hospital (MGH), Instructor in Surgery at Harvard Medical School, and Adjunct Assistant Professor at the MGH Institute of Health Professions. Daryush received his PhD in Speech and Hearing Bioscience and Technology from the Harvard–MIT Division of Health Sciences and Technology (2010), Master’s degree in Electrical Engineering and Computer Science from MIT (2006), and Bachelor’s degree in Electrical Engineering from University of Florida (2003). Daryush’s research interests include high-speed video imaging of vocal vibration, speech signal processing, and clinical voice disorder assessment. Read all about his work at web.mit.edu/dmehta/www.

===================================
Co-sponsor:
IEEE Signal Processing Society
Bangalore chapter
===================================

June 7th, 2016

The IEEE Signal Processing Society, Bangalore Chapter and Department of
Electrical Engineering Indian Institute of Science invite you to the
following talk:

Title : Understanding the Perception and Impact of Social Signals

Speaker: Dr Tanaya Guha
Assistant Professor,
IIT Kanpur
Time: 11:00 hrs, Tuesday, 14 June 2016
Venue: Multimedia Classroom, Department of Electrical Engineering, IISc.

Abstract :
Understanding the diverse behavioral and social patterns that exist around us can improve our social experience and interaction in many ways. With this broader objective in mind, we focus on two important but different problems relevant to social signal processing. The first problem involves social perception and its impact on media. Here, we attempt to quantify a very subjective and often not-so-well-defined concept: gender representation and bias. Starting with a content analysis of popular Hollywood movies, we show how gender representation can be objectively measured from multimodal cues. In the next part, we focus on the production and perception of social signals in autism. Our goal is to understand how behavioral signals, such as facial expressions, produced by children with autism are perceived by healthy observers. Methodologies, approaches, results and challenges related to both of these problems will be discussed.
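As a tiny illustration of what "objectively measured" can mean here, the sketch below computes one such descriptive statistic, the share of total speaking time by gender, from a list of diarized dialogue segments. The segments are invented for the example, and the actual study draws on far richer multimodal cues.

# Toy measurement of gender representation: share of dialogue time by
# gender, from invented diarized segments (gender, start s, end s).
from collections import defaultdict

segments = [("F", 0.0, 4.2), ("M", 4.2, 11.0), ("M", 11.5, 19.3),
            ("F", 19.3, 21.0), ("M", 21.4, 30.0)]

talk_time = defaultdict(float)
for gender, start, end in segments:
    talk_time[gender] += end - start

total = sum(talk_time.values())
for gender, secs in sorted(talk_time.items()):
    print(f"{gender}: {secs:.1f} s ({secs / total:.0%} of dialogue time)")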

Biography:
Tanaya Guha is currently an Assistant Professor in the Department of Electrical Engineering at IIT Kanpur. She is also a part of the computer vision group at IITK. Prior to joining IITK, she was a postdoctoral fellow at the Signal Analysis and Interpretation Lab (SAIL), University of Southern California, Los Angeles, from 2013 to 2015. She received her PhD in Electrical and Computer Engineering from the University of British Columbia, Vancouver, in 2013. She was a recipient of the Mensa Canada Woodhams Memorial Scholarship, the Google Anita Borg Memorial Scholarship and the Amazon Grace Hopper Celebration Scholarship. Her current research interests include human emotion and behavior analysis, multimodal signal processing, and image analysis.

===================================
Co-sponsor:
IEEE Signal Processing Society
Bangalore chapter
===================================

May 7th, 2016

IEEE Bangalore Section, IEEE Signal Processing Society Bangalore Chapter and Department of ECE, Indian Institute of Science, Bangalore

Cordially Invite you to a Seminar

Title: “Video action attribute learning using hierarchical subspace clustering.”

Speaker: Dr. Raghuveer M. Rao, FIEEE, US Army Research Laboratory

Day/Date/Time: Tue, 10 May 2016, 1600 hrs

Tea: 1545 hrs

Venue: Golden Jubilee Seminar Hall, ECE Department, IISc

Abstract: An approach is presented for unsupervised categorization of human action attributes using a hierarchical union-of-subspaces model. Semantically meaningful labels can be attached to these learned attributes. Decomposition of actions into a sequence of attributes can potentially form the basis for activity recognition and summarization. The talk provides an overview of the approach along with examples.
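For readers unfamiliar with union-of-subspaces models, the following toy sketch fits one by a K-subspaces style alternation: assign each feature vector to the subspace with the smallest projection residual, then refit each subspace by PCA. It is a generic, flat, non-hierarchical illustration with synthetic data, not the method presented in the talk.

# Toy K-subspaces clustering: fit a union of K d-dimensional subspaces.
import numpy as np

rng = np.random.default_rng(3)
D, d, K, n = 30, 3, 4, 400          # ambient dim, subspace dim, #subspaces

# Synthetic data: points drawn from K random d-dimensional subspaces.
bases = [np.linalg.qr(rng.standard_normal((D, d)))[0] for _ in range(K)]
X = np.hstack([B @ rng.standard_normal((d, n // K)) for B in bases])

U = [np.linalg.qr(rng.standard_normal((D, d)))[0] for _ in range(K)]
for _ in range(20):
    # Assignment: residual of projecting each column onto each subspace.
    res = np.stack([np.linalg.norm(X - B @ (B.T @ X), axis=0) for B in U])
    labels = res.argmin(axis=0)
    # Refit: top-d principal directions of each cluster.
    for k in range(K):
        Xk = X[:, labels == k]
        if Xk.shape[1] >= d:
            U[k] = np.linalg.svd(Xk, full_matrices=False)[0][:, :d]

print("cluster sizes:", np.bincount(labels, minlength=K))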

About the Speaker:
Dr. Raghuveer Rao has a B.E. degree in Electronics & Communication Engineering from Mysore University, an M.E. degree in Electrical Communication Engineering from the Indian Institute of Science, and a Ph.D. in Engineering from the University of Connecticut. He was a member of technical staff at AMD Inc. from 1985 to 1987. He joined the Rochester Institute of Technology in 1987, where he was a Professor of Electrical Engineering and a member of the Imaging Science faculty until 2008. Since November 2008, he has been with the Army Research Laboratory in Adelphi, MD, where he is currently the Chief of the Image Processing Branch. Dr. Rao has held visiting appointments with the Indian Institute of Science, Princeton University, the Air Force Research Laboratory and the Naval Surface Warfare Center. He is an evaluator of electrical engineering for ABET, the leading accreditation body for undergraduate engineering programs. He is a recipient of the IEEE Signal Processing Society Paper Award and an elected Fellow of the IEEE and SPIE.

ALL ARE INVITED

April 29th, 2016

———————————————————————————————-

IEEE SIGNAL PROCESSING SOCIETY BANGALORE CHAPTER, IEEE BANGALORE SECTION

&

DEPARTMENT OF ECE, INDIAN INSTITUTE OF SCIENCE

2016 – SPECIAL LECTURE – 4

———————————————————————————————-

Title: Comprehensive Human State Modeling and Its Applications

Speaker: Dr. Ajay Divakaran, SRI International

Time: 1600-1700 (coffee/tea at 3.45pm)

Date: 03 May 2016

Venue: ECE Golden Jubilee Seminar Hall, IISc

Abstract: We present a suite of multimodal techniques for assessment of human behavior with cameras and microphones. These techniques drive the sensing module of an interactive simulation trainer in which the trainee has lifelike interaction with a virtual character so as to learn social interaction. We recognize facial expressions, gaze behaviors, gestures, postures, speech and paralinguistics in real time and transmit the results to the simulation environment, which reacts to the trainee’s behavior in a manner that serves the overall pedagogical purpose. We will describe the techniques developed and the results, comparable to or better than the state of the art, obtained for each of the behavioral cues, as well as identify avenues for further research. Behavior sensing in social interactions poses a few key challenges for each of the cues, including the large number of possible behaviors, the high variability in execution of the same behavior within and across individuals, and the need for real-time execution. Furthermore, we have the challenge of appropriately fusing the multimodal cues so as to arrive at a comprehensive assessment of the behavior at multiple time scales. We will also discuss our approach to social interaction modeling, using our sensing capability to monitor and model dyadic interactions. We will present a video of the demonstration of the end-to-end simulation trainer.
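As a minimal illustration of one of the listed challenges, fusing multimodal cues into a single assessment, the sketch below performs simple weighted late fusion of per-cue class probabilities. The cue names, weights, and scores are invented placeholders, and the actual system's fusion is considerably more sophisticated.

# Toy weighted late fusion of per-cue posteriors over a trainee's state.
import numpy as np

states = ["engaged", "neutral", "agitated"]
cue_probs = {
    "facial_expression": np.array([0.6, 0.3, 0.1]),
    "gaze":              np.array([0.5, 0.4, 0.1]),
    "paralinguistics":   np.array([0.2, 0.3, 0.5]),
}
# Invented per-cue reliability weights (sum to 1).
weights = {"facial_expression": 0.4, "gaze": 0.3, "paralinguistics": 0.3}

fused = sum(w * cue_probs[c] for c, w in weights.items())
print("fused state estimate:", states[int(np.argmax(fused))], fused.round(2))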

About the Speaker:

Ajay Divakaran, Ph.D., is a Program Director and leads the Vision and Multi-Sensor group in SRI International’s Vision and Learning Laboratory. Divakaran is currently the principal investigator for a number of SRI research projects. His work includes multimodal modeling and analysis of affective, cognitive, and physiological aspects of human behavior, interactive virtual reality-based training, tracking of individuals in dense crowds and multi-camera tracking, technology for automatic food identification and volume estimation, and audio analysis for event detection in open-source video. He has developed several innovative technologies for multimodal systems in both commercial and government programs during the course of his career. Prior to joining SRI in 2008, Divakaran worked at Mitsubishi Electric Research Labs for 10 years, where he was the lead inventor of the world’s first sports highlights playback-enabled DVR. He also oversaw a wide variety of product applications for machine learning. Divakaran was named a Fellow of the IEEE in 2011 for his contributions to multimedia content analysis. He developed techniques for recognition of agitated speech for his work on automatic sports highlights extraction from broadcast sports video. He established a sound experimental and theoretical framework for human perception of action in video sequences as lead inventor of the MPEG-7 video standard motion activity descriptor. He serves on Technical Program Committees of key multimedia conferences, and served as an associate editor of IEEE Transactions on Multimedia from 2007 to 2010. He has authored two books and has more than 100 publications to his credit, as well as more than 40 issued patents. He was a research associate at the ECE Dept., IISc, from September 1994 to February 1995. He was a scientist with Iterated Systems Incorporated, Atlanta, GA, from 1995 to 1998. Divakaran received his M.S. and Ph.D. degrees in electrical engineering from Rensselaer Polytechnic Institute. His B.E. in electronics and communication engineering is from the University of Jodhpur in India.

—————————————————————————