Sponsored by the Schools of Engineering and Computer Science

The following are the Special Session Proposals accepted for IMVIP 2023.

Solutions for Machine Vision Data Privacy Challenge

In today’s world of Smart-Homes and Smart-Cities we are constantly being observed by Machine Vision systems, yet data privacy regulations increasingly restrict the potential to apply new computer vision technologies. This is due in part to the broad interpretation of personally identifiable data (PID) under the EU GDPR: in a nutshell, any biometric that can be determined from image or video data is considered PID. Finding practical, working solutions to the data privacy challenge is very important, and this conference session plans to explore some new technology solutions.

Contact Person(s) 
Prof. Peter Corcoran

Example research topics suitable for inclusion in this special session include (but are not limited to):

  • Privacy by design approaches for Machine Vision
  • In-Device Machine Vision/Embedded Edge-AI Algorithms
  • Novel multi-spectral Machine Vision techniques
  • Neuromorphic Imaging and Event Cameras applied for Sensing Applications
  • Biometric detection combined with obfuscation or anonymization algorithms (see the sketch after this list)
  • Authentication algorithms combined with a privacy management framework
  • Privacy for Machine Vision in the Smart-Home/Smart-City
  • Application specific use cases:
    • Privacy in Medical Imaging
    • Driver Monitoring/Drowsiness Sensing
    • Privacy and Sousveillance
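
By way of illustration, the following is a minimal sketch of one of the obfuscation approaches listed above: detecting faces and blurring them before any frame leaves the device. It assumes OpenCV (opencv-python); the input file name is a hypothetical placeholder and the Haar cascade is the stock one shipped with the library.

    # Minimal sketch: face obfuscation as a privacy-by-design step.
    import cv2

    def anonymize_faces(image, kernel=(51, 51)):
        """Detect faces and replace each face region with a heavy Gaussian blur."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            image[y:y + h, x:x + w] = cv2.GaussianBlur(image[y:y + h, x:x + w], kernel, 0)
        return image

    frame = cv2.imread("frame.jpg")  # hypothetical input frame
    cv2.imwrite("frame_anonymized.jpg", anonymize_faces(frame))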

Exploring the Role of Deep Learning in Medical Image Processing

Medical image processing is a rapidly growing and evolving area of computer vision, with many exciting developments and applications emerging in recent years. It plays a crucial role in the diagnosis, monitoring, and treatment of a wide range of diseases and conditions. In recent years, Machine Learning (ML) techniques such as Deep Learning (DL) have proved effective at analysing medical imaging data (e.g., X-rays, CT scans, and MRI scans), leading to better diagnosis, treatment, and patient outcomes.
This special session will bring together experts from both the machine learning and medical imaging fields to discuss the latest research findings, challenges, and opportunities in this exciting area. It will also include work using data augmentation techniques to remedy the lack of labelled medical data needed to train ML models.

Contact Person(s)
Dr. Malika Bendechache
Dr. Hossein Javidnia

Potential topics include but are not limited to the following:

  • Medical image processing
  • Application of Machine Learning (ML) and Deep Learning (DL) to medical imaging
  • ML for medical image processing and segmentation
  • DL for medical image registration and fusion
  • Transfer learning for medical image analysis (see the sketch after this list)
  • Synthetic medical image generation
  • The role of synthetic data in medical image analysis
  • DL for real-time medical image segmentation and classification
  • DL for multi-modal medical image analysis
  • Addressing class imbalance in medical image analysis using ML techniques
  • Computer-aided diagnosis for medical conditions
  • Data augmentation
  • Medical data augmentation using generative adversarial networks (GANs)
  • DL for medical data visualization and exploration
  • Personalized medicine: DL for patient stratification and treatment selection
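
As an illustration of the transfer learning topic above, here is a minimal sketch of fine-tuning a pretrained backbone for a small medical image classification task. It assumes PyTorch and torchvision (0.13 or later for the weights API); the two-class setup and training-loop comment are illustrative placeholders, not a prescribed method.

    # Minimal sketch of transfer learning for a two-class medical imaging task.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_model(num_classes: int = 2) -> nn.Module:
        """Start from ImageNet weights and retrain only the classification head."""
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False                 # freeze the pretrained backbone
        model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task-specific head
        return model

    model = build_model()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # Per training batch: logits = model(images); loss = criterion(logits, labels); loss.backward(); ...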

AR/VR technologies for Healthcare, Older Adult Care and Behavioural Analysis/Interventions

In this Special Session we seek to explore the potential of AR/VR technologies to support novel healthcare therapies and interventions. Examples include the use of AR/VR environments to provide memory therapies for older adults, or the benefits of AR/VR Exergames for physical therapies. We also welcome other novel uses of AR/VR technology for behavioural intervention studies and therapies in the fields of healthcare and psychology. Please contact the special session chairs for additional information or to discuss your submission to this special session.

Contact Person(s)
Dr. Attracta Brennan

Quality of Immersive Multimedia Experiences

This special session will bring together experts from academia and industry to present and discuss current and future research on immersive multimedia quality, quality of experience (QoE) and user experience (UX) for interactive and immersive multimedia experiences. It will foster exchanges between multidisciplinary communities across a range of technical, human-centred and application domains. An overarching focus of the session will be to determine and understand the various human, system and context factors that contribute to the utility of these emerging technologies and experiences.

Contact Person(s)
Dr. Niall Murray

Potential topics include but are not limited to the following:

  • QoE of Immersive eXtended Reality Experiences (incl. VR, AR, MR).
  • QoE of Multisensory Multimedia Experiences (incl. haptic, olfaction etc.).
  • Extended Reality applied to different application domains (health, education, tourism, smart manufacturing etc.)
  • Games user research and experience
  • Physiological metrics and QoE
  • Artificial Intelligence and QoE.
  • Quality of Immersive Audio experiences.
  • Novel assessment and evaluation methodologies.
  • Human behaviour analysis and QoE.
  • New Interaction paradigms and synchronization.
  • Quality of Interaction.
  • Affective Multimedia Experiences.
  • QoE-based management in a networked world.
  • Personalised multimedia experiences

Data Augmentation Techniques for CNN Models

The performance of DL models can still be limited due to a lack of diverse and representative training samples. Data augmentation techniques address this challenge by artificially expanding the training dataset through various transformations and modifications applied to existing images. By increasing variability and diversity, data augmentation enhances the model's ability to generalize and learn robust features.

This special session at the IMVIP conference focuses on data augmentation advancements, encompassing innovative methods, domain-specific approaches, automated strategies, evaluation metrics, and their influence on computer vision tasks. Emphasis will be placed on improving model performance through the application of data augmentation techniques to image, video, non-image, and hybrid data sources.
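
To make the idea concrete, the following is a minimal sketch of a standard augmentation pipeline for CNN training. It assumes torchvision; the dataset directory "train_images/" is a hypothetical placeholder.

    # Minimal sketch of an image augmentation pipeline for CNN training.
    from torchvision import datasets, transforms
    from torch.utils.data import DataLoader

    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop/zoom
        transforms.RandomHorizontalFlip(p=0.5),                # mirror half the images
        transforms.ColorJitter(brightness=0.2, contrast=0.2),  # photometric variation
        transforms.RandomRotation(degrees=10),                 # small rotations
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("train_images/", transform=train_transform)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    # Each epoch now sees a differently transformed copy of every image,
    # which provides the added variability and diversity described above.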

Contact Person(s):
Teerath Kumar Menghwar

Perception and Sensing with Machine Vision

Machine vision has emerged as a powerful tool for perception and sensing in various domains, revolutionizing industries such as autonomous driving, manufacturing, and robotics. This special session, titled "Perception and Sensing with Machine Vision," aims to explore the latest advancements in the field and highlight state-of-the-art methods that contribute to the development of robust perception systems.

The session will showcase cutting-edge techniques and technologies, including deep learning, sensor fusion, and multi-modal data analysis, that enable machine vision to enhance perception and sensing capabilities across diverse applications such as autonomous vehicles and robotics. For example, in the context of autonomous driving, the session will examine the pivotal role of machine vision in enabling vehicles to perceive and understand their environment, make intelligent decisions, and ensure the safety of passengers.
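
As a deliberately simplified illustration of feature-level sensor fusion, the sketch below concatenates embeddings from two modalities, such as camera and LiDAR features, before a joint prediction head. It assumes PyTorch; the feature dimensions and two-class output are illustrative only.

    # Minimal sketch of late (feature-level) fusion of two sensing modalities.
    import torch
    import torch.nn as nn

    class LateFusionNet(nn.Module):
        def __init__(self, img_dim=512, lidar_dim=128, num_classes=2):
            super().__init__()
            self.img_encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
            self.lidar_encoder = nn.Sequential(nn.Linear(lidar_dim, 64), nn.ReLU())
            # concatenate the two embeddings and classify jointly
            self.head = nn.Linear(256 + 64, num_classes)

        def forward(self, img_feat, lidar_feat):
            fused = torch.cat([self.img_encoder(img_feat),
                               self.lidar_encoder(lidar_feat)], dim=1)
            return self.head(fused)

    model = LateFusionNet()
    logits = model(torch.randn(8, 512), torch.randn(8, 128))  # dummy batch of 8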

Moreover, the session will investigate the impact of machine vision on manufacturing processes, emphasizing its contribution to quality control, optimized production, and increased efficiency. Additionally, the session will delve into the latest trends in human-robot collaboration, emphasizing the role of machine vision in enabling robots to perceive and interact with humans and their surroundings effectively. By facilitating better perception and sensing capabilities, machine vision transforms human-robot collaboration across a range of applications, from industrial settings to healthcare and service robotics.

This special session will bring together researchers, experts, and industry professionals to share their knowledge, experiences, and latest findings in the field of perception and sensing with machine vision. Through engaging presentations and discussions, we will foster interdisciplinary collaboration, encouraging fruitful exchanges between academia and industry.

Contact Person(s):
Ihsan Ullah
Ganesh Sistu
Arindam Das
Michael Madden

Potential topics include but are not limited to the following:

  • Deep learning-based approaches for perception and sensing
  • Sensor fusion techniques for enhancing perception capabilities.
  • Multi-modal data analysis for robust perception systems
  • Real-time scene understanding and situation awareness using machine vision.
  • Advanced image processing techniques for quality control in manufacturing
  • Machine vision applications for optimizing production processes.
  • Human-robot interaction and collaboration enabled by machine vision.
  • Vision-based object recognition and tracking algorithms for robotics.
  • Machine vision systems for autonomous navigation in challenging environments
  • Machine vision in healthcare robotics: perception and sensing for medical applications.
  • Augmented reality and machine vision integration for enhanced perception
  • Machine vision for gesture and emotion recognition in human-robot interaction
  • Ethical considerations and challenges in machine vision-based perception systems
  • Integration of machine vision with decision-making algorithms
  • Ethical considerations and safety aspects of machine vision-enabled systems
  • Quality control and inspection using machine vision in manufacturing processes.
  • Human-machine interaction and collaboration in industrial robotics enabled by machine vision.

 

To submit your paper to a Special Session, please

1) Follow the submission guide for authors available here.

2) Select the specific Special Session you are interested in as your primary subject area. For example, if you are interested in submitting your paper to Special Session 1 (Solutions for Machine Vision Data Privacy Challenge), please select the following as your primary subject area:

   Special Session 1: Solutions for Machine Vision Data Privacy Challenge

3) Finally, upload your manuscript to the system.

Thank you.

On the first day of IMVIP 2023 we will have several innovative training workshops where you can learn about some of the latest techniques and methodologies to help inform your research and meet some leading researchers in the fields of Machine Vision and Image Processing.

Workshop #1: Synthetic Data Methods

In this workshop we will present different computer vision-based algorithms for generating large-scale synthetic data. The workshop will cover single-domain and multi-domain GANs, including StyleGAN2, StarGAN, CycleGAN and Pix2Pix. We will then introduce diffusion models and explain text-to-image translation using Stable Diffusion models. We will show demos of GANs and ControlNet diffusion models to illustrate how they can be used to generate synthetic test data. Lastly, we will also present results from our recent ChildGAN work.
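
For a flavour of the diffusion part of the workshop, here is a minimal sketch of text-to-image generation with a pretrained Stable Diffusion pipeline. It assumes the Hugging Face diffusers library and a CUDA GPU; the model id and prompt are illustrative and are not the workshop's own models or data.

    # Minimal sketch of text-to-image synthetic data generation with Stable Diffusion.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipe = pipe.to("cuda")

    prompt = "a photorealistic portrait of a person driving a car, daylight"
    images = pipe(prompt, num_images_per_prompt=4).images  # list of PIL images
    for i, img in enumerate(images):
        img.save(f"synthetic_{i}.png")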

Workshop #2: Large Scale Multimodal Image/Video Data Acquisition on Human Subjects

In this workshop you will get some insights into one of the large-scale data acquisitions undertaken by Xperi, one of the global leaders in Driver Monitoring technology. The company is currently undertaking a drowsiness study on c.500 human subjects, recording their activity on a state-of-the-art driving simulator. If you participate in this workshop you will get an overview of some of the data collection and annotation techniques used. You’ll also learn the importance of data synchronization between video data and other bio-sensor data, including multi-channel EEG.
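
As a simplified illustration of the synchronization problem (not Xperi's actual pipeline), the sketch below aligns 30 fps video frames with a 256 Hz EEG stream by timestamp. It assumes NumPy; the sample rates, channel count and random data are placeholders.

    # Minimal sketch of timestamp-based alignment of video frames and EEG samples.
    import numpy as np

    video_ts = np.arange(0, 10, 1 / 30)            # 30 fps camera, seconds
    eeg_ts = np.arange(0, 10, 1 / 256)             # 256 Hz EEG, seconds
    eeg_samples = np.random.randn(eeg_ts.size, 8)  # 8 hypothetical channels

    # For each frame, find the first EEG sample at or after the frame timestamp
    # (a full nearest-neighbour alignment would also check the preceding sample).
    idx = np.clip(np.searchsorted(eeg_ts, video_ts), 0, eeg_ts.size - 1)

    frame_aligned_eeg = eeg_samples[idx]   # one EEG row per video frame
    print(frame_aligned_eeg.shape)         # (300, 8)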

The workshop will be presented by Joe Lemley of Xperi together with members of his research team, and there will be opportunities for interactive discussion with the presenters.

Workshop #3: Sensing without Seeing: Event Cameras & Neuromorphic Vision Systems

Event-Cameras track individual changes in pixel illumination and export these as a stream of events. They are asynchronous and, as there are no ‘image frames’, they detect motion or changes in lighting at a much faster rate than conventional cameras. Because there are no ‘images’, they can be used in sensing applications without any need to export image frames or video data, which is very useful for applications where data privacy is needed.
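
To make the event-stream idea concrete, here is a minimal sketch that accumulates raw events (x, y, polarity, timestamp) into a per-pixel count image for visualisation. It assumes NumPy; the random events and 640x480 resolution are placeholders for a real camera's output.

    # Minimal sketch: accumulate an event stream into a 2D event-count image.
    import numpy as np

    H, W = 480, 640
    rng = np.random.default_rng(0)
    events = {                                        # 10k hypothetical events
        "x": rng.integers(0, W, 10_000),
        "y": rng.integers(0, H, 10_000),
        "p": rng.integers(0, 2, 10_000) * 2 - 1,      # polarity: +1 brighter, -1 darker
        "t": np.sort(rng.uniform(0, 0.01, 10_000)),   # timestamps within a 10 ms window
    }

    # Sum signed events per pixel over the window.
    frame = np.zeros((H, W), dtype=int)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    print(frame.min(), frame.max())  # net brightness change per pixel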

In this training session you’ll learn about the latest Event-Camera technology and see live demos of how it can be applied to facial analysis. You will learn about Event-Camera bias functions, which help to tune the camera’s sensitivity for different lighting and motion levels. Example use cases will include face detection and tracking, yawn detection, blink detection, human action and activity detection, and the fine-tuning of camera biases to optimize the performance of a prototype driver monitoring system based on Neuromorphic Vision.