IJCB Awards 2020

To acknowledge scientific excellence, several awards were given out at IJCB 2020 in different categories. Congratulations to all the winners!

  • IJCB 2020 Google Best Paper Award
  • Akinori F Ebihara, Kazuyuki Sakurai, Hitoshi Imaoka: Specular- and Diffuse-reflection-based Face Spoofing Detection for Mobile Devices

  • IJCB 2020 Google Best Paper Award Runner-Up
  • Kunbo Zhang, Zhenteng Shen, Yunlong Wang, Zhenan Sun: All-in-Focus Iris Camera With a Great Capture Volume

  • IJCB 2020 Qualcomm PC Chairs Choice Best Student Paper Award
  • Philipp Terhorst, Daniel Fahrmann, Naser Damer, Florian Kirchbuchner, Arjan Kuijper: Beyond Identity: What Information Is Stored in Biometric Face Templates?

  • IJCB 2020 Qualcomm PC Chairs Choice Best Student Paper Award Runner-Up
  • German Barquero, Carles Fernandez Tena, Isabelle Hupont: Long-Term Face Tracking for Crowded Video-Surveillance Scenarios

  • IAPR TC4 Best Biometric Student Paper Award
  • Fariborz Taherkhani, Veeru Talreja, Jeremy Dawson, Matthew Valenti, Nasser Nasrabadi: PF-cpGAN: Profile to Frontal Coupled GAN for Face Recognition in the Wild

  • IJCB/BTAS Five-Year Highest Impact Award
  • Swami Sankaranarayanan, Azadeh Alavi, Carlos D. Castillo, Rama Chellappa: Triplet Probabilistic Embedding for Face Verification and Clustering (from BTAS 2016)

  • IJCB 2020 Competition Winners
  • LivDet - Iris Competition 2020: Juan Tapia, Universidad de Santiago – Chile; Sebastian Gonzalez, TOC Biometrics – Chile
  • Sclera Segmentation Benchmarking Competition 2020: Sumanta Das, University of Engineering and Management, India; Ishita De Ghosh, Barrackpore Rastraguru Surendranath College, India

  • IJCB 2020 Audience’s Choice Day 1 Presentation Award
  • Philipp Terhorst, Daniel Fahrmann, Naser Damer, Florian Kirchbuchner, Arjan Kuijper: Beyond Identity: What Information Is Stored in Biometric Face Templates?

  • IJCB 2020 Audience’s Choice Day 2 Presentation Award
  • Yashasvi Baweja, Poojan B Oza, Pramuditha Perera, Vishal Patel: Anomaly Detection-Based Unknown Face Presentation Attack Detection

  • IJCB 2020 Audience’s Choice Day 3 Presentation Award
  • Shruti Nagpal, Maneet Singh, Mayank Vatsa, Richa Singh: De-biasing Existing Classification Models Using Diversity Blocks

  • IJCB Best Reviewers - Honorable Mentions
    • Allan Pinto
    • Alessandra Paulino
    • Ana Rebelo
    • Ana Sequeira
    • Andrey Kuehlkamp
    • Anil Jain
    • Christoph Busch
    • Henrique Sergio Costa
    • Hu Han
    • James Wayman
    • John Howard
    • Manuel Gunther
    • Maria De Marsico
    • PongChi Yuen
    • Ross Beveridge
    • Stephanie Schuckers
    • Terence Sim
    • Vutipong Areekul

Accepted Competitions

Liveness Detection Competition - Iris (LivDet-Iris 2020)

The LivDet-Iris competitions, held every two years, serve as the most important benchmarks in iris presentation attack detection (PAD) by offering (a) an independent assessment of the current state of the art in iris PAD, and (b) an evaluation protocol, including publicly available datasets of spoof and live iris images, that researchers can easily follow after the competition closes to compare their solutions with the LivDet-Iris winners. The main research question addressed by this competition is the current state of the art in detecting unknown attacks (i.e., when the spoof type is not given to the algorithm) on iris recognition systems. This is currently one of the most important research efforts related to the security of iris recognition systems.

Competition Website: http://www.iris2020.livdet.org/
Competition starting-ending: May 30 - July 15

Competition Schedule:
  • May 15, 2020: Announcement of the competition, including participation criteria, tasks, descriptions of the datasets, and evaluation metrics. Active promotion of the competition starts.
  • May 30, 2020: The datasets and the BEAT platform made available to the participants.
  • July 15, 2020: Participants' deadline for submission of final results. Optional: short descriptions of the methods of non-anonymous participants who want to co-author the IJCB paper.
  • August 15, 2020: The paper summarizing the competition submitted to IJCB.
Organizers:
  • University of Notre Dame, USA:
    • Dr. Adam Czajka, aczajka@nd.edu
    • Dr. Kevin Bowyer, kwb@nd.edu
    • Aidan Boyd, aboyd3@nd.edu
    • Joseph McGrath, jmcgrat3@alumni.nd.edu
  • Clarkson University, USA:
    • Dr. Stephanie Schuckers, sschucke@clarkson.edu
    • Priyanka Das, prdas@clarkson.edu
    • Sandip Purnapatra, purnaps@clarkson.edu
    • David Yambay, yambayda@clarkson.edu
  • Warsaw University of Technology, Poland:
    • Dr. Mateusz Trokielewicz, mateusz.trokielewicz@pw.edu.pl
  • Medical University of Warsaw, Poland:
    • Dr. Piotr Maciejewicz, piotr.maciejewicz@wum.edu.pl
  • Idiap Research Institute, Switzerland:
    • Dr. Sebastien Marcel, marcel@idiap.ch
    • Dr. Amir Mohammadi, amir.mohammadi@idiap.ch

Sclera Segmentation Benchmarking Competition (SSBC 2020)

Sclera biometrics have gained significant popularity among emerging ocular traits in the last few years. To establish this idea and to evaluate the potential of the trait, various works have been proposed in the literature, employing the sclera both individually and in combination with the iris. In spite of those initiatives, sclera biometrics need to be studied more extensively to ascertain their usefulness. Moreover, sclera segmentation should receive more attention, given the remaining challenges posed by cross-sensor and cross-resolution settings. To fulfill the aims mentioned above, to document recent developments, and to attract the attention and interest of researchers, we are hosting this competition, in which we aim to benchmark the performance of sclera segmentation on cross-sensor images as well as on low- and high-resolution images.

Competition Website: https://sites.google.com/view/ssbc2020/home
Competition starting-ending: May 31 - Aug 10

Competition Schedule:
  • Site opens 31st May 2020
  • Registration starts 31st May 2020
  • Test dataset available 31st May 2020
  • Registration closes 10th Aug 2020
  • Algorithm submission deadline 10th Aug 2020
  • Results and report announcement 15th Aug 2020
Organizers:
  • Indian Statistical Institute, Kolkata
    • Dr. Abhijit Das, abhijitdas2048@gmail.com
    • Dr. Umapada Pal, umapada@isical.ac.in
  • University of Ljubljana, Ljubljana, Slovenia
    • Dr. Matej Vitek, matej.vitek@fri.uni-lj.si
    • Dr. Peter Peer, peter.peer@fri.uni-lj.si
    • Dr. Vitomir Struc, vitomir.struc@fe.uni-lj.si

Face Morphing Attack Detection (MAD)

This competition is framed in the context of face recognition for electronic machine-readable travel documents (eMRTD), where biometric recognition has been widely introduced to increase security in border control procedures and to enable automatic verification at dedicated gates (Automated Border Control systems, or ABC gates) for the convenience of some categories of travelers. Face morphing detection is a fairly recent research topic, and few efforts have so far been devoted to the definition of public benchmarks and evaluation platforms. The evaluation proposed here focuses specifically on a high-quality dataset and on algorithm evaluation under realistic operational conditions. Several papers on morphing detection have been proposed recently, but they always report results on small, internally acquired datasets, which makes an objective evaluation and comparison of different approaches impossible. The outcomes of some studies show that algorithms usually perform well when tested on data similar to those used for training, but that performance degrades significantly when tested on new, unseen data (e.g., images created with other morphing algorithms, different image-processing techniques, different resolutions, etc.). An analysis of the true current state of the art is needed, and only an independent evaluation on high-quality and heterogeneous data can provide an objective assessment.

Competition Website: https://biolab.csr.unibo.it/fvcongoing/UI/Form/IJCB2020MAD.aspx
Competition starting-ending: June 1 - July 15

Competition Schedule:
  • Competition starts 1st June 2020
  • Competition submission deadline 15th July 2020
Organizers:
  • University of Bologna
    • Matteo Ferrara, matteo.ferrara@unibo.it
    • Annalisa Franco, annalisa.franco@unibo.it
    • Davide Maltoni, davide.maltoni@unibo.it
  • Norwegian University of Science and Technology
    • Christoph Busch, christoph.busch@ntnu.no
    • Raghavendra Ramachandra, raghavendra.ramachandra@ntnu.no
    • Kiran Raja, kiran.raja@ntnu.no
  • University of Twente
    • Luuk Spreeuwers, l.j.spreeuwers@utwente.nl
    • Raymond Veldhuis, R.N.J.Veldhuis@utwente.nl
    • Ilias Batskos, i.batskos@utwente.nl
  • Hochschule Darmstadt
    • Christian Rathgeb, christian.rathgeb@h-da.de
    • Ulrich Scherhag, ulrich.scherhag@h-da.de

Accepted Papers

IJCB 2020 received 211 submissions, and a total of 83 were accepted for presentation at the conference. Of the 211 submitted papers, 24 were resubmitted during the review process and reviewed twice. The acceptance rate was 39.3% when considering the number of distinct submitted papers (83/211*100%) and 35.3% when considering the number of reviewed submissions (83/235*100%).
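As a quick sanity check, both rates follow directly from the counts reported above (a minimal sketch in Python; the variable names are ours):

```python
submitted = 211   # distinct papers submitted
twice = 24        # papers resubmitted and reviewed a second time
accepted = 83     # papers accepted for presentation

reviewed = submitted + twice  # 235 reviewed submissions in total

rate_distinct = accepted / submitted * 100  # acceptance rate over distinct papers
rate_reviewed = accepted / reviewed * 100   # acceptance rate over reviewed submissions

print(f"{rate_distinct:.1f}% / {rate_reviewed:.1f}%")  # prints "39.3% / 35.3%"
```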

  • A Progressive Stack Face-based Network for Detecting Diabetes Mellitus and Breast Cancer.
    Jianhang Zhou (University of Macau); Qi Zhang (University of Macau); Bob Zhang (University of Macau)*

  • White-Box Evaluation of Fingerprint Matchers: Robustness to Minutiae Perturbations.
    Steven A Grosz (Michigan State University)*; Anil Jain (Michigan State University); Joshua J Engelsma (Michigan State University); Nick Paulter (National Institute of Standards and Technology (NIST))

  • LDM-DAGSVM: Learning Distance Metric via DAG Support Vector Machine for Ear Recognition.
    Ibrahim Omara (Huazhong University of Science and Technology)*; Guangzhi Ma (Huazhong University of Science and Technology); Enmin Song (Huazhong University of Science and Technology)

  • Is Face Recognition Safe from Realizable Attacks?
    Sanjay Saha (NUS)*; Terence Sim (NUS)

  • FGAN: Fan-Shaped GAN for Racial Transformation.
    Ge Jiancheng (Beijing University of Posts and Telecommunications)*; Weihong Deng (Beijing University of Posts and Telecommunications); Mei Wang (Beijing University of Posts and Telecommunications); Jiani Hu (Beijing University of Posts and Telecommunications)

  • Specular- and Diffuse-reflection-based Face Spoofing Detection for Mobile Devices.
    Akinori F Ebihara (NEC Biometrics Research Laboratories)*; Kazuyuki Sakurai (NEC Biometrics Research Laboratories); Hitoshi Imaoka (NEC Corporation)

  • How Does Gender Balance In Training Data Affect Face Recognition Accuracy?.
    Vitor Albiero (University of Notre Dame)*; Kai Zhang (University of Notre Dame); Kevin Bowyer (University of Notre Dame)

  • Identity Document to Selfie Face Matching Across Adolescence.
    Vitor Albiero (University of Notre Dame)*; Nisha Srinivas (UNCW); Esteban Villalobos (Universidad Catolica de Chile); Jorge Perez-Facuse (Universidad Catolica de Chile); Roberto Rosenthal (BiometryPass); Kevin Bowyer (University of Notre Dame); Karl Ricanek (University of North Carolina Wilmington); Domingo Mery (Universidad Catolica de Chile)

  • FEBA - An Anatomy Based Finger Vein Classification.
    Arya Krishnan (IIITMK)*; Gayathri Nayar (IIITMK); Ake Nystrom (Stavanger University Hospital); Tony Thomas (IIITM)

  • Face Quality Estimation and Its Correlation to Demographic and Non-Demographic Bias in Face Recognition
    Philipp Terhorst (Fraunhofer Institute for Computer Graphics Research IGD)*; Jan Kolf (TU Darmstadt); Naser Damer (Fraunhofer IGD); Florian Kirchbuchner (Fraunhofer Institute for Computer Graphics Research IGD); Arjan Kuijper (Fraunhofer Institute for Computer Graphics Research IGD and Mathematical and Applied Visual Computing group, TU Darmstadt)

  • DVRNet: Decoupled Visible Region Network for Pedestrian Detection.
    Lei Shi (University of Houston)*; Charles M Livermore (University of Houston); Ioannis Kakadiaris (University of Houston)

  • PF-cpGAN: Profile to Frontal Coupled GAN for Face Recognition in the Wild.
    Fariborz Taherkhani (West Virginia University)*; Veeru Talreja (West Virginia University); Jeremy Dawson (West Virginia University); Matthew Valenti (West Virginia University); Nasser Nasrabadi (West Virginia University)

  • Recognition Oriented Iris Image Quality Assessment in the Feature Space.
    Leyuan Wang (CASIA)*; Kunbo Zhang (Institute of Automation, Chinese Academy of Sciences); Min Ren (Center for Research on Intelligent Perception and Computing (CRIPAC), Institute of Automation, Chinese Academy of Sciences (CASIA), University of Chinese Academy of Sciences (UCAS)); Yunlong Wang (Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)); Zhenan Sun (Chinese Academy of Sciences)

  • Is Warping-based Cancellable Biometrics (still) Sensible for Face Recognition?.
    Simon Kirchgasser (University of Salzburg)*; Andreas Uhl (University of Salzburg); Yoanna Martinez Diaz (Advanced Technologies Application Center, CENATAV); Heydi Mendez-Vazquez (CENATAV)

  • Cross-Spectral Periocular Recognition with Conditional Adversarial Networks.
    Kevin Hernandez-Diaz (Halmstad University); Fernando Alonso-Fernandez (Halmstad University)*; Josef Bigun (Halmstad University, Sweden)

  • Learning to Learn Face-PAD: a lifelong learning approach
    Daniel Perez-Cabo (Gradiant)*; David Jimenez-Cabello (Gradiant); Artur Costa-Pazo (Gradiant); Roberto Javier Lopez-Sastre (University of Alcala)

  • Facial landmark detection on thermal data via fully annotated visible-to-thermal data synthesis.
    Khawla Mallat (EURECOM)*; Jean-Luc Dugelay (EURECOM, Campus SophiaTech)

  • Distinctive Feature Representation for Contactless 3D Hand Biometrics using Surface Normal Directions.
    Kevin H. M. Cheng (The Hong Kong Polytechnic University); Ajay Kumar (The Hong Kong Polytechnic University)*

  • Using Deep Learning for Fusion of Eye and Mouse Movement based User Authentication.
    Yudong Liu (Western Washington University)*; Yusheng Jiang (Western Washington University); John Devenere (Western Washington University)

  • Leveraging Auxiliary Tasks for Height and Weight Estimation by Multi Task Learning.
    Dan Han (Institute Of Computing Technology, Chinese Academy Of Sciences)*; Jie Zhang (ICT, CAS); Shiguang Shan (Institute of Computing Technology, Chinese Academy of Sciences)

  • Sensitivity of Age Estimation Systems to Demographic Factors and Image Quality: Achievements and Challenges.
    Ali Akbari (University of Surrey)*; Muhammad Awais (University of Surrey); Josef Kittler (University of Surrey, UK)

  • Micro Stripes Analyses for Iris Presentation Attack Detection.
    Meiling Fang (Fraunhofer Institute for Computer Graphics Research IGD)*; Naser Damer (Fraunhofer IGD); Florian Kirchbuchner (Fraunhofer Institute for Computer Graphics Research IGD); Arjan Kuijper (Fraunhofer Institute for Computer Graphics Research IGD and Mathematical and Applied Visual Computing group, TU Darmstadt)

  • Spoof-Proof Biometric Identification using Micro-Movements of the Eyes.
    Silvia Makowski (University of Potsdam)*; Lena A. Jager (University of Potsdam); Paul Prasse (University of Potsdam); Tobias Scheffer (University of Potsdam)

  • Backdooring Convolutional Neural Networks via Targeted Weight Perturbations.
    Jacob Dumford (University of Notre Dame); Walter Scheirer (University of Notre Dame)*

  • IHashNet: Iris Hashing Network based on efficient multi-index hashing.
    Avantika Singh (IIT Mandi)*; Pratyush Gaurav (Indian Institute of Technology, Mandi); Chirag Vashist (Indian Institute of Technology, Mandi); Aditya Nigam (IIT Mandi); Rameshwar Pratap Yadav (IIT Mandi)

  • Fingerprint Synthesis: Search with 100 Million Prints.
    Vishesh S Mistry (Michigan State University)*; Joshua Engelsma (Michigan State University); Anil Jain (Michigan State University)

  • Fingerprint Spoof Detection: Temporal Analysis of Image Sequence.
    Tarang Chugh (Michigan State University)*; Anil Jain (Michigan State University)

  • On the Influence of Ageing on Face Morph Attacks: Vulnerability and Detection.
    Sushma Venkatesh (NTNU)*; Kiran Raja (NTNU); Raghavendra Ramachandra (NTNU, Norway); Christoph Busch (Norwegian University of Science and Technology)

  • Are Gabor Kernels Optimal for Iris Recognition?
    Aidan Boyd (University of Notre Dame)*; Adam Czajka (University of Notre Dame); Kevin Bowyer (University of Notre Dame)

  • On Benchmarking Iris Recognition within a Head-mounted Display for AR/VR Applications.
    Fadi Boutros (Fraunhofer IGD)*; Naser Damer (Fraunhofer IGD); Kiran Raja (NTNU); Raghavendra Ramachandra (NTNU, Norway); Florian Kirchbuchner (Fraunhofer Institute for Computer Graphics Research IGD); Arjan Kuijper (Fraunhofer Institute for Computer Graphics Research IGD and Mathematical and Applied Visual Computing group, TU Darmstadt)

  • Characterizing Light-Adapted Pupil Size in the NIR Spectrum.
    A D Clark (LPS)*; Thirimachos Bourlai (West Virginia University)

  • Face Recognition Oak Ridge (FaRO): A Framework for Distributed and Scalable Biometrics Applications.
    Joel R Brogan (Oak Ridge National Laboratory)*; David Cornett (Oak Ridge National Laboratory); David Bolme (Oak Ridge National Labs); Nisha Srinivas (UNCW)

  • DBLFace: Domain-Based Labels for NIR-VIS Heterogeneous Face Recognition.
    Ha A Le (University of Houston)*; Ioannis Kakadiaris (University of Houston)

  • Unconstrained Face Identification using Ensembles trained on Clustered Data.
    Rafael Henrique Vareto (Universidade Federal de Minas Gerais)*; William R Schwartz (Federal University of Minas Gerais)

  • Dense-View GEIs Set: View Space Covering for Gait Recognition based on Dense-View GAN.
    Rijun Liao (University of Missouri-Kansas City)*; Weizhi An (UTA); Shiqi Yu (Southern University of Science and Technology, China); Zhu Li (University of Missouri-Kansas City); Yongzhen Huang (Institute of Automation, Chinese Academy of Sciences)

  • Mobile Twin Recognition
    Vitaly Gnatyuk (Samsung Research Russia)*; Alena Moskalenko (MIPT)

  • Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems.
    Huy Nguyen (SOKENDAI)*; Junichi Yamagishi (National Institute of Informatics); Isao Echizen (National Institute of Informatics); Sebastien Marcel (Idiap Research Institute)

  • Baracca: a Multimodal Dataset for Anthropometric Measurements in Automotive.
    Stefano Pini (University of Modena and Reggio Emilia)*; Andrea D'Eusanio (University of Modena and Reggio Emilia); Guido Borghi (University of Bologna); Roberto Vezzani (University of Modena and Reggio Emilia, Italy); Rita Cucchiara (University of Modena and Reggio Emilia)

  • Iterative Weak/Self-Supervised Learning Frameworks for Abnormal Events Detection.
    Bruno Degardin (IT: Instituto de Telecomunicacoes); Hugo Proenca (U-Beira Interior)*

  • Finding the Suitable Doppelganger for a Face Morphing Attack.
    Alexander Rottcher (Ruhr-Universitat Bochum)*; Ulrich Scherhag (Hochschule Darmstadt); Christoph Busch (Norwegian University of Science and Technology)

  • All-in-one 'HairNet': A Deep Neural Model for Joint Hair Segmentation and Characterization.
    Diana Borza (Babes Bolyai University, Cluj-Napoca)*; Ehsan Yaghoubi (University of Beira Interior, Covilha); Joao Neves (Tomiworld); Hugo Proenca (U-Beira Interior)

  • Long-Term Face Tracking for Crowded Video-Surveillance Scenarios
    German Barquero (Universitat de Barcelona)*; Carles Fernandez Tena (Herta Security); Isabelle Hupont (Herta Security)

  • AdvFaces: Adversarial Face Synthesis.
    Debayan Deb (Michigan State University)*; Jianbang Zhang (Lenovo); Anil Jain (Michigan State University)

  • Anomaly Detection-Based Unknown Face Presentation Attack Detection.
    Yashasvi Baweja (Johns Hopkins University)*; Poojan B Oza (Johns Hopkins University); Pramuditha Perera (Johns Hopkins University); Vishal Patel (Johns Hopkins University)

  • How Confident Are You in Your Estimate of a Human Age? Uncertainty-aware Gait-based Age Estimation by Label Distribution Learning.
    Atsuya Sakata (Osaka University); Yasushi Makihara (Osaka University, Japan)*; Noriko Takemura (Osaka University); Daigo Muramatsu (Seikei University); Yasushi Yagi (Osaka University)

  • Partial Fingerprint Verification via Spatial Transformer Networks
    Zhiyuan He (Zhejiang University); Eryun Liu (Zhejiang University)*; Zhiyu Xiang (Zhejiang University)

  • TypeNet: Scaling up Keystroke Biometrics
    Alejandro Acien (Universidad Autonoma de Madrid)*; John V Monaco (Naval Postgraduate School); Aythami Morales (Universidad Autonoma de Madrid); Ruben Vera-Rodriguez (Universidad Autónoma de Madrid); Julian Fierrez (Universidad Autonoma de Madrid)

  • Fingerprint Presentation Attack Detection: A Sensor and Material Agnostic Approach.
    Steven A Grosz (Michigan State University)*; Tarang Chugh (Michigan State University); Anil Jain (Michigan State University)

  • Analysis of Dilation in Children and its Impact on Iris Recognition.
    Priyanka Das (Clarkson University)*; Stephanie Schuckers (Clarkson University); Laura Holsopple (Clarkson University)

  • Resist : Reconstruction of irises from templates.
    Sohaib Ahmad (University of Connecticut)*; Benjamin Fuller (University of Connecticut)

  • Cross-Spectral Iris Matching Using Coupled cGAN.
    Moktari Mostofa (West Virginia University)*; Fariborz Taherkhani (West Virginia University); Jeremy Dawson (West Virginia University); Nasser Nasrabadi (West Virginia University)

  • Feature map masking based single-stage face detection.
    Xi Zhang (Shenzhen University); Junliang Chen (Shenzhen University); Weicheng Xie (Shenzhen University); Linlin Shen (Shenzhen University)*

  • Occlusion-Adaptive Deep Network for Robust Facial Expression Recognition
    Hui Ding (University of Maryland, College Park)*; Peng Zhou (University of Maryland); Rama Chellappa (University of Maryland)

  • Assessing the Quality of Swipe Interactions for Mobile Biometric Systems.
    Marco Santopietro (University of Kent)*; Ruben Vera-Rodriguez (Universidad Autonoma de Madrid); Richard M Guest (University of Kent, Canterbury); Aythami Morales (Universidad Autonoma de Madrid); Alejandro Acien (Universidad Autonoma de Madrid)

  • Touch Behavior Based Age Estimation Toward Enhancing Child Safety
    Md S Hossain (Southern Connecticut State University)*; Carl Haberfeld (Southern Connecticut State University)

  • All-in-Focus Iris Camera With a Great Capture Volume.
    Kunbo Zhang (Institute of Automation, Chinese Academy of Sciences)*; Zhenteng Shen (Tianjin Academy for Intelligent Recognition Technologies); Yunlong Wang (Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)); Zhenan Sun (Chinese Academy of Sciences)

  • Domain Private and Agnostic Feature for Modality Adaptive Face Recognition.
    Yingguo Xu (Chongqing University)*; Lei Zhang (Chongqing University); Qingyan Duan (Chongqing University)

  • 3DPC-Net: 3D Point Cloud Network for Face Anti-spoofing
    Xuan Li (Beijing Jiaotong University); Jun Wan (NLPR, CASIA); Yi Jin (Beijing JiaoTong University)*; Ajian Liu (MUST); Guodong Guo (Baidu); Stan Z. Li (Westlake University)

  • How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals.
    Umur A Ciftci (State University of New York at Binghamton); Ilke Demir (Intel)*; Lijun Yin (State University of New York at Binghamton)

  • DeformGait: Gait Recognition under Posture Changes using Deformation Patterns between Gait Feature Pairs.
    Chi Xu (Nanjing University of Science and Technology)*; Daisuke Adachi (Osaka University); Yasushi Makihara (Osaka University, Japan); Yasushi Yagi (Osaka University); Jianfeng Lu (Nanjing University of Science and Technology)

  • Your Tattletale Gait.
    Sanka Rasnayaka (National University of Singapore)*; Terence Sim (NUS)

  • Pixel Sampling for Style Preserving Face Pose Editing.
    Xiangnan Yin (Ecole Centrale de Lyon)*; Di Huang (Beihang University, China); Hongyu Yang (Beihang University); Zehua Fu (Ecole Centrale de Lyon); Yunhong Wang (State Key Laboratory of Virtual Reality Technology and System, Beihang University, Beijing 100191, China); Liming Chen (Ecole Centrale de Lyon)

  • Beyond Identity: What Information Is Stored in Biometric Face Templates?.
    Philipp Terhorst (Fraunhofer Institute for Computer Graphics Research IGD)*; Daniel Fahrmann (Fraunhofer Institute for Computer Graphics Research IGD); Naser Damer (Fraunhofer IGD); Florian Kirchbuchner (Fraunhofer Institute for Computer Graphics Research IGD); Arjan Kuijper (Fraunhofer Institute for Computer Graphics Research IGD and Mathematical and Applied Visual Computing group, TU Darmstadt)

  • 3D Iris Recognition using Spin Images
    Daniel Benalcazar (Universidad de Chile); Daniel Montecino (Universidad de Chile); Jorge Zambrano (Universidad de Chile); Claudio A Perez (Universidad de Chile)*; Kevin Bowyer (University of Notre Dame)

  • An Assessment of GANs for Identity-related Applications.
    Richard T Marriott (Ecole Centrale de Lyon)*; Safa Madiouni (IDEMIA); Sami Romdhani (IDEMIA); Stephane Gentric (IDEMIA); Liming Chen (Ecole Centrale de Lyon)

  • Analysing the Performance of LSTMs and CNNs for Fingerprint Presentation Attack Detection.
    Jascha Kolberg (Hochschule Darmstadt)*; Alexandru-Cosmin Vasile (Technical University of Denmark); Marta Gomez-Barrero (Hochschule Ansbach); Christoph Busch (Hochschule Darmstadt)

  • Cross Modal Person Re-identification with Visual-Textual Queries.
    Ammarah Farooq (University of Surrey)*; Muhammad Awais (University of Surrey); Josef Kittler (University of Surrey, UK); Ali Akbari (University of Surrey); Syed Safwan Khalid (University of Surrey)

  • Inverse Biometrics: Reconstructing Grayscale Finger Vein Images from Binary Features.
    Christof Kauba (University of Salzburg)*; Simon Kirchgasser (University of Salzburg); Vahid Mirjalili (Michigan State University); Arun Ross (Michigan State University); Andreas Uhl (University of Salzburg)

  • Open Source Iris Recognition Hardware and Software with Presentation Attack Detection
    Zhaoyuan Fang (University of Notre Dame)*; Adam Czajka (University of Notre Dame)

  • iLGaCo: Incremental Learning of Gait Covariate Factors.
    Zihao Mu (Shenzhen University); Francisco M. Castro (University of Malaga); Manuel J. Marin-Jimenez (University of Cordoba); Nicolas Guil (University of Malaga); Yanran Li (Shenzhen University); Shiqi Yu (Southern University of Science and Technology, China)*

  • Cross-Domain Identification for Thermal-to-Visible Face Recognition.
    Cedric A Nimpa Fondje (University of Nebraska-Lincoln)*; Shuowen (Sean) Hu (ARL); Nathan Short (Booz Allen Hamilton); Benjamin Riggan (University of Nebraska-Lincoln)

  • Clustered Dynamic Graph CNN for Biometric 3D Hand Shape Recognition.
    Jan Svoboda (NNAISENSE)*; Pietro Astolfi (University of Trento); Davide Boscaini (Fondazione Bruno Kessler); Jonathan Masci (NNAISENSE); Michael Bronstein (Imperial College / Twitter)

  • Swipe dynamics as a means of authentication: results from a Bayesian unsupervised approach.
    Parker Lamb (Callsign); Alexander Millar (Callsign); Ramon Fuentes (Callsign)*

  • D-NetPAD: An Explainable and Interpretable Iris Presentation Attack Detector.
    Renu Sharma (Michigan State University)*; Arun Ross (Michigan State University)

  • Gait Recognition Based on 3D Skeleton Data and Graph Convolutional Network
    Mao Mengge (Xi'an JiaoTong University)*; Yonghong Song (Xi'an JiaoTong University)

  • Leveraging edges and optical flow on faces for deepfake detection.
    Akash Chintha (Rochester Institute of Technology)*; Aishwarya Rao (Rochester Institute of Technology); Saniat Sohrawardi (Rochester Institute of Technology); Kartavya M Bhatt (Rochester Institute of Technology); Matthew Wright (Rochester Institute of Technology); Raymond Ptucha (Rochester Institute of Technology)

  • Fingerprint Feature Extraction by Combining Texture, Minutiae, and Frequency Spectrum Using Multi-Task CNN.
    Ai Takahashi (Tohoku University)*; Yoshinori Koda (NEC Corporation); Koichi Ito (Tohoku University); Takafumi Aoki (Tohoku University)

  • Modeling Score Distributions and Continuous Covariates: A Bayesian Approach.
    Mel McCurrie (Perceptive Automata)*; Hamish Nicholson (Harvard); Walter Scheirer (University of Notre Dame); Samuel Anthony (Perceptive Automata)

  • Gender and Ethnicity Classification based on Palmprint and Palmar Hand Images from Uncontrolled Environment.
    Wojciech M Matkowski (Nanyang Technological University)*; Wai-Kin Adams Kong (Nanyang Technological University)

  • A Metric Learning Approach to Eye Movement Biometrics.
    Dillon J Lohr (Texas State University)*; Henry Griffith (Michigan State University); Samantha Aziz (Texas State University); Oleg Komogortsev (Michigan State University)

  • De-biasing Existing Classification Models Using Diversity Blocks.
    Maneet Singh (IIIT-Delhi, India); Shruti Nagpal (IIIT-Delhi); Mayank Vatsa (IIT Jodhpur); Richa Singh (IIIT-Delhi)*

  • Development of deep clustering model to stratify occurrence risk of diabetic foot ulcers based on foot pressure patterns and clinical indices.
    Xuanchen Ji (Nagoya University)*; Yasuhiro Akiyama (Nagoya University); Hisae Hayashi (Seijo University); Yoji Yamada (Nagoya university); Shogo Okamoto (Nagoya university)

  • A Brief Literature Review and Survey of Adult Perceptions on Biometric Recognition for Infants and Toddlers.
    Tempestt Neal (USF)*; Ashokkumar Patel (Florida Polytechnic University)

The Oral/Short oral designation can be found HERE.

Keynote Speakers

1. Hany Farid, University of California, Berkeley

Hany Farid is a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship, a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.

Detecting Deep Fake Videos

Synthetically generated audios and videos -- so-called deep fakes -- continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small- to large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. I will describe a series of forensic techniques for detecting face-swap deep fakes and will show the efficacy of these techniques across several large-scale video datasets, as well as on in-the-wild deep fakes.

For more details visit his webpage: https://farid.berkeley.edu/

2. Tal Hassner, Facebook AI

Tal Hassner received his M.Sc. and Ph.D. degrees in applied mathematics and computer science from the Weizmann Institute of Science in 2002 and 2006, respectively. In 2008 he joined the Department of Mathematics and Computer Science at The Open University of Israel, where he was an Associate Professor until 2018. From 2015 to 2018, he was a senior computer scientist at the Information Sciences Institute (ISI) and a Visiting Research Associate Professor at the Institute for Robotics and Intelligent Systems, Viterbi School of Engineering, both at USC, CA, USA, working on the IARPA Janus face recognition project. From 2018 to 2019, he was a Principal Applied Scientist at AWS, where he led the design and development of the latest AWS face recognition pipelines. Since 2019 he has been an Applied Research Lead at Facebook AI, supporting both the text and people photo understanding teams.

Title: Face swapping and manipulation: Past, Present, and Future

Abstract: Interest in DeepFakes and their far-reaching implications seems only to be growing. In this talk, I will survey the history of face manipulation techniques: how and why these methods came to be proposed long before the term DeepFake was coined. I will show that manipulating faces was technically quite simple even before the introduction of deep learning techniques, and explain why deep learning contributed to making these methods household names. I will offer examples of how face manipulation methods can be used for good and, in particular, how they can be used to reduce training set biases as well as help us, as a community, better respect people's privacy. Finally, I will survey ongoing efforts for detecting such manipulations. None of the material reported in the presentation was produced or used at Facebook in any way (with the exception of a short discussion on the DeepFake Detection Challenge).

3. Mark Morse, Vice President Humana Pharmacy Service Operations

In his current role, Mark is responsible for the consumer experience that Humana members have when they utilize Humana Pharmacy. He is also responsible for the articulation of the Pharmacy digital and self-service strategy and ensuring alignment between initiatives, the strategy, and attainment of KPIs to track progress. Reporting to Mark are the following teams:

  • Humana Pharmacy and Humana Specialty Pharmacy contact center teams responsible for helping members receive their medications via home delivery and resolving issues or questions that they may have,
  • RxEducation that is charged with informing members of their savings opportunities via home delivery and assisting in the transition from retail to mail order if the member requests it,
  • Consumer Experience Optimization that is responsible for utilizing data from our consumers and associates to understand the issues that our customers have when utilizing their pharmacy benefit and identifying the initiatives needed to remove those abrasion points, and
  • Consumer Automation that is responsible for the management of the IVR and other automated campaigns to engage members via self-service.

  • Educational background:
    • Bachelor of Science in Management Systems from Rensselaer Polytechnic Institute

    Professional Background:
    • Joined Humana in 2001 with roles including: leader for self-funded capability development, subject matter expert in consumer directed healthcare, corporate sales director for national & major accounts, and practice leader for PBM sales and client services.
    • Six years with UnitedHealth Group serving as the business owner for its consumer portal and operations director for health plan migrations
    • Seven years of business system and reengineering consulting with Andersen Consulting.

    Title: Voice Of the Customer: Challenges and Opportunities
    Abstract: In this talk, I will present the work that Humana Pharmacy has done toward creating perfect experiences for our customers. I will review how we measure the quality of the experience and the challenges that our associates face, share our experience with a commercially available solution that we have deployed, and outline the next steps we are taking in our journey to use artificial intelligence to help enhance the emotional intelligence of our associates.

    4. Nasir Memon, NYU Tandon School of Engineering

    Nasir Memon is Vice Dean for Academics and Student Affairs and a Professor of Computer Science and Engineering at the New York University Tandon School of Engineering. He is an affiliate faculty at the Computer Science department in NYU's Courant Institute of Mathematical Sciences, and department head of NYU Tandon Online. He introduced cyber security studies to NYU Tandon in 1999, making it one of the first schools to implement the program at the undergraduate level. He is a co-founder of NYU's Center for Cyber Security (CCS) at New York as well as NYU Abu Dhabi. He is the founder of the OSIRIS Lab, CSAW, The Bridge to Tandon Program as well as the Cyber Fellows program at NYU. He has received several best paper awards and awards for excellence in teaching. He has been on the editorial boards of several journals, and was the Editor-In-Chief of the IEEE Transactions on Information Security and Forensics. He is an IEEE Fellow and an SPIE Fellow for his contributions to image compression and media security and forensics. His research interests include digital forensics, biometrics, data compression, network security and security and human behavior.

    Title: A Proactive Approach to Media Authenticity
    Abstract: The recent emergence of deepfakes has brought the looming media authenticity crisis into the spotlight. The traditional approach to addressing this problem is to perform forensic analysis of digital media. Unfortunately, most forensic traces are fragile and are often destroyed by common post-processing (e.g., compression). Moreover, increasing processing complexity makes formal modeling intractable and increases our reliance on non-interpretable black-box methods with few performance guarantees. The situation will get worse with the advent of computational imaging. However, computational photography brings not only challenges but also opportunities. We explore end-to-end modeling and optimization of photo acquisition and distribution channels. Modern machine learning frameworks allow for optimization of various components to facilitate photo manipulation detection. Our work shows that such optimization can bring substantial benefits.

    Tutorials

    Tutorial #1 Title: Gait accelerometry as a passive biometric: Promise and pitfalls [4 sessions: ~30 minutes each]
    Allocated time: 10:00-12:30, Monday, September 28th 2020

    Bio:
    Vinay Prabhu is currently on a mission to model human kinematics using motion sensors on smartphones, paving the way for numerous breakthroughs in areas such as passive password-free authentication, geriatric care, neuro-degenerative disease modeling, fitness and augmented reality. He is the Chief Scientist at UnifyID Inc and has worked in areas spanning physical layer wireless communications, estimation theory, information theory, network sciences and machine learning. His recent research projects include Deep Connectomics networks, Grassmannian initialization, the SAT: Synthetic-seed-Augment-Transfer framework and the Kannada-MNIST dataset. He holds a PhD from Carnegie Mellon University and an MS from the Indian Institute of Technology-Madras. In his spare time, he works on his cricketing skills and generative art projects, some of which have made it to the playa at Black Rock City.

    Abstract: In recent times, password-based user authentication methods have increasingly drawn the ire of the security community, especially given their prevalence in the world of mobile telephony.
    Researchers recently showed that creating passwords on mobile devices not only takes significantly more time, but is also more error-prone and frustrating and, worst of all, yields inherently weaker passwords, making the human the weak link from a security perspective.
    One promising solution entails implicit authentication of users based on behavioral patterns that are sensed without the active participation of the user. In this domain of implicit authentication, measurement of gait-cycle signatures, mined using the on-phone Inertial Measurement Unit - MicroElectroMechanical Systems (IMU-MEMS) sensors, such as accelerometers and gyroscopes, has emerged as an extremely promising passive biometric. These gait patterns can not only be collected passively and unobtrusively (unlike iris, face, fingerprint, or palm veins); they are also extremely difficult to replicate due to their dynamic nature.
    Inspired by the immense success that Deep Learning (DL) has enjoyed across disparate domains, such as speech recognition, visual object recognition, and object detection, researchers in the field of gait-based implicit authentication are increasingly embracing DL-based machine-learning solutions, replacing the more traditional shallow machine-learning approaches driven by hand-crafted feature engineering. Besides circumventing the oft-contentious process of hand-engineering features, these DL-based approaches are also more robust to noise, which bodes well for the implicit-authentication solutions that will be deployed on mainstream commercial hardware.
    In this tutorial, we will take a deep dive into the specific DL architectures and algorithms used in accelerometric gait based authentication, survey the landscape of domain-specific vulnerabilities, and also delve into the ethical aspects of this technology.
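The gait-segmentation step surveyed in Session 1 below can be illustrated with a deliberately simple sketch: detect heel-strike peaks in the accelerometer-magnitude stream and treat consecutive peaks as cycle boundaries. Everything here (function names, the mean-threshold heuristic, the synthetic signal) is a hypothetical illustration, not material from the tutorial; production systems use far more robust segmentation.

```python
import math

def segment_gait_cycles(acc_mag, fs, min_cycle_s=0.6):
    """Split an accelerometer-magnitude stream into gait cycles via heel-strike peaks.

    acc_mag: list of |a| samples (gravity-removed); fs: sampling rate in Hz.
    Returns (start, end) sample-index pairs, one per detected cycle.
    """
    min_gap = int(min_cycle_s * fs)           # refractory period between heel strikes
    thresh = sum(acc_mag) / len(acc_mag)      # crude adaptive threshold: signal mean
    peaks = []
    for i in range(1, len(acc_mag) - 1):
        is_local_max = acc_mag[i - 1] < acc_mag[i] >= acc_mag[i + 1]
        if is_local_max and acc_mag[i] > thresh:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return list(zip(peaks, peaks[1:]))        # consecutive peaks bound one cycle

# Synthetic walk: ~1 Hz cadence sampled at 50 Hz, 5 s of data -> 5 peaks, 4 full cycles.
fs = 50
sig = [math.sin(2 * math.pi * 1.0 * t / fs) for t in range(5 * fs)]
cycles = segment_gait_cycles(sig, fs)
print(len(cycles))
```

Real IMU streams would also need gravity removal, resampling across device sampling rates, and orientation normalization, all covered as "multimodality" challenges in Session 2.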

    Target audience: 1. Academic researchers working in the following areas: Biometrics, deep learning, motion-sensor based signal processing and gait analysis
    2. Product managers and Ethicists working on real-world biometric products

    Prerequisites:
    Basic knowledge of machine learning



    Tutorial session topics: (2.5 hours)
    Session 1: INTRODUCTION [30 minutes]

    • Introduction to human bipedal gait
    • Gait as a passive biometric: Surveying the modalities of measurement
    • Advantages over active biometrics: The "Lose once. Lose forever" pitfall
    • On MEMS and accelerometers: A brief review of the world of motion sensors
    • A survey of Gait segmentation approaches

    Section References:

    • Maus, H-M; Lipfert, SW; Gross, M; Rummel, J; Seyfarth, A. Upright human gait did not provide a major mechanical challenge for our ancestors. Nature Communications, 1(1):1-6, 2010.
    • More, Sagar A; Deore, Pramod J. A survey on gait biometrics. World Journal of Science and Technology, 2(4):146-151, 2012.
    • Connor, Patrick; Ross, Arun. Biometric recognition by gait: A survey of modalities and features. Computer Vision and Image Understanding, 167:1-27, 2018.
    • Derawi, Mohammad Omar. Accelerometer-based gait analysis, a survey. Norsk informasjonssikkerhetskonferanse (NISK), 2010.
    • Cheng, Peng; Oelmann, Bengt. Joint-angle measurement using accelerometers and gyroscopes: A survey. IEEE Transactions on Instrumentation and Measurement, 59(2):404-414, 2010.
    • Agostini, Valentina; Balestra, Gabriella; Knaflitz, Marco. Segmentation and classification of gait cycles. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 22(5):946-952, 2013.
    • Jiang, Shuo; Wang, Xingchen; Kyrarini, Maria; Graser, Axel. A robust algorithm for gait cycle segmentation. In 25th European Signal Processing Conference (EUSIPCO), 31-35, 2017.
    • Makihara, Yasushi; Sagawa, Ryusuke; Mukaigawa, Yasuhiro; Echigo, Tomio; Yagi, Yasushi. Gait recognition using a view transformation model in the frequency domain. In European Conference on Computer Vision, 151-163, 2006.

    Break 1: 5 minutes

    Session 2: GAIT CLASSIFICATION [30 minutes]

    • The main datasets
    • The challenge of Multimodality: Device location, orientation changes, Device OS, Sampling frequency variations (iOS/Android), footwear
    • ML for Gait classification: Surveying the transition from shallow to deep
    • Privacy aware training
    • Two pipelines: Classification and Re-identification
    Section References:

      • Gadaleta, M; Rossi, M. IDNet: Smartphone-based gait recognition with convolutional neural networks. arXiv preprint arXiv:1606.03238, 2016.
      • Vajdi, Amir; Zaghian, Mohammad Reza; Farahmand, Saman; Rastegar, Elham; Maroofi, Kian; Jia, Shaohua; Pomplun, Marc; Haspel, Nurit; Bayat, Akram. Human gait database for normal walk collected by smart phone accelerometer. arXiv preprint arXiv:1905.03109, 2019.
      • Takemura, Noriko; Makihara, Yasushi; Muramatsu, Daigo; Echigo, Tomio; Yagi, Yasushi. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 10(1):4, 2018.
      • Zeng, Wei; Chen, Jianfei; Yuan, Chengzhi; Liu, Fenglin; Wang, Qinghui; Wang, Ying. Accelerometer-based gait recognition via deterministic learning. In 2018 Chinese Control And Decision Conference (CCDC), 6280-6285, 2018.
      • Yao, Shuochao; Zhao, Yiran; Zhang, Aston; Hu, Shaohan; Shao, Huajie; Zhang, Chao; Su, Lu; Abdelzaher, Tarek. Deep learning for the internet of things. Computer, 51(5):32-41, 2018.
      • https://github.com/tensorflow/privacy
      • https://github.com/pytorch/opacus
      • Neverova, N; Wolf, C; Lacey, G; Fridman, L; Chandra, D; Barbello, B; Taylor, G. Learning human identity from motion patterns. arXiv preprint arXiv:1511.03908, 2015.
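The "Privacy aware training" topic above is typically realized with differentially-private SGD, the technique behind the tensorflow/privacy and opacus libraries linked in the references. Below is a minimal, library-free sketch of its core step, per-example gradient clipping followed by Gaussian noise, applied to logistic regression. All function names, hyperparameters, and data are hypothetical illustrations, not the libraries' actual APIs.

```python
import math, random

random.seed(0)

def dp_sgd_step(w, batch, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD step: clip each per-example gradient to L2 norm <= clip,
    sum the clipped gradients, add Gaussian noise, then average and descend."""
    summed = [0.0] * len(w)
    for x, y in batch:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))            # sigmoid prediction
        g = [(p - y) * xi for xi in x]            # per-example log-loss gradient
        norm = math.sqrt(sum(gi * gi for gi in g))
        scale = min(1.0, clip / (norm + 1e-12))   # clip (never amplify) the gradient
        summed = [s + gi * scale for s, gi in zip(summed, g)]
    sigma = noise_mult * clip                     # noise calibrated to the clip bound
    noisy = [s + random.gauss(0.0, sigma) for s in summed]
    return [wi - lr * (n / len(batch)) for wi, n in zip(w, noisy)]

# Hypothetical 2-feature batch: label 1 iff the first feature is positive.
batch = [([1.0, 0.2], 1), ([-0.8, 0.5], 0), ([0.9, -0.3], 1), ([-1.1, 0.1], 0)]
w = [0.0, 0.0]
for _ in range(50):
    w = dp_sgd_step(w, batch)
print(w)
```

In practice one would use the linked libraries directly (they also track the cumulative privacy budget epsilon, which this sketch omits).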

      Break 2: 5 minutes

      Session 3: ROBUSTNESS [30 minutes]

      • Types of security attacks: Mimicry, Audio injection attacks, Adversarial attacks, Response to non-gait kinematic shapelets
      • Out-of-distribution detection and scoring
      • Lifelong learning, catastrophic forgetting and model updating
      • Cross modality transfer learning

      Section References:

      • Michalevsky, Yan; Boneh, Dan; Nakibly, Gabi. Gyrophone: Recognizing speech from gyroscope signals. In 23rd USENIX Security Symposium, 1053-1067, 2014.
      • Trippel, Timothy; Weisse, Ofir; Xu, Wenyuan; Honeyman, Peter; Fu, Kevin. WALNUT: Waging doubt on the integrity of MEMS accelerometers with acoustic injection attacks. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P), 3-18, 2017.
      • Wang, Jiayu; Yang, Aiying; Guo, Peng; Lu, Chenyan; Feng, Lihui; Xing, Chaoyang. Experimental and theoretical study of acoustic injection attacks on MEMS accelerometer. In 2019 International Conference on Sensing and Instrumentation in IoT Era (ISSI), 1-6, 2019.
      • Anand, S Abhishek; Wang, Chen; Liu, Jian; Saxena, Nitesh; Chen, Yingying. Spearphone: A speech privacy exploit via accelerometer-sensed reverberations from smartphone loudspeakers. arXiv preprint arXiv:1907.05972, 2019.
      • http://metalearning.ml/
      • Prabhu, Vinay; Tietz, Stephanie; Ta, Anh. Classifying humans using deep time-series transfer learning: accelerometric gait-cycles to gyroscopic squats. 2019.

      Break 3: 5 minutes

      Session 4: ETHICS [30 minutes]

    • Survey of real-world video-gait surveillance
    • Consensual enrollment
    • Authentication, not surveillance
    • Criminality detection, 'gender' classification and pseudo-science
    Section References:

      • Bouchrika, Imed. A survey of using biometrics for smart visual surveillance: Gait recognition. In Surveillance in Action, 3-23, 2018.
      • Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. 2019.
      • Yu, Shiqi; Tan, Tieniu; Huang, Kaiqi; Jia, Kui; Wu, Xinyu. A study on gait-based gender classification. IEEE Transactions on Image Processing, 18(8):1905-1910, 2009.
      • Ahmed, Mohammed Hussein; Sabir, Azhin Tahir. Human gender classification based on gait features using Kinect sensor. In 2017 3rd IEEE International Conference on Cybernetics (CYBCONF), 1-5, 2017.
      • Randhavane, Tanmay; Bhattacharya, Uttaran; Kapsaskis, Kyra; Gray, Kurt; Bera, Aniket; Manocha, Dinesh. The Liar's Walk: Detecting deception with gait and gesture. arXiv preprint arXiv:1912.06874, 2019.

      Recap and Q&A: 15 minutes


      Tutorial #2 Title: Secure the Face Analysis System: Recent Advances on Detecting Face Presentation Attacks and Digital Manipulation
      Allocated time: 12:30-15:00, Monday, September 28th 2020

      Abstract: Face is one of the most popular biometric modalities due to its convenience of use in access control, phone unlocking, etc. Despite high recognition accuracy, face recognition systems are vulnerable to attacks from both the physical world and the digital world. Face spoof attacks, or presentation attacks, are physical attacks that use fake faces, e.g., a photograph or a screen, to deceive a system into recognizing them as a real live person. Face manipulation attacks are digital attacks that manipulate the digital content of images before they are sent to the system, e.g., adversarial attacks, deepfakes, and face attribute editing. These attacks can fool a face recognition system into accepting a spoofed/manipulated face as one from a real human, exposing the system to security breaches. Thus, to safely utilize face recognition systems, techniques to detect and understand these physical and digital attacks are crucial before performing face recognition.

      This tutorial provides a comprehensive review of recent advances in detecting face presentation attacks and digital manipulation attacks. We will cover topics ranging from attacks and sensors to methodology, databases, and evaluation protocols. Specifically, we mostly focus on solutions for RGB data but also discuss alternatives such as NIR and depth cameras. Regarding methodology, we will not only provide details of the SOTA methods but also discuss their strengths and weaknesses. In the end, we will discuss some open problems and suggest future research directions.

      Target participants: researchers who work on or are interested in face anti-spoofing and digital face manipulation

      Prerequisites for the participants: basic knowledge of computer vision and deep learning

      Expected enrollment: 50

      Topics: (3 hours)

      • Session 1: 50 minutes
        Face Anti-Spoofing: Detection and Visualization
        In this session, we first provide an introduction to attacks on face systems, including physical presentation attacks and digital manipulation attacks. Then, we introduce recent works that focus on the detection of face presentation attacks. Specifically, we analyze how recent works solve two essential problems: (i) how to improve detection performance [1-4]; (ii) how to provide visual understanding of a model's decision [5-6].
      • Break: 15 minutes
      • Session 2: 50 minutes
        Face Anti-Spoofing Generalization
        In this session, we cover recent studies on the generalization of face anti-spoofing models, including domain adaptation [7-9], few-shot/zero-shot learning [10-12], and anomaly detection [13]. We also point out some future directions for face anti-spoofing.
      • Break: 15 minutes
      • Session 3: 50 minutes
        Digital Face Manipulation
        In this session, we discuss the digital attacks specifically on face manipulation detection. We will start with reviewing the common benchmark databases and discuss the challenges in this problem. Then we will detail the representative approaches in face manipulation detection. Finally, the potential future approaches will be discussed as well.
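One classical family of presentation-attack cues behind the detection methods surveyed in Session 1 is texture: screen and print recaptures tend to alter an image's high-frequency content (e.g., moiré patterns). As a purely illustrative sketch, not a method from this tutorial, the toy detector below scores a grayscale patch by neighboring-pixel energy; all function names, thresholds, and data are hypothetical.

```python
def high_freq_energy(img):
    """Mean squared difference between horizontally adjacent pixels: a crude
    proxy for the high-frequency texture that recapture artifacts alter."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for r in range(h):
        for c in range(w - 1):
            total += (img[r][c + 1] - img[r][c]) ** 2
            n += 1
    return total / n

def classify(img, thresh=0.05):
    # Hypothetical decision rule: recapture-like aliasing raises the energy score.
    return "spoof" if high_freq_energy(img) > thresh else "live"

# Hypothetical 4x4 grayscale patches in [0,1]: a smooth "live" patch versus a
# recapture-like patch with strong pixel-grid aliasing.
live = [[0.50, 0.52, 0.54, 0.56]] * 4
spoof = [[0.2, 0.8, 0.2, 0.8]] * 4
print(classify(live), classify(spoof))
```

State-of-the-art detectors ([1-6] below) learn such cues with deep networks and auxiliary supervision rather than hand-set thresholds.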

      Tutors:

      Yaojie Liu: Yaojie Liu received his B.S. in Communication Engineering from University of Electronic Science and Technology of China, and M.S. in Computer Science from the Ohio State University. He is currently pursuing his Ph.D. at Michigan State University in the area of computer vision and deep learning. His research areas of interest are face representation & analysis, including face anti-spoofing, 2D/3D large pose face alignment, and 3D face reconstruction. He published four top-tier conference papers and filed one U.S. patent on face anti-spoofing. He served as the reviewer for numerous conferences and journals, including CVPR, ICCV, ECCV, AAAI, NeurIPS, TIP, TIFS, CVIU and Neurocomputing. He is a member of IEEE.

      Xiaoming Liu: Xiaoming Liu is an Associate Professor at the Department of Computer Science and Engineering of Michigan State University. He received the Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University in 2004. Before joining MSU in Fall 2012, he was a research scientist at General Electric (GE) Global Research. His research interests include computer vision, pattern recognition, biometrics and machine learning. He is the recipient of the 2018 Withrow Distinguished Scholar Award from Michigan State University. As a co-author, he is a recipient of the Best Industry Related Paper Award runner-up at ICPR 2014, Best Student Paper Award at WACV 2012 and 2014, and Best Poster Award at BMVC 2015. He has been an Area Chair for numerous conferences, including NeurIPS, ICLR, ECCV, ICCV, and CVPR. He is the Co-Program Chair of the BTAS 2018, WACV 2018, and AVSS 2021 conferences and Co-General Chair of the FG 2023 conference. He is an Associate Editor of the Pattern Recognition journal, Pattern Recognition Letters, and IEEE Transactions on Image Processing. He is a guest editor for the International Journal of Computer Vision (IJCV) Special Issue on Deep Learning for Face Analysis, and the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) Special Issue on Face Analysis for Applications. He has authored more than 150 scientific publications, and has filed 26 U.S. patents. His work has been cited over 10K times according to Google Scholar, and he has an H-index of 49.

      Reference:

      [1] Y. Liu, A. Jourabloo, and X. Liu. Learning deep models for face anti-spoofing: Binary or auxiliary supervision. In CVPR, 2018.
      [2] Z. Wang, C. Zhao, Y. Qin, Q. Zhou, and Z. Lei. Exploiting temporal and depth information for multi-frame face anti-spoofing. In CVPR, 2019.
      [3] Z. Yu, Y. Qin, X. Xu, C. Zhao, Z. Wang, Z. Lei, and G. Zhao. Auto-FAS: Searching lightweight networks for face anti-spoofing. In ICASSP, 2020.
      [4] Z. Yu, C. Zhao, Z. Wang, Y. Qin, Z. Su, X. Li, F. Zhou, and G. Zhao. Searching central difference convolutional networks for face anti-spoofing. In CVPR, 2020.
      [5] A. Jourabloo, Y. Liu, and X. Liu. Face de-spoofing: Anti-spoofing via noise modeling. In ECCV, 2018.
      [6] J. Stehouwer, Y. Liu, A. Jourabloo, and X. Liu. Noise modeling, synthesis and classification for generic object anti-spoofing. In CVPR, 2020.
      [7] R. Shao, X. Lan, J. Li, and P. C. Yuen. Multi-adversarial discriminative deep domain generalization for face presentation attack detection. In CVPR, 2019.
      [8] R. Shao, X. Lan, and P. C. Yuen. Regularized fine-grained meta face anti-spoofing. In AAAI, 2020.
      [9] G. Wang, H. Han, S. Shan, and X. Chen. Cross-domain face presentation attack detection via multi-domain disentangled representation learning. In CVPR, 2020.
      [10] Y. Jia, J. Zhang, S. Shan, and X. Chen. Single-side domain generalization for face anti-spoofing. arXiv preprint arXiv:2004.14043, 2020.
      [11] Y. Qin, C. Zhao, X. Zhu, Z. Wang, Z. Yu, T. Fu, F. Zhou, J. Shi, and Z. Lei. Learning meta model for zero- and few-shot face anti-spoofing. In AAAI, 2020.
      [12] Y. Liu, J. Stehouwer, A. Jourabloo, and X. Liu. Deep tree learning for zero-shot face anti-spoofing. In CVPR, 2019.
      [13] A. Costa-Pazo, D. Jimenez-Cabello, E. Vazquez-Fernandez, J. L. Alba-Castro, and R. J. Lopez-Sastre. Generalized presentation attack detection: A face anti-spoofing evaluation proposal. arXiv, 2019.

      Recap and Q&A: 15 minutes

    Panel Session

    Panel Session 1 (60 minutes): Wednesday, September 30th, 12:30pm - 1:30pm Houston time (CDT)
    Industry track panel discussion:

    Title: The impacts of privacy legislation upon the biometrics sector

    Overview: Recent years have seen the arrival of globally-reaching privacy legislation. Much of it impacts directly upon the biometrics sector, e.g. the General Data Protection Regulation (GDPR) within Europe and the California Consumer Privacy Act (CCPA) in the US. Privacy legislation places restrictions upon the collection, storage and processing of personal data, including biometric data, with potentially severe ramifications for non-compliance. Perhaps surprisingly, there is no universally accepted legal definition of privacy. Less surprising, then, is that the implications of privacy legislation upon our sector are often misunderstood and certainly open to interpretation.
    The panel aims to give IJCB participants from industry an opportunity to share their views on privacy legislation, its implications upon the biometrics sector, the viability of existing privacy preservation technology and the remaining challenges, as well as to foster dialogue with IJCB attendees not from industry. Leading figures from industry, academic privacy experts and regulatory authority representatives will present their perspectives and respond to audience questions regarding privacy legislation and biometrics.

    Panelists:
    • Jeff Lee, VP/Fellow, Biometrics, Cypress Semiconductor Corp., An Infineon Technologies Company
    • Thomas Zerdick, LL.M., Head of Technology and Privacy Unit, European Data Protection Supervisor
    • Catherine Jasserand, KU Leuven, Belgium
    • Andreas Nautsch, EURECOM, France
    • Vishwath Mohan, Android Biometrics Security Lead, USA
    Moderators:
    • Nicholas Evans, EURECOM, France
    • Kuntal Sengupta, Google, USA

    Panelist Bios:
    Jeff Lee is currently VP/Fellow within the subsystems business of Infineon's Cypress Semiconductor subsidiary. Jeff's areas of expertise include biometrics, applied security, product management, product security incident response leadership, and business development. Jeff has a long history defining and implementing biometric solutions, including 13 years with AuthenTec/Apple on TouchID and Apple Pay, and over 5 years at Cypress Semiconductor with embedded fingerprint solutions for consumer, industrial, and automotive applications. Beyond his biometric responsibilities, Jeff manages Cypress' product security incident response team for the corporation. A highlight of Jeff's biometric experience was architecting the small touch fingerprint sensor that became Apple's TouchID solution and resulted in Apple's first acquisition of a publicly traded company. Jeff has a bachelor's degree in electrical engineering from Rensselaer Polytechnic Institute and holds 6 US patents, 4 of which are related to biometrics, with 8 additional biometric patents pending.

    Thomas Zerdick, LL.M. is Head of the Technology and Privacy Unit in the office of the European Data Protection Supervisor. Previously, Thomas was a Member of Cabinet for the European Commission's First Vice-President Timmermans, with responsibilities in particular for issues relating to the Rule of Law and Fundamental Rights, including personal data protection. At the European Commission's Directorate-General (DG) for Justice and Consumers, Thomas was Acting Head of the Product Safety and Rapid Alert System unit and Deputy Head of the Data Protection unit. Thomas was a key member of the team that prepared and negotiated the European Commission's data protection reform proposals, i.e. the General Data Protection Regulation (GDPR) and the Police Data Protection Directive, between 2009 and 2016. He also held positions at the DG for Internal Market and the DG for Enlargement, including as an EU legal expert to the United Nations Good Offices Mission in Cyprus. Thomas studied Law at the University of Passau (Germany) and at the College of Europe in Bruges (Belgium). He was a German Rechtsanwalt (attorney) specialising in European Union law, IT law and personal data protection law, and Director of the German Bar Association's Brussels office. He publishes commentaries, books and articles on European Union law.

    Catherine Jasserand is a postdoctoral researcher at KU Leuven (Belgium), where she is a Marie Curie fellow. Since 2014, she has been researching the intersection between privacy/data protection and biometrics. She obtained her PhD degree at the University of Groningen (the Netherlands), an LL.M in new technology law at the University of California, Berkeley, and a Master's degree in European law at the University of Paris 1- Pantheon Sorbonne. She is qualified as a lawyer in France and in the United States (New York Bar). She has a particular interest in interdisciplinary research with computer scientists and engineers in the field of biometric recognition. She has written several conference papers, together with scientists, on facial recognition and speech. She has been invited to several technical conferences to present and discuss her research.

    Andreas Nautsch is with the Audio Security and Privacy research group at EURECOM in France. He received the doctorate from Technische Universitat Darmstadt in 2019, where he was with the biometrics research group within the German National Research Center for Applied Cybersecurity. He served as an expert delegate to ISO/IEC and as project editor of the biometric voice data interchange format standard (ISO/IEC 19794-13:2018). Andreas is member of the editorial board of the EURASIP Journal on Audio, Speech, and Music Processing, and a co-initiator and secretary/co-chair of the ISCA Special Interest Group on Security & Privacy in Speech Communication.

    Vishwath Mohan works on the Android Security team and has been leading biometrics security for Android over the last 2 years, transitioning Android biometrics to a more attacker-aware measurement framework and ensuring a flexible authentication model that allows less secure biometrics to be balanced with stricter constraints. He has spent time breaking various biometric modalities, and worries that voice synthesis is far too easy and only getting easier. In addition to biometrics, Vishwath's expertise also extends to systems and product security, securing AI/ML, ambient computing, and offensive security.


    Panel Session 2 (90 minutes): Thursday, October 1st, 12:45pm - 2:15pm Houston time (CDT)
    Government Track Panel discussion

    Title: Biometrics in Government: From Research, Requirements, to Transition

    Description: The 2020 International Joint Conference on Biometrics will host a panel discussion on the research & development and deployment of biometric capabilities for government applications. A panel of leading authorities from government, with diverse backgrounds spanning S&T, program management, and operational deployment, will be convened to share their perspectives with IJCB attendees. They will discuss important biometric research questions and capability gaps faced by their respective organizations in face recognition, fingerprint recognition, iris recognition, and other biometric modalities such as voice, gait, and anthropometrics. They will discuss how requirements should inform & drive research, how research could inform & drive requirements, and share their thoughts on how academia can team with industry and government to develop and transition biometrics technologies. Attendees will be given an opportunity to pose questions and participate in a question and answer period to conclude the panel discussion.

    Panelists:
    • Mr. Chip Dever, PM DoD Biometrics
    • MAJ Will Taylor, U.S. Army
    • Dr. Lars Ericson, IARPA
    • Dr. Richard Vorder Bruegge, FBI
    Moderators:
    • Dr. Michael King (Florida Institute of Technology)
    • Dr. Shuowen (Sean) Hu (CCDC Army Research Laboratory)
    Panel Discussion Topics:
    • What are the critical research questions/challenges that need to be addressed to operationalize biometrics for government applications?
    • What are the key biometrics research problems that your organization needs solved?
    • Should requirements/gaps/needs drive research or vice versa? How could basic research inform requirements and operations?
    • How can academia team with industry and government to more effectively develop and transition biometrics technologies?
    • What can researchers do to facilitate understanding and awareness of emerging technologies?
    • Q&A from audience

    Panelist Bios:
    Mr. Chip Dever had over 32 years of Army service, including operational assignments as a Brigade Commander, Division Artillery Commander, Division Effects Coordinator, Division Targeting Officer and Joint Staff Deputy Comptroller. While serving as the Division Artillery Commander, Chip led and commanded at the brigade level deployed in Iraq. As DIVARTY CDR, Chip led the division's targeting cell responsible for the development of targeting folders for high-value targets and the synchronization of division, corps and above-corps resources to interdict rocket, mortar and IED cells throughout the division's area of responsibility. In his work as a contractor, Chip spent more than 10 years in the Pentagon on the Army Staff working as the ARSTAF SME on Foreign Military Sales. Chip earned his BS from the University of Massachusetts, an M.S. from Lesley University and an M.S. from the United States Army War College. He has held a wide range of technical and professional industry certifications including Certified Information Security Manager (CISM), Lean Six Sigma Black Belt and Project Management Professional (PMP). Chip currently facilitates the Science & Technology program for PM DoD Biometrics.

    Major William Taylor has over 15 years of Army service, serving both operationally as an Infantryman and logistically as an Acquisition Officer. While in the Infantry, Will led and commanded at the platoon and company organizational levels as an officer and deployed three times to Iraq and Afghanistan. As an Acquisition Officer, Will has served in PEO Aviation and PM-Special Programs, supporting numerous efforts such as small and large unmanned systems, robotics and autonomous systems, biometrics, and data analytics.

    Dr. Richard W. Vorder Bruegge is a Senior Photographic Technologist with the Federal Bureau of Investigation (FBI), Science and Technology Branch, where he is responsible for overseeing science and technology developments in the imaging sciences. He received the B.S. degree in engineering, the M.S. degree in geological sciences, and the Ph.D. degree in geological sciences from Brown University in 1985, 1987, and 1991, respectively. He has been with the FBI since 1995, where he has performed forensic analysis of image and video evidence, testifying in state, federal, and international courts as an expert witness over 60 times. His research interests include the forensic analysis of image evidence, with a particular interest in face recognition. He was a Chair of the Scientific Working Group on Imaging Technology from 2000 to 2006 and was also the Chair of the Facial Identification Scientific Working Group. He is a fellow of the American Academy of Forensic Sciences, and he was named a Director of National Intelligence Science and Technology Fellow in 2010.

    Dr. Lars Ericson is a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence. His areas of interest include biometrics, computer vision, sensors, and nanotechnology. Dr. Ericson has over fifteen years of research, development, test, and evaluation experience in the application of advanced technologies to help solve national security, defense, and criminal justice problems. He currently serves as the Program Manager for JANUS, a program on face recognition, and ODIN, a program on biometric presentation attack detection. Prior to joining IARPA as a Program Manager, Dr. Ericson was with SAIC where he supported IARPA as a subject matter expert in the field of biometrics. In addition, he was formerly with ManTech International Corporation where he served as the Director of the National Institute of Justice Sensors, Surveillance, and Biometric Technologies Center of Excellence. In his early research career, Dr. Ericson pursued basic and applied research in the field of nanotechnology, exploring novel materials and sensors at Rice University and the Naval Research Laboratory. He has a Bachelor's degree in physics from Gustavus Adolphus College and a Doctoral degree in Applied Physics from Rice University.

    Presentation instructions

    IJCB 2020 will be held as a virtual event within the Whova conference platform this year. All accepted papers (Orals, Short Orals, Journal Session papers) will have a virtual room at their disposal, where presenters will have the opportunity to present their work and interact live with the audience. To accommodate presenters from different time zones, the Virtual Rooms Session will be held as the first session of each day. Papers accepted for Oral presentation will also be discussed within an Oral Q&A session.

    PRESENTATION MATERIAL For each accepted paper (Orals and Short Orals), please prepare the following:

    • A summary video of the paper of 8-10 minutes in length.
    • A short teaser video of the paper of 1-2 minutes in length.
    • A slide presentation (in PDF format) of the paper.
    Video requirements: The summary and teaser videos should be of high quality (e.g., 720p) and stored in AVI or MPEG4 format. The summary video should not exceed 10 minutes and should ideally not be shorter than 8 minutes. The teaser video must not be longer than 2 minutes. The videos will be released three days before the scheduled presentations and will be available on-demand to the participants of IJCB 2020.
    Slide requirements: The presentation slides should be in PDF format. The first slide should contain the paper title and the author list with associated affiliations, with the name of the presenting author highlighted. The slides will not be available in Whova, so please make sure they are available on the computer you use when logging into Whova. During your presentation you can share your personal copy of the slides.

    SUBMISSION PROCEDURE AND DEADLINES
    The presentation material described above will be collected through CMT. For each accepted paper, the summary video, teaser video, and presentation slides should be uploaded as supplementary material of the paper. The file size limit for all three supplementary files is 350 MB. The presentation material needs to be uploaded to CMT by *** September 18, 2020 ***.

    Program Overview

    Starting times are given in Houston local time (UTC-5).

    Tuesday through Thursday the program starts at 8am Houston time (UTC-5), which is 10pm in Japan (UTC+9), 9pm in China (UTC+8), 6:30pm in India (UTC+5:30), 3pm in Europe (UTC+2) and 6am in San Francisco (UTC-7).

    Monday, September 28 th 2020
    10:00-12:30 Tutorial 1
    12:30-15:00 Tutorial 2
    12:00-15:00 Doctoral Consortium

    Tuesday, September 29 th 2020
    8:00-9:00 Virtual Rooms Session 1
    9:00-9:10 IJCB 2020 Opening
    9:10-10:10 Keynote Talk 1 (Hany Farid)
    10:10-10:30 Break
    10:30-11:30 Keynote Talk 2 (Nasir Memon)
    11:30-12:30 Keynote Talk 3 (Tal Hassner)
    12:30-13:00 Lunch Break
    13:00-13:45 Oral Session 1
    13:45-14:30 Special Session on Previously Published Journal Papers

    Wednesday, September 30 th 2020
    8:00-9:00 Virtual Rooms Session 2
    9:00-9:50 Oral Session 2
    9:50-10:00 Break
    10:00-11:00 Keynote Talk 4 (Mark Morse)
    11:00-12:00 Keynote Talk 5 (Surprise Speaker)
    12:00-12:30 Lunch Break
    12:30-13:30 Panel 1 (Industry)

    Thursday, October 1st 2020
    8:00-9:00 Virtual Rooms Session 3
    9:00-9:45 Oral Session 3
    9:45-10:00 Break
    10:00-10:30 Awards and Announcements
    10:30-11:00 IEEE Biometrics Council 2019 Best Dissertation Award Talk
    11:00-12:00 NASA Talk
    12:00-12:45 Lunch Break
    12:45-14:15 Panel 2 (Government)
    14:15-14:25 Closing

    For the detailed program schedule, click here

    IJCB 2020 Virtual Event Description

    General
    IJCB 2020 will be held as a virtual event this year within the Whova conference platform. As with in-person events, the conference will feature Keynotes and Panels, as well as Oral and Virtual Rooms Sessions, during which authors of accepted papers will have the opportunity to present their work.

    Paper Presentations Formats
    Papers accepted for presentation at IJCB 2020 are grouped into Oral, Short Oral and Journal Session papers. Oral papers are the papers that were rated highest during the reviewing procedure (~11% acceptance rate this year). Oral papers will be presented during a Virtual Rooms Session and during a dedicated Oral Session. Short Oral papers are the equivalent of posters at traditional conferences and will be presented within a Virtual Rooms Session only. Journal Session papers are papers accepted for the Special Session on Previously Published Journal Papers and will also be presented as part of one of the Virtual Rooms Sessions and a dedicated special session. Authors of accepted papers are required to be present at the assigned Virtual Rooms, Oral and Journal Paper Sessions.

    Presentation Material Availability
    Short 1-2 minute teasers as well as full-length 8-10 minute videos of all accepted papers (i.e., Oral papers, Short Oral papers and Journal Session papers) will be released on the Whova conference platform 3 days before the scheduled presentation. Participants will thus have the opportunity to watch the teaser and summary videos on their own time, identify papers of interest, and then attend one of the live sessions during the conference, where they will be able to interact with the authors. Questions can be posted online for any paper at any time between the release of the presentation material and the scheduled presentation. The teaser and summary videos will remain available on demand to registered participants until the end of December 2020.

    Session Description

    Virtual Rooms Sessions
    Virtual Rooms Sessions are the equivalent of traditional poster sessions. During the Virtual Rooms Sessions, authors of all accepted papers (i.e., Orals, Short Orals, Journal Session papers) will be present in their virtual rooms, where they will be able to present their work and interact with the audience. Any questions posted online for a given paper will be forwarded to the authors one day before the presentation, so the presenters will be able to prepare. The 8-10 minute summary video will be at the disposal of the authors in their virtual room for presentation purposes. Authors can also share any other presentation material (slides, teasers, etc.) from their local computers. The virtual rooms will be managed by the presenters and will not have any other hosts (e.g., a session chair) present. IJCB attendees will be able to join virtual rooms of their choice. Demo presenters and industry sponsors will also have their own virtual rooms.

    Oral Sessions
    Authors of papers accepted for oral presentation will present their work at one of the assigned Oral Sessions, in addition to the Virtual Rooms Session. The full-length summary videos of the Oral papers will be available 3 days before the scheduled session, so conference participants will have the chance to watch these before the Oral Session. During the Oral Session, the 1-2 minute teaser videos will be played to the audience, followed by a short 3-4 minute Q&A for each paper. As with the Virtual Rooms Sessions, any questions posted online will be forwarded to the presenters one day before their Oral Session. Attendees and session chairs will also be able to ask questions. All Oral Sessions will be hosted by two session chairs who will introduce the authors, moderate the discussion and select questions for the presenters.

    Keynotes
    IJCB 2020 will feature several keynote talks by prominent researchers from our community. The keynote talks will be presented within dedicated keynote sessions, each hosted by a session chair. During the session, a 45-minute video of the talk will first be streamed to the audience, followed by a 15-minute live Q&A session. The videos of the keynote talks will NOT be made available before the scheduled presentation and will first be released at the scheduled keynote session. Videos of the keynote talks will remain available (subject to the consent of the speakers) within the Whova conference platform for on-demand viewing by all registered participants until the end of December 2020.

    Demonstration Description

    Demonstration Title: Ensemble of Deep Siamese Neural Network for Large Scale Kinship Verification in the Wild
    Demonstration Presenters
    Messaoud Bengherabi, Center for the Development of Advanced Technologies, Algeria; Naidji Rami and Kebbab Walid, The University of Algiers

    Demonstration Description:
    This demonstration is a proof of concept of kinship verification technology. Building on recent advances from the fourth large-scale kinship recognition competition, Recognizing Families in the Wild (RFIW2020), the demonstration uses optimized models that compare favorably with the best results obtained at RFIW2020 and perform well on new data unseen during the competition. The interactive demonstration will consist of a web application and a description of the technical approach.


    Demonstration Title: Dashcam Pay
    Demonstration Presenters: Sunpreet S. Arora, Visa Research

    Demonstration Description:
    This demonstration is a prototype of a system that enables in-vehicle payments using face and voice biometrics. A dashcam mounted in the vehicle captures face images and voice commands of passengers, which are then compared, in a privacy-preserving fashion, with biometric data enrolled on the users' mobile devices over a wireless interface. The demonstration will consist of a discussion of the concept, components, design and implementation, as well as a video of a working real-world prototype.


    Demonstration Title: User Initiated Face Recognition Flow that Holds No Data
    Demonstration Presenters: Nezare Chafni and Mosalam Abrahimi, Trueface

    Demonstration Description:
    This demonstration is a real-world product/application showing how modern cryptography, coupled with advanced vector compression and quantization techniques, can be used to embed an encrypted face template into various media, such as a QR code, credit card chip, or NFC tag. The interactive demonstration will consist of a demonstration page open to the public for experimentation and a description of the technical approach.