FHDA-2025

A Forensic Handwritten Document Analysis challenge dedicated to exploring state-of-the-art cross-modal authorship verification techniques.

  • 5 Teams Registered
  • 200 Document Pairs
  • 64.5% Best Accuracy

πŸ† Challenge Overview #

Mission Statement

The Forensic Handwritten Document Analysis (FHDA) Challenge invites participants to tackle the binary classification task of determining whether a given pair of handwritten documents was authored by the same individual.

This competition introduces a unique cross-modal comparison between traditional pen-and-paper documents (scanned) and documents written directly on digital devices such as tablets and stylus-enabled graphics tablets.
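To make the pair-level decision concrete, here is a minimal Python sketch of one generic way to frame such a verifier: embed both document images with a shared encoder and threshold the similarity of the embeddings. The choice of ResNet-18, the preprocessing, and the 0.8 threshold are illustrative assumptions and do not describe the pipeline of any participating team.

```python
# Illustrative sketch only: a generic embed-and-compare verifier.
# The encoder (ImageNet-pretrained ResNet-18), the preprocessing, and the
# decision threshold are assumptions, not the method of any FHDA-2025 team.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # keep the 512-d feature vector, drop the classifier
encoder.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # handwriting is essentially monochrome
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def same_author(path_a: str, path_b: str, threshold: float = 0.8) -> bool:
    """Return True if the two document images are judged to share an author."""
    a = preprocess(Image.open(path_a).convert("RGB")).unsqueeze(0)
    b = preprocess(Image.open(path_b).convert("RGB")).unsqueeze(0)
    similarity = F.cosine_similarity(encoder(a), encoder(b)).item()
    return similarity >= threshold
```

Participating teams of course used their own architectures; the sketch only fixes the input-output contract of the task.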

Advanced AI Challenge

Develop state-of-the-art machine learning solutions for forensic document analysis using cutting-edge deep learning architectures.

Cross-Modal Analysis

Pioneer new techniques for comparing handwriting across different writing modalities and digital platforms.

Global Recognition

Gain international recognition at IEEE MetroXRAINE 2025, the IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering.

📅 Challenge Timeline

Important: Challenge Completed

The FHDA Challenge 2025 has successfully concluded with outstanding participation from research teams worldwide. Below you can find the complete timeline and final results.

| Date | Milestone | Description | Status |
|------|-----------|-------------|--------|
| March 31 - May 16, 2025 | Registration Period | Team registration and formation | ✓ |
| April 14, 2025 | Training Set Release | Dataset availability for participants | ✓ |
| June 16, 2025 | Test Set Release | Final evaluation dataset | ✓ |
| June 20, 2025 | Submission Deadline | Final results submission | ✓ |
| June 25, 2025 | Results Announcement | Final rankings and ground truth release | ✓ |
| July 20, 2025 | Paper Submission | Winner paper submission deadline | ✓ |

πŸ† Final Rankings & Results #

Challenge Results - June 25, 2025

The FHDA Challenge 2025 has concluded! Below are the final rankings based on the classification accuracy achieved by each participating team. We congratulate all teams for their outstanding contributions to this challenge.

| Rank | Team | Accuracy | Correct | Total | Submissions | Ground Truth |
|------|------|----------|---------|-------|-------------|--------------|
| 🥇 1st | JNU_DNSLAB 🏆 CHAMPION | 64.5% | 129 | 200 | 200 | 200 |
| 🥈 2nd | SCLAB@CNU | 62.0% | 124 | 200 | 200 | 200 |
| 🥉 3rd | TJZ | 55.0% | 110 | 200 | 200 | 200 |

Ground Truth Data Available

The complete ground truth dataset used for evaluation is now publicly available for research purposes:

Download Ground Truth Dataset

This dataset contains the reference answers used to evaluate all team submissions during the FHDA Challenge 2025.

🎯 Challenge Objective

Primary Goal

Develop innovative deep learning architectures and novel algorithms to achieve superior accuracy in cross-modal authorship verification. The challenge focuses on the binary classification task of determining whether handwritten document pairs were authored by the same individual.

Evaluation Methodology

Teams are evaluated with a weighted accuracy score that rewards both correctness and confidence calibration: the final score combines the number of correct classifications with the confidence assigned to each prediction. The criteria below summarize the evaluation, and an illustrative sketch of a confidence-weighted metric follows the list.

  • Binary classification accuracy
  • Confidence-weighted scoring
  • Cross-modal generalization
  • Robustness across writing styles
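The official scoring formula is not published in this overview, so the sketch below is only a hedged illustration consistent with the description above: it credits each correct prediction in proportion to its stated confidence, and it also shows plain accuracy, the figure reported in the rankings table. The function names and the weighting scheme are assumptions.

```python
# Hedged illustration of confidence-weighted scoring; NOT the official
# FHDA-2025 metric. A correct prediction earns credit equal to its confidence.
from typing import Sequence

def confidence_weighted_score(predictions: Sequence[int],
                              confidences: Sequence[float],
                              ground_truth: Sequence[int]) -> float:
    """Average per-pair credit: the stated confidence if correct, zero otherwise."""
    assert len(predictions) == len(confidences) == len(ground_truth)
    credit = sum(c for p, c, y in zip(predictions, confidences, ground_truth) if p == y)
    return credit / len(ground_truth)

def accuracy(predictions: Sequence[int], ground_truth: Sequence[int]) -> float:
    """Plain accuracy, the figure reported in the rankings table."""
    return sum(p == y for p, y in zip(predictions, ground_truth)) / len(ground_truth)
```

As a check against the rankings, 129 correct answers out of 200 pairs correspond to the winning 64.5% plain accuracy.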

Dataset Characteristics

The challenge features a novel cross-modal dataset specifically designed to test the limits of current authorship verification technologies:

  • Traditional pen-and-paper documents
  • Digital tablet-written samples
  • Diverse handwriting styles and languages
  • Various writing instruments and conditions

📊 Dataset Information

Novel Cross-Modal Dataset

We present a dataset specifically curated for forensic handwritten document analysis, featuring unprecedented cross-modal document pairs.

Traditional Documents

High-resolution scanned documents written with various instruments on different paper types, representing real-world forensic scenarios.

Digital Documents

Documents created directly on digital devices, including tablets, graphics tablets, and other stylus-enabled devices.

Diverse Authors

Samples from multiple authors with varying handwriting characteristics, ages, and cultural backgrounds.

Technical Specifications

200 carefully curated document pair comparisons with comprehensive metadata and ground truth annotations.

👥 Participating Teams

Global Participation

The FHDA Challenge 2025 attracted talented research teams from leading institutions worldwide, representing diverse approaches to forensic document analysis.

| Team | Institution | Country | Lead Researcher | Team Members |
|------|-------------|---------|-----------------|--------------|
| JNU_DNSLAB 🏆 | Chonnam National University | 🇰🇷 South Korea | Seyeon Jeong | Seyeon Jeong, Myeonghoon Lee, Hyeonsu Jung, Kyungbaek Kim |
| SCLAB@CNU 🥈 | Chonnam National University | 🇰🇷 South Korea | Hyung-Jeong Yang | Anjitha Divakaran, Dhivyaa SP, Hyung-Jeong Yang, Myungeun Lee |
| TJZ 🥉 | Keele University | 🇬🇧 United Kingdom | Tito Osadebey | Tito Osadebey, Job Kimeli, Zofie Dvorakova, Nadia Kanwal, Sangeeta Sangeeta |
| Richard | Zayed University, Abu Dhabi | 🇦🇪 United Arab Emirates | Richard Ikuesan | Richard Ikuesan, Hessa Almazrouei, Thani Al-Riyami Alremeithi, Rahaf Alnuaimi, Khalifa |
| RichardTheGoodGuy | Zayed University, Abu Dhabi | 🇦🇪 United Arab Emirates | Richard Ikuesan | Richard Ikuesan, Noura Alzaabi, Maryam Almarzooqi, Shaikha Alzaabi, Noor Aldahmani |

πŸ“ Registration Information

Registration Closed

The registration period for FHDA Challenge 2025 has concluded. The challenge has been completed successfully with participation from research teams worldwide.

Registration Requirements (Completed)

Teams were required to register by May 16, 2025, with the following information:

  • Team name and institutional affiliation
  • Complete member details (names, emails, roles)
  • Research focus and methodology approach
  • Commitment to ethical research practices

Submission Process (Completed)

Final submissions included:

  • Technical documentation and methodology
  • Complete results in specified format
  • Confidence scores for each prediction (see the illustrative sketch after this list)
  • Source code and reproducibility materials
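The official submission format is not reproduced here. Purely as a hypothetical illustration of pairing each prediction with a confidence score, a minimal results writer might look like the following; the pair_id, prediction, and confidence column names are assumptions.

```python
# Hypothetical results writer. The CSV layout and column names are
# illustrative assumptions, not the official FHDA-2025 submission format.
import csv

def write_results(path: str, rows: list[dict]) -> None:
    """Write one row per document pair: a binary prediction plus a confidence."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["pair_id", "prediction", "confidence"])
        writer.writeheader()
        writer.writerows(rows)

write_results("results.csv", [
    {"pair_id": 1, "prediction": 1, "confidence": 0.92},
    {"pair_id": 2, "prediction": 0, "confidence": 0.61},
])
```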

Future Challenges

Stay tuned for future FHDA challenges and related forensic document analysis competitions. Follow our repository and institutional websites for announcements.

🌟 Challenge Organizers

Main Contact

Mirko Casu

Ph.D. Student

University of Catania, Italy

IEEE Student Member

For all inquiries regarding the challenge:

Email: mirko.casu@phd.unict.it

  • Mirko Casu, Ph.D. Student, University of Catania, Italy
  • Luca Guarnera, Research Fellow, University of Catania, Italy
  • Sebastiano Battiato, Full Professor, University of Catania, Italy

❓ Frequently Asked Questions

What was the main objective?

The challenge focused on developing innovative AI solutions for cross-modal authorship verification in handwritten documents, bridging traditional forensic analysis with modern digital writing technologies.

Who could participate?

The challenge was open to researchers, developers, and students from academic institutions and industry with expertise in machine learning, computer vision, and forensic document analysis.

How were submissions evaluated?

Evaluation used a weighted scoring system combining classification accuracy with confidence calibration, rewarding both correctness and appropriate uncertainty estimation.

Where can I access the dataset?

Access to the complete dataset was granted exclusively to registered teams who had successfully completed the official registration process.

What were the main achievements?

Team JNU_DNSLAB achieved the highest accuracy of 64.5%, establishing a new benchmark for cross-modal forensic handwriting analysis and advancing the state of the art.

How can I learn about future challenges?

Follow our GitHub repository, institutional websites, and the IEEE MetroXRAINE conference series for announcements about future forensic document analysis challenges.

📚 Publication Opportunities

Research Impact & Publications

  • Winner Publication: The champion team has been invited to submit a comprehensive paper for IEEE MetroXRAINE 2025 proceedings
  • Conference Presentation: Top teams will present their innovative approaches at the conference
  • Journal Submission: A detailed challenge overview and analysis will be submitted to a top-tier journal

πŸ›οΈ Institutional Partners