RAPT: Pre-training of Time-Aware Transformer for Learning Robust Healthcare Representation

H. Ren, J. Wang, X. W. Zhao, and N. Wu

in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD'21)

With the development of electronic health records (EHRs), prenatal care examination records have become available for building automatic prediction and diagnosis approaches with machine learning. In this paper, we study how to effectively learn representations of EHR data that can be applied to various downstream tasks. Although several methods have been proposed in this direction, they usually adapt classic sequential models to solve one specific diagnosis task or to address a single EHR data issue. This makes it difficult to reuse them for the early diagnosis of pregnancy complications or to provide a general solution to the series of health problems caused by such complications. We propose a novel model, RAPT, which stands for RepresentAtion by Pre-training time-aware Transformer. To bring pre-training to EHR data, we design an architecture suitable for both modeling EHR data and pre-training, namely a time-aware Transformer. To handle the characteristic difficulties of EHR data, we carefully devise three pre-training tasks, namely similarity prediction, masked prediction and reasonability check, which address data insufficiency, data incompleteness and short sequences, respectively. In this way, our representations capture the key characteristics of EHR data. Extensive experimental results on four downstream tasks demonstrate the effectiveness of the proposed approach. We also introduce sensitivity analysis to interpret the model and design an interface that presents results and interpretations to doctors. Finally, we implement a diagnosis system for pregnancy complications based on our pre-training model; doctors and pregnant women can benefit from it in the early diagnosis of pregnancy complications.
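The paper's exact pre-training objectives are not reproduced here, but the general idea behind the masked-prediction task can be illustrated in a few lines: randomly corrupt entries of a sequence of visit feature vectors and keep a mask telling the model which entries to reconstruct. This is only a minimal NumPy sketch of that generic idea; `mask_records`, `mask_rate`, and `mask_value` are illustrative names, not identifiers from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_records(visits, mask_rate=0.15, mask_value=0.0):
    """Randomly mask feature entries in a sequence of visit vectors.

    Returns the corrupted input fed to the encoder and a boolean mask
    marking the entries the model is trained to reconstruct.
    """
    visits = np.asarray(visits, dtype=float)
    mask = rng.random(visits.shape) < mask_rate  # True = masked out
    corrupted = visits.copy()
    corrupted[mask] = mask_value
    return corrupted, mask

# toy example: a record of 4 prenatal visits with 6 features each
visits = rng.normal(size=(4, 6))
corrupted, mask = mask_records(visits, mask_rate=0.3)
```

A reconstruction loss would then be computed only on the masked positions, so the encoder learns to infer missing examination values from the surrounding visits, which is how masked pre-training in general compensates for incomplete records.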

 

Paper: RAPT-KDD21.pdf (PDF, 1.4 MB)
Slides: RAPT-KDD21.pdf (PDF, 1.1 MB)

 

The code for this paper is released on GitHub.

@inproceedings{ren2021rapt,
    title={RAPT: Pre-training of Time-Aware Transformer for Learning Robust Healthcare Representation},
    author={Ren, Houxing and Wang, Jingyuan and Zhao, Wayne Xin and Wu, Ning},
    booktitle={Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery \& Data Mining},
    pages={3503--3511},
    year={2021}
}