SelfAT-Fold: protein fold recognition based on residue-based and motif-based self-attention networks



Introduction

1. The similarities between the self-attention mechanism for natural language processing (a) and the self-attention mechanism based on structure motifs for protein fold recognition (b). The lines represent the attention weights; the darker the line, the higher the corresponding attention weight.
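Both panels rely on the same underlying operation: scaled dot-product self-attention, in which every token (a word in (a), a structure motif in (b)) attends to every other token and receives a pairwise weight. The NumPy sketch below illustrates this mechanism only; the learned query/key/value projections of a real attention layer are omitted for brevity, so it is not the paper's exact layer.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over token embeddings X of
    shape (n_tokens, d). Returns the attended representations and the
    attention-weight matrix, whose entries correspond to the lines
    drawn in the figure."""
    d = X.shape[1]
    # A learned layer would project X into separate Q, K, V matrices;
    # here X is used directly to keep the sketch minimal.
    scores = X @ X.T / np.sqrt(d)                  # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X, weights

# Tokens stand for word embeddings in (a) or motif features in (b).
tokens = np.random.rand(6, 16)
attended, attn = self_attention(tokens)
print(attn.shape)  # (6, 6): one attention weight per token pair
```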

2. The flowchart of the motif-based self-attention network (MSAN) for extracting fold-specific attention features: (a) the motif convolution layer, (b) the self-attention layer, and (c) the fully connected layer.
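As a rough sketch of this pipeline, the PyTorch module below wires a motif-sized 1D convolution into a self-attention layer followed by a fully connected layer. All sizes (in_channels=20, n_motifs=128, motif_len=7, n_heads=4, feat_dim=256) are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class MSAN(nn.Module):
    """Minimal sketch of the motif-based self-attention network:
    (a) motif convolution -> (b) self-attention -> (c) fully connected."""
    def __init__(self, in_channels=20, n_motifs=128, motif_len=7,
                 n_heads=4, feat_dim=256):
        super().__init__()
        # (a) Each convolution filter acts as a learnable structure-motif
        # detector scanned along the residue sequence.
        self.motif_conv = nn.Conv1d(in_channels, n_motifs, motif_len,
                                    padding=motif_len // 2)
        # (b) Self-attention captures pairwise associations among the
        # motif activations (the weights visualized in the figures).
        self.attn = nn.MultiheadAttention(n_motifs, n_heads,
                                          batch_first=True)
        # (c) A fully connected layer maps the pooled representation to
        # the fold-specific attention features.
        self.fc = nn.Linear(n_motifs, feat_dim)

    def forward(self, x):            # x: (batch, in_channels, seq_len)
        h = torch.relu(self.motif_conv(x))    # (batch, n_motifs, seq_len)
        h = h.transpose(1, 2)                 # (batch, seq_len, n_motifs)
        h, attn_weights = self.attn(h, h, h)  # attended motif activations
        feats = self.fc(h.mean(dim=1))        # pool over residue positions
        return feats, attn_weights

model = MSAN()
x = torch.randn(2, 20, 100)  # e.g. per-residue encodings of 2 proteins
feats, attn = model(x)
print(feats.shape, attn.shape)  # (2, 256) and (2, 100, 100)
```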

3. The visualization of associations among structure motifs. (a) shows the global associations among 128 structure motifs on the LE dataset. (b) and (c) show the attention weights of protein folds 2_59 and 2_28, respectively, together with the positions and associations of the most important structure motifs in the 3D structures and primary sequences. The locations of the structure motifs in the proteins were detected with the FIMO motif search tool in the MEME suite, and the protein structures were visualized with PyMOL.
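A global association map like panel (a) can be drawn directly from the learned attention matrix; the matplotlib sketch below uses random data in place of the real weights. The FIMO command in the comment is a typical invocation with placeholder file names, not the exact command used for the figure.

```python
import numpy as np
import matplotlib.pyplot as plt

# attn: a (128, 128) attention-weight matrix over structure motifs,
# e.g. averaged over the LE dataset as in panel (a). Random data
# stands in for the learned weights here.
attn = np.random.rand(128, 128)

fig, ax = plt.subplots(figsize=(6, 5))
im = ax.imshow(attn, cmap="viridis")
ax.set_xlabel("structure motif index")
ax.set_ylabel("structure motif index")
fig.colorbar(im, ax=ax, label="attention weight")
fig.savefig("motif_associations.png", dpi=300)

# Motif occurrences in individual proteins can then be located with
# FIMO from the MEME suite, e.g.:
#   fimo --oc fimo_out motifs.meme proteins.fasta
# and the highlighted motifs rendered onto the 3D structure in PyMOL.
```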