Abstract
In this work, we address two challenges that arise when popular text classification methods are applied to text moderation: the representation of multibyte characters and intentional word obfuscations. Specifically, we develop a multi-hot byte-level encoding scheme that greatly reduces the dimensionality of one-hot character-level encoding, which is inflated by the multiplicity of instance-scarce non-ASCII characters. In addition, we introduce a simple yet effective weighting approach for fusing n-gram features that empowers classical logistic regression; surprisingly, it greatly outperforms well-tuned representative neural networks. Continuing our effort toward text moderation, we analyze the current state-of-the-art (SOTA) algorithm, bidirectional encoder representations from transformers (BERT), which excels at context understanding but performs poorly on intentional word obfuscations. To remedy this drawback, we develop an enhanced variant that integrates byte- and character-level decomposition. Comprehensive experiments show that it advances SOTA performance on the largest abusive-language datasets. Our work offers a feasible and effective framework for tackling word obfuscations.
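The multi-hot byte-level scheme mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact encoder: it assumes each character is mapped to a fixed 256-dimensional vector with one position set per byte of its UTF-8 encoding, so that ASCII and multibyte characters share the same compact space instead of each distinct Unicode character receiving its own one-hot dimension.

```python
def multihot_byte_encode(text):
    """Encode each character as a 256-dim multi-hot vector over its UTF-8 bytes.

    A one-hot character-level encoding needs one dimension per distinct
    character in the vocabulary; here every character, ASCII or multibyte,
    maps into the same fixed 256-dim byte space (a sketch of the idea only).
    """
    vectors = []
    for ch in text:
        v = [0] * 256                # one slot per possible byte value
        for b in ch.encode("utf-8"):  # multibyte characters set several slots
            v[b] = 1
        vectors.append(v)
    return vectors

# ASCII 'a' activates a single byte position; the two-byte character 'é'
# activates two positions in the same 256-dim space.
enc = multihot_byte_encode("aé")
```

The dimensionality stays at 256 regardless of how many rare non-ASCII characters appear in the corpus, which is the reduction the abstract refers to.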
| Original language | English (US) |
|---|---|
| Pages (from-to) | 7014-7023 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 34 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 1 2023 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence
- Computer Networks and Communications
- Computer Science Applications
Keywords
- Abusive language detection
- hate speech
- multibyte characters
- text moderation
- word obfuscations