
Open Access 25.04.2024 | Review

An overview of sentence ordering task

Authors: Yunmei Shi, Haiying Zhang, Ning Li, Teng Yang

Published in: International Journal of Data Science and Analytics


Abstract

The sentence ordering task aims to organize complex, unordered sentences into readable text. This improves accuracy, validity, and reliability in various natural language processing domains, including automatic text generation, text summarization, and machine translation. We begin by analyzing and summarizing sentence ordering algorithms from two perspectives: the input data format and the implementation technique. Based on the input data format, algorithms are classified into pointwise, pairwise, and listwise approaches, and the advantages, disadvantages, and representative algorithmic features of each are discussed. Based on the implementation technology, we classify them into sentence ordering algorithms based on learning to rank and on deep learning, and explain the core ideas, typical algorithms, and research progress of these two categories of methods. We summarize the datasets and evaluation metrics commonly used in sentence ordering tasks. Additionally, we analyze the problems and challenges of sentence ordering tasks and look forward to the future directions of this field.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

With the rapid development of the Internet and artificial intelligence (AI), text generation has gained significant attention in the field of natural language processing (NLP). One important task in text generation is sentence ordering, which involves organizing sentences into coherent and readable text while ensuring coherence, text fluency, topical relevance, chronological order and the integrity of causal relationships [1].
The development of sentence ordering in the field of NLP has been influenced and driven by several factors. With the advent of the information age, the rapid development of the Internet has generated and stored massive amounts of textual data, creating enormous information-processing challenges. In this context, organizing and understanding text content in an automated manner has become particularly important. Meanwhile, the rapid advancement of AI technology, especially in deep learning and natural language processing, provides stronger technical support for sentence ordering, enabling models to understand and generate natural language sequences more accurately. In addition, with the development of dialogue systems and intelligent assistants, the demand for models that can understand and generate natural language is increasing, and sentence ordering technology plays a key role in improving the performance and user experience of dialogue systems. Therefore, the development of sentence ordering is crucial to the field of NLP.
In the late twentieth century, the sentence ordering task first appeared in single-document summarization [2], and it has been widely researched since then. In 1999, McKeown et al. [3] applied sentence ordering algorithms to multi-document summarization. Unlike single-document sentence ordering, multi-document sentence ordering must extract and identify a set of semantically similar sentences from different sources and, when ordering them, judge their temporal coherence and thematic relevance [4], which increases the difficulty of the task. The approaches most popular in this period were therefore unsupervised sentence ordering algorithms, which manually identify the features of each corpus and fit an ordering formula to achieve textual coherence; they are mainly classified into ordering algorithms based on probabilistic models [5–7] and entity grids [8, 9].
In the early twenty-first century, with the rise of machine learning (ML), many challenges such as data fusion and manual parameter tuning were addressed. As a result, researchers introduced the concept of learning to rank (LTR) [10], significantly improving the efficiency of sentence ordering tasks. Representative algorithms include SortNet [11], RankBoost [12], and RankNet [13].
With the development of deep learning (DL), the need to handle large-scale datasets has emerged. End-to-end approaches have gained significant application in the field of sentence ordering. This includes sentence ordering algorithms based on convolutional neural networks (CNN) [14], recurrent neural networks (RNN) [15], attention mechanisms [16], and bidirectional encoder representation from transformers (BERT) [17]. These approaches have contributed to the advancement of sentence ordering tasks.
Currently, researchers have proposed numerous concise and efficient algorithms to address sentence ordering problems. However, some researchers have only provided summaries of ordering learning algorithms. For example, Xiong et al. [18] introduced the research progress of the pairwise method in the field of ordering. In summary, there is currently no comprehensive literature review specifically focused on sentence ordering algorithms in NLP.
To achieve this, we conducted a comprehensive search for relevant papers using various sources, including Google Scholar, journal websites, Baidu Scholar, CNKI, and other professional websites. Additionally, we reviewed the references of the relevant papers to identify further sources. When searching for papers on these websites, we used the query keyword 'sentence ordering' for filtering. Each paper was selected on the basis of whether it makes constructive suggestions on, or improvements to, issues such as semantic understanding, long-range dependency, context dependency, and algorithmic efficiency.
The content of this paper is organized as follows: Section 1 presents the research background and significance of the sentence ordering task, briefly reviews the state of the art of related research, and introduces the structure of the paper. Section 2 classifies sentence ordering algorithms based on input data formats and implementation technologies and summarizes typical algorithms and research progress. Section 3 summarizes the datasets used for sentence ordering tasks. Section 4 outlines the evaluation metrics. Section 5 provides an outlook and summary of the sentence ordering task, and Section 6 concludes the paper. Figure 1 illustrates the framework of this paper.

2 Sentence ordering algorithms

We analyze and summarize sentence ordering algorithms from two perspectives, namely input data formats and implementation techniques. According to the input data format, they are divided into pointwise, pairwise, and listwise approaches; according to the implementation technique, they are divided into sentence ordering algorithms based on LTR and on DL.

2.1 Methods categorization based on the input data formats

Sentence ordering algorithms can be categorized into three types based on the different input data methods: pointwise, pairwise, and listwise.

2.1.1 Pointwise

The pointwise approach converts the sentences in a document into feature vectors and then utilizes ordering algorithms to score the sentences. Based on the scores, the sentences are output in the order of their rankings. Existing pointwise algorithms can be classified into three categories: classification-based, regression-based, and ordinal regression-based. The core ideas and representative algorithms are summarized in Table 1.
Table 1
The differences between the three pointwise approaches, analyzed in terms of core ideas, inputs and outputs, and representative algorithms

Classification-based
  • Core idea: Transforms the ordering problem into a classification problem: a classification algorithm outputs the probabilities of sentences in the document being at different positions, and these probabilities are then weighted to perform the ordering.
  • Input: Discrete values; Output: Unordered values
  • Representative algorithm: McRank [19]

Regression-based
  • Core idea: Transforms the ordering problem into a regression problem: an ordering function is learned by minimizing the loss function on the training set.
  • Input: Continuous values; Output: Real values
  • Representative algorithm: Subset Ranking [20]

Ordinal regression-based
  • Core idea: A scoring function is obtained by learning the true sequence relationships between sentences; its output is divided into intervals to determine the positional relationships between sentences, resulting in an ordered list.
  • Input: Ordered values; Output: Ordered values
  • Representative algorithm: PRank [21]
The pointwise approach only considers the absolute relevance of individual sentences and treats each sentence separately. Since the ordering function is learned through machine learning, the pointwise approach can only find better parameter combinations through repeated trials and cannot directly compare or judge the relationships and relative order between sentences. Additionally, errors in earlier ordering results can significantly affect the accuracy of subsequent ordering. To address these issues, researchers have proposed the pairwise and listwise approaches, which focus on the relative order between pairs of sentences and the overall order of the sentence list, respectively.
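To make the pointwise idea concrete, here is a minimal, self-contained sketch in Python: each sentence is mapped to a feature vector and scored independently by a regressor, and the predicted scores are sorted to produce an order. The toy features, example sentences, and the choice of a gradient boosting regressor are illustrative assumptions, not the setup of any specific algorithm in Table 1.

```python
# Minimal pointwise sketch: score each sentence independently, then sort by score.
# The feature extractor and labels below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def sentence_features(sentence):
    # Toy features: character length, word count, whether it starts with a connective.
    words = sentence.split()
    starts_with_connective = float(words[0].lower() in {"however", "therefore", "finally"})
    return np.array([len(sentence), len(words), starts_with_connective])

# Hypothetical training data: sentences labelled with their (normalized) gold position.
train_sentences = ["First we define the task.", "However, prior work ignored order.",
                   "Finally, we report results."]
train_positions = np.array([0.0, 0.5, 1.0])   # 0 = first, 1 = last

model = GradientBoostingRegressor().fit(
    np.stack([sentence_features(s) for s in train_sentences]), train_positions)

# Ordering an unseen, shuffled paragraph: score each sentence and sort by score.
shuffled = ["Finally, we conclude.", "We introduce the problem.", "However, challenges remain."]
scores = model.predict(np.stack([sentence_features(s) for s in shuffled]))
ordered = [s for _, s in sorted(zip(scores, shuffled))]
print(ordered)
```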

2.1.2 Pairwise

The pairwise approach [12, 13, 17, 22] combines any two sentences into pairs and transforms the ordering problem into a binary classification problem. In other words, a binary classifier is trained to determine which of the two sentences should precede the other, and these decisions determine their order. The core ideas and representative algorithms are summarized in Table 2.
Table 2
Representative algorithms for the pairwise approach and their respective algorithmic characteristics

Ranking SVM [22, 23] | Transforms the ordering problem into a binary classification problem by constructing ordered sample pairs from the training set using an SVM
PKRank [24] | Introduces a pairwise kernel to extend the linear SVM to a nonlinear SVM, making the algorithm more scalable
RankNet [13] | Builds and applies neural network-based models using the principle of gradient descent
LambdaRank [25] | Decomposes the derivative of the RankNet loss function and improves the parameter Lambda by introducing the normalized discounted cumulative gain (NDCG) evaluation metric
RankBoost [12] | Uses boosting to combine a number of weak ranking models into the final ranking model
CFBRS [26] | Solves the ordering problem by learning features of query-document pairs and by enumeration of integer compositions
A Language+Vision Unary Model [27] | Embeds text using a GRU and embeds images using a CNN
ATTOrderNet+TwoLoss [28] | Enhances the pointer network decoder with two pairwise order prediction modules (FUTURE and HISTORY)
BERSON [17] | Develops a new relational pointer decoder that integrates relative order information into the pointer network by means of a deep relational module
B-TSort [29] | Determines sentence order using topological sorting based on depth-first search
Seq2seq+Pairwise [30] | Generates words by predicting the order of hidden vectors, updating the generated words by combining the current word representation with previously constructed hidden vectors
The pairwise approach places more emphasis on the relative order between two sentences. It is suitable for sentence ordering tasks with a relatively small amount of data and clear pairwise features. It only requires binary classification for all sentence pairs, making it highly interpretable and having a smaller model size compared to listwise. However, it has the following three limitations: First, it only considers the order between two sentences, which may not be suitable if the relationships between all sentences are independent of each other. Second, the efficiency of ordering varies significantly for different query values, and the ordering results may be biased toward queries with more sentence pairs. Third, any two sentences can form a sentence pair for relevance calculation, disregarding the original sequential relationships and resulting in a loss function defined based on individual sentence pairs, lacking global context.
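The following minimal sketch illustrates the pairwise idea under simple assumptions: a binary classifier is trained on hypothetical sentence-pair features to predict which sentence of a pair should come first, and a full order is then recovered by counting pairwise "wins". Published pairwise methods use far richer features and more principled aggregation (for example, topological sorting in B-TSort [29]); this is only a toy illustration.

```python
# Minimal pairwise sketch: classify the order of every sentence pair, then
# aggregate pairwise decisions into a full order by counting "wins".
# Feature construction and training pairs are hypothetical placeholders.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(a, b):
    # Toy pair features: length difference and a crude connective cue on the second sentence.
    cue = float(b.split()[0].lower() in {"however", "therefore", "finally"})
    return np.array([len(a) - len(b), cue])

# Hypothetical training pairs labelled 1 if the first sentence should precede the second.
pairs = [("We define the task.", "However, it is hard."),
         ("However, it is hard.", "We define the task."),
         ("Methods are compared.", "Finally, we conclude."),
         ("Finally, we conclude.", "Methods are compared.")]
labels = np.array([1, 0, 1, 0])
clf = LogisticRegression().fit(np.stack([pair_features(a, b) for a, b in pairs]), labels)

def order_by_wins(sentences):
    wins = {s: 0 for s in sentences}
    for a, b in itertools.permutations(sentences, 2):
        # Probability that a should precede b; count a "win" for a if it exceeds 0.5.
        if clf.predict_proba(pair_features(a, b).reshape(1, -1))[0, 1] > 0.5:
            wins[a] += 1
    return sorted(sentences, key=lambda s: -wins[s])

print(order_by_wins(["Finally, we conclude.", "We define the task.", "However, it is hard."]))
```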

2.1.3 Listwise

The listwise approach considers all the sentences to be ordered as a single instance and trains a scoring function based on multiple instances to obtain an optimal solution. Depending on the optimization method, listwise algorithms can be categorized into two types: algorithms that directly optimize the loss function and algorithms that directly optimize the evaluation metrics.
2.1.3.1 Algorithms optimizing the loss function
These algorithms construct loss functions to optimize the sentence ordering list. They design different loss functions for different types of ordering tasks to directly reflect the inconsistencies between the predicted sequence and the true sequence in the sentence ordering algorithm. Representative algorithms can be found in Table 3.
Table 3
Representative algorithms optimizing the loss function and their respective algorithmic characteristics

ListNet [31] | Uses the Kullback-Leibler (KL) divergence to construct the loss function, measuring the difference between the predicted document sequence and the true document sequence
ListMLE [32] | Builds upon the ListNet algorithm and defines the loss function using the log-likelihood
StructRank [33] | Inspired by the ListMLE algorithm, proposes an algorithm based on a cumulative distribution network (CDN), where the negative log-likelihood obtained from the CDN is used as the ordering loss function
BoltzRank [34] | Determines the conditional probability distribution over all query documents using the Boltzmann distribution, and then defines the loss function following the ListNet or ListMLE algorithm
RankTxNet + ListMLE [35] | Uses BERT for pretraining, incorporates transformer networks into the sentence and paragraph encoders, and optimizes the loss function with the ListMLE algorithm
The algorithm that directly optimizes the loss function is similar to the handling of the loss function in pointwise and pairwise approaches. The idea behind all these approaches is to minimize the value of the loss function to improve the accuracy of sentence ordering. However, the essence of the sentence ordering task is to predict the overall order of the list of sentences to be ordered. Therefore, algorithms based on listwise direct optimization of the loss function demonstrate significantly better ordering performance compared to the other two types of algorithms.
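As a concrete illustration of a listwise loss, the sketch below computes a ListMLE-style negative log-likelihood of the gold order under a Plackett-Luce model of predicted sentence scores. The scores and gold orders are made-up examples, and the published ListMLE formulation may differ in detail.

```python
# Minimal sketch of a ListMLE-style listwise loss: the negative log-likelihood
# of the gold order under a Plackett-Luce model over predicted sentence scores.
import numpy as np

def listmle_loss(scores, gold_order):
    # scores[i] is the model's score for sentence i; gold_order lists sentence
    # indices in their true order. Lower loss means scores agree with the gold order.
    loss = 0.0
    remaining = list(gold_order)
    for _ in range(len(gold_order)):
        s = scores[remaining]                        # scores of not-yet-placed sentences
        log_prob = s[0] - np.log(np.sum(np.exp(s)))  # log softmax prob of the gold next sentence
        loss -= log_prob
        remaining.pop(0)                             # place it and move on
    return loss

scores = np.array([2.0, 0.5, 1.0])        # hypothetical scores for sentences 0, 1, 2
print(listmle_loss(scores, [0, 2, 1]))    # smaller loss: scores mostly agree with the gold order
print(listmle_loss(scores, [1, 2, 0]))    # larger loss: scores disagree with the gold order
```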
2.1.3.2 Algorithms for optimizing evaluation metrics
This type of algorithm first defines the optimization goal of a specific evaluation metric and then selects an appropriate optimization algorithm to learn and construct the optimal ranking model, resulting in the best sentence sequence. Since evaluation metrics are often non-continuous and non-differentiable, they need to be transformed into a continuous and differentiable form. The following categorizes the algorithms that directly optimize evaluation metrics into two types:
  • One type of algorithm minimizes the upper bound of the fundamental loss function. For example, Yue et al. [36] proposed the SVMmap algorithm based on the support vector machine (SVM) framework to optimize the mean average precision (MAP) metric. It minimizes a hinge-based loss function, which serves as an upper bound of the basic loss function of MAP. Xu et al. [37] introduced the AdaRank algorithm based on the AdaBoost framework, minimizing an exponential loss function that bounds the basic loss function. Xu et al. [38] proposed the PermuRank algorithm, which uses a greedy algorithm to minimize the upper bound of the loss function.
    This type of algorithm aims to minimize the upper bound of ordering errors in the evaluation metric, allowing it to find the global optimal solution and achieve the best ordering results. However, the feasibility of these algorithms may decrease as the data volume increases.
  • Another type of algorithm aims to optimize the probability approximation of the sentence ordering evaluation metric. For example, Taylor et al. [39] proposed the SoftRank algorithm, which assumes that the sentence sequence is probabilistic rather than deterministic. It introduces an approximate normalized discounted cumulative gain (NDCG) called Soft NDCG and optimizes Soft NDCG to obtain the sentence ordering sequence.
    These algorithms aim to make the ranking approximation of the evaluation metric as close as possible to being continuous and differentiable. Since they optimize an approximate version of a specific evaluation metric, the resulting ranking list may have some level of approximation.
Compared to pointwise and pairwise approaches, listwise methods not only consider the absolute positional relationship of individual sentences and the relative positional relationship between sentence pairs but also take into account the overall positional and logical relationships among all the sentences in the list. As a result, listwise algorithms tend to achieve higher accuracy in sentence ranking. However, the complexity of the model and the training time greatly depend on the definition of the loss function or optimization objective for the given list of sentences, which can sometimes lead to issues of high complexity.

2.2 Methods categorization based on the implementation technologies

The sentence ordering algorithm can be categorized into two types based on the development of existing technologies: LTR-based algorithms and DL-based algorithms.

2.2.1 LTR-based sentence ordering algorithms

2.2.1.1 Concepts in LTR
During the 1990s, linguists conducted a series of studies on sentence ordering. They applied traditional ordering algorithms based on rules, probabilistic models [5–7], and entity grids [8, 9] to model sentence ordering. These methods can achieve sentence ordering at both the syntactic and semantic levels and can adjust parameters based on experience to obtain optimal results. However, the rapid growth in the number of sentence features that need to be measured has resulted in an excessive number of parameters. This has increased the cost of manual tuning, dramatically increased the workload, decreased the usability of tuning, and raised the risk of overfitting.
In such cases, researchers proposed learning to rank [10], which applies machine learning techniques used for solving classification and regression problems to sentence ordering. The general framework of LTR is illustrated in Fig. 2.
A standard sentence ordering training set consists of n queries \(q_{i}\ (i=1,2,\ldots ,n)\), where each query \(q_{i}\) is associated with a set of document feature vectors \(D_{i}=\{d_{i,1},d_{i,2},\ldots ,d_{i,m_{i}}\}\). Unlike the test set, the training set provides the ground-truth ranking list. The learning system then trains on the training set using pointwise, pairwise or listwise approaches to obtain a ranking model \(f(q,d)\). Finally, the ranking system applies the learned model \(f(q,d)\) to a new query \(q_{n+1}\) for prediction and outputs the final ranked list \(D_{n+1}=\{d_{n+1,1},d_{n+1,2},\ldots ,d_{n+1,m_{n+1}}\}\).
The emergence of LTR has addressed the challenges of data integration, automatic parameter tuning, and overfitting, significantly improving the efficiency of sentence ordering tasks. The core of LTR lies in the ranking model, which can be categorized based on the different techniques used to construct the ranking model. Figure  3 presents a schematic view of the categorization methods.
2.2.1.2 SVM-based sentence ordering algorithms
SVM, through supervised learning on a given training set, achieves binary classification of the dataset. The idea behind this category of ranking algorithms is to construct an optimal hyperplane that completely separates the data points with correct sentence ordering from those with incorrect sentence ordering, maximizing the separation margin. These algorithms can be further divided into linear and nonlinear ranking models, as detailed in Table 4.
Table 4
Representative SVM-based sentence ordering algorithms, their data input methods, and algorithmic characteristics

SVMs [40] | Pointwise | SVM + classification
RankSVM [22, 23] | Pairwise | Linear SVM [22]; RankSVM + Primal Newton Method [23]
PKRank [24] | Pairwise | Pairwise kernel + RankSVM
SVMmap [36] | Listwise | SVM + hinge loss
PermuRank [38] | Listwise | Greedy algorithm
Joachims et al. [22] proposed RankSVM, a ranking algorithm based on SVM. This algorithm uses a linear SVM to train the ranking model. However, it has the disadvantages of slow speed and incomplete training. In 2010, Chapelle et al. [23] introduced the Primal Newton Method to improve the speed of the RankSVM algorithm.
Due to the inability of linear SVM-based sentence ordering algorithms to capture the mutual influence between query-doc pairs, Suzuki et al. [24] extended linear SVM to non-linear SVM and proposed the PKRank algorithm. Building upon RankSVM [22], PKRank introduces pairwise kernel functions, making the algorithm more scalable.
The SVM-based ordering algorithms are widely used in sentence ordering tasks due to their strong generalization capability and ability to handle large amounts of features and data. However, the performance and accuracy of these algorithms are closely related to the selected document features. When the data volume is small, the algorithms tend to exhibit higher accuracy. However, as the data volume increases, the accuracy may decrease.
2.2.1.3 Neural network-based sentence ordering algorithms
The core of these algorithms is to construct an n-dimensional decision boundary that can determine whether the sentence ordering is correct based on the input list of sentences to be ordered. Representative algorithms in this category include RankNet, SortNet, and ListNet, as shown in Table 5.
Table 5
Representative neural network-based sentence ordering algorithms, their data input methods, and algorithmic characteristics

PRank [21] | Pointwise | Perceptron + ordinal regression
RankNet [13] | Pairwise | Gradient descent
LambdaRank [25] | Pairwise | Lambda
FRank [41] | Pairwise | Fidelity loss
SortNet [11, 42] | Pairwise | Self-adaptive model [11]; CmpNN [42]
ListNet [31] | Listwise | KL divergence
ListMLE [32] | Listwise | Logarithmic loss
StructRank [33] | Listwise | Cumulative distribution network
BoltzRank [34] | Listwise | Boltzmann distribution
RankNet [13], proposed by Microsoft, is based on the idea of training a ranking function using a simple probability loss function. It utilizes neural networks and gradient descent optimization, making it suitable for any differentiable function and easy to train. Building upon this, Burges et al. [25] introduced the LambdaRank algorithm, which improves the computation of gradients by incorporating the NDCG evaluation metric, making the loss function closely aligned with the final evaluation metric. Tsai et al. [41] further improved the loss function in RankNet and proposed the FRank algorithm. By employing a fidelity loss function, FRank effectively learns the ranking function while mitigating the impact of extreme data points. It also introduces new properties that contribute to the ordering process, addressing the issue of the original loss function being susceptible to extreme data influences.
The aforementioned algorithms, led by RankNet, primarily transform the ordering problem into a binary classification problem. However, Rigutini et al. [11] proposed an alternative neural network-based adaptive ordering algorithm called SortNet, which transforms the ordering problem into a multi-class classification problem. This algorithm utilizes neural networks to construct comparators and maximizes the cross-entropy between predictions and the true ordering order for ordering. Later, they improved the comparators by using a comparative neural network (CmpNN) [42] to train the ordering function, enabling the global ordering of a set of data.
ListNet-based algorithms [31–34] are similar to RankNet [13], with the main difference being that ListNet uses lists of sentences as instances and utilizes a listwise loss function, while RankNet uses pairs of sentences as instances and utilizes a pairwise loss function. Experiments have shown that as the number of sentences to be sorted increases, ListNet achieves significantly higher accuracy than RankNet, while also having lower time complexity.
NN-based ordering algorithms have strong learning capabilities and can be applied to various ordering scenarios. However, they require a large amount of data and parameter tuning. The training process can be slow, and there is a risk of overfitting. These algorithms may not be robust to outliers in the dataset, and their interpretability and generalization ability can be limited.
2.2.1.4 Boosting-based sentence ordering algorithms
The main idea of boosting is to combine multiple weak ranking models into a strong ranking model. Representative algorithms include AdaBoost and gradient boosted trees (GBT)-based sentence ordering algorithms. Please refer to Table 6 for more details.
Table 6
Representative boosting-based sentence ordering algorithms, their data input methods, and algorithmic characteristics

McRank [19] | Pointwise | GBT + classification
RankBoost [12] | Pairwise | AdaBoost
RankBoost+ [43] | Pairwise | Binary weak ranker
GBRank [44] | Pairwise | GBT
iGBRT [45] | Pairwise | RF + GBRT
CFBRS [26] | Pairwise | Enumeration of integer compositions
LambdaMART [46] | Listwise | Lambda + boosting
AdaRank [37] | Listwise | Boosting
Freund et al. [12] proposed the RankBoost algorithm, a ranking algorithm based on AdaBoost. It trains multiple weak ranking models and combines them to create a strong ranking model, optimizing the classification errors between sentence pairs. Connamacher et al. [43] introduced a new method for combining weak ranking models, called the weighted exponential function, which improves the performance of weak models in RankBoost and shows promising results in both theory and practice. Wu et al. [46] improved the weight adjustment method using Lambda and proposed the LambdaMART algorithm, which provides a more accurate estimation of each sentence's contribution and thus improves the accuracy of the ranking.
The weak ranking models in the mentioned algorithms utilize decision tree-based rankers, which are more suitable for handling nonlinear relationships and missing values. They have strong interpretability. However, when the dataset size increases, they may suffer from overfitting and lack robustness. To address these issues, researchers have employed weak ranking models based on regression trees.
Zheng et al. [44] proposed GBRank, an ordering algorithm based on GBT. They employed the GBT framework, originally designed for regression problems, to tackle the ordering problem of sentence pairs. Mohan et al. [45] extended GBRank by introducing gradient boosted regression trees (GBRT) and combining them with random forests (RF). They used the ordering function learned from the RF as the initialization function for GBRT, effectively reducing overfitting.
Compared to ordering algorithms based on NNs, the ordering algorithms discussed in this section have fewer parameters, are less prone to overfitting, and perform well in document feature selection. Since the errors of weak ordering models are typically random, combining multiple weak ordering models can reduce the overall error rate of the model, thereby improving the accuracy of the algorithm. Moreover, these algorithms exhibit good scalability. It is crucial to focus on how to effectively combine weak learners, select appropriate loss functions, and optimize generalization error in this class of algorithms.
2.2.1.5 Bayesian-based sentence ordering algorithms
Bayesian algorithms are a class of algorithms that utilize probability and statistical knowledge for classification. In Bayesian sentence ordering algorithms, the algorithm integrates prior information and data information from the set of sentences to be ordered. It then uses the Bayesian formula to calculate the posterior information. Finally, based on the obtained posterior information, it infers the ordering sequence of the set of sentences to be ordered.
One of the most groundbreaking research achievements in this category is the Bayesian Personalized Ranking (BPR) algorithm proposed by Rendle et al. [47]. It combines Bayesian principles with matrix factorization techniques for personalized ranking. Compared to traditional matrix factorization algorithms, BPR can effectively handle missing values in the data. By constructing partial order relations, BPR establishes a complete partial order relation for each document, leading to more accurate ordering lists. Subsequent research has made improvements on the BPR algorithm to enhance its ordering performance. Please refer to Table 7 for related research.
Table 7
Representative Bayesian sentence ordering algorithms, their data input methods, and algorithmic characteristics

BPR [47] | Pairwise | Matrix factorization + implicit feedback
BPR++ [48] | Pairwise | Graded implicit feedback
BPRH [49] | Pairwise | Heterogeneous implicit feedback
AT-MBPR [50] | Pairwise | Adversarial training
Lerche et al. [48] extended BPR to handle more fine-grained, multi-level preference relations with higher retrieval accuracy. This algorithm, called Graded Implicit Feedback for Bayesian Personalized Ranking (BPR++), expands the binary decision to a multivariate setting, allowing for more nuanced preferences.
Qiu et al. [49] proposed Bayesian Personalized Ranking for Heterogeneous Implicit Feedback (BPRH), which takes into account the co-occurrence of different types of sentences to quantify their relevance. Based on this analysis, it studies the preference differences in the relevance between different features in the sentences. The preferences of different features are integrated into the BPR model in descending order, forming the BPRH framework.
Wang et al. [50] proposed Adversarial Training-Based Mean Bayesian Personalized Ranking (AT-MBPR), which is based on Mean Bayesian Personalized Ranking (MBPR) and divides the feedback information into three categories. It obtains implicit feedback from the average value of each document and unrelated documents. Finally, it reduces noise through adversarial training.
Bayesian-based ordering algorithms are relatively easy to understand and interpret. They execute quickly overall, are relatively straightforward to construct, and achieve high classification accuracy. However, they require the computation of prior probabilities and impose stricter requirements on the representation of the input data.

2.2.2 DL-based Sentence Ordering Algorithms

In recent years, the development of DL has led to significant advancements in the field of NLP, especially with the emergence of pretrained language models such as BERT [51]. These models have brought new breakthroughs to the field of sentence ordering. Based on the training approach for sentences, DL-based algorithms can be categorized into two types: algorithms without pretrained language models and algorithms based on pretrained language models. For a detailed comparison of specific algorithms, evaluation metrics, and datasets, please refer to Table 8.
Table 8
Sentence ordering algorithms based on deep learning: data input methods, evaluation metrics, and training datasets of representative algorithms

Sentence ordering algorithms without pretrained language models
  • CNN/LSTM/CBoW + Beam Search [14] | Listwise | Rouge-S, Rouge-N, P-all | arXiv abstracts
  • CNN/LSTM/CBoW + PtrNet [52] | Listwise | PM, LSR, PMR | arXiv abstracts, SIND captions
  • Seq2seq [53] | Pairwise | Accuracy | Accident, Earthquake
  • RNN+PtrNet [15] | Pairwise | Accuracy, \(\tau \) | NIPS/AAN/NSF abstracts
  • REFRESH [54] | Listwise | Rouge-L | CNN, DailyMail
  • ATTOrderNet [16] | Listwise | Accuracy, \(\tau \), PMR | Accident, Earthquake, NIPS/AAN abstracts, arXiv abstracts, SIND captions
  • HAN [55] | Pairwise | \(\tau \), PMR | arXiv abstracts, VIST dataset, ROCStory
  • ATTOrderNet+TwoLoss [28] | Pairwise | \(\tau \), PMR | arXiv abstracts, SIND captions, ROCStory
  • SE-GRN [56] | Listwise | Accuracy, \(\tau \), PMR | NIPS/AAN abstracts, arXiv abstracts, SIND captions
  • IRSEGraph [57] | Listwise | Accuracy, \(\tau \), PMR | NIPS/AAN abstracts, arXiv abstracts, SIND captions, ROCStory
  • TGCM [58] | Pairwise | Positional Accuracy, \(\tau \), PMR | NIPS/AAN/NSF abstracts, arXiv abstracts, SIND captions

Sentence ordering algorithms with pretrained language models
  • B-TSort [29] | Pairwise | Rouge-S, Accuracy, \(\tau \), PMR, LCS | NIPS/AAN/NSF abstracts, SIND captions
  • RankTxNet+ListMLE [35] | Listwise | \(\tau \), PMR | NIPS/AAN/NSF abstracts, arXiv abstracts, SIND captions, ROCStory
  • BERSON [59] | Pairwise | Accuracy, \(\tau \), PMR | NIPS/AAN/NSF abstracts, arXiv abstracts, SIND captions, ROCStory
  • BERSON+Pairwise strategies [60] | Pairwise | Accuracy, \(\tau \), PMR | AAN abstracts, SIND captions, ROCStory
  • SCM [61] | Listwise | \(\tau \), PMR | ROCStory
  • BERT4SO [62] | Listwise | \(\tau \), PMR | NIPS/AAN/NSF abstracts, SIND captions, ROCStory
  • ConsGraph [63] | Listwise | \(\tau \), PMR | NIPS/AAN/NSF abstracts, SIND captions, ROCStory
  • Pruned Graph [64] | Listwise | \(\tau \), PMR | ROCStory
  • BOID [65] | Listwise | Accuracy, \(\tau \), PMR | NIPS/AAN abstracts, arXiv abstracts, SIND captions
  • Re-BART [66] | Listwise | Accuracy, \(\tau \), PMR | NIPS/AAN/NSF abstracts, arXiv abstracts, Movie Plots, SIND captions, ROCStory
2.2.2.1 Sentence ordering algorithms without pretrained language models
Since 2016, sentence ordering algorithms have shifted toward deep learning-based approaches. Most researchers use methods such as CNN or RNN to construct sentence encoders that encode the input sentences. Pointer networks (Ptr-Net) [67] have also been introduced as decoders to predict the sequence of sentences.
Chen et al. [14] proposed an ordering algorithm based on the continuous bag of words (CBoW) model, CNNs, and long short-term memory networks (LSTM). They trained the algorithm using sentence pairs and employed beam search to find the optimal ordering of sentences. Gong et al. [52] extended this approach by introducing Ptr-Net as a decoder. Ptr-Net mitigates the problem of error propagation and utilizes the entire contextual information to determine the likelihood of each sentence's position, resulting in an improved ordering list. Narayan et al. [54] presented an extractive ordering algorithm based on CNN and LSTM, which ensures that the ordered list does not go beyond the original text but may not guarantee comprehensive extraction of key information. Logeswaran et al. [15] proposed an end-to-end unsupervised deep learning algorithm using RNNs for sentence ordering, determining the likelihood of each sentence at each position to derive the optimal ordering list.
The use of reinforcement learning (RL) in ordering algorithms initially gained widespread application and achieved good performance in the field of search ranking [68]. Subsequently, researchers utilized RL to dynamically adjust the subsequent ordering based on positional feedback from already ordered sentences, applying it to sentence ordering tasks. Narayan et al. [54] introduced RL to globally optimize the evaluation metric ROUGE. By dynamically adjusting ROUGE, the model was able to better differentiate sentences and order them to obtain high-scoring sentences that are most suitable for summarization. This approach represents a shift from prescoring followed by ordering to dynamic order decision-making.
The mentioned algorithms mostly employ an encoder-decoder structure to learn and predict the sentence ordering. Cui et al. [16] proposed a new structure that consists of a sentence encoder, a paragraph encoder, and a decoder. The sentence encoder utilizes bi-directional long short-term memory networks (BiLSTM) to encode the sentences. The paragraph encoder employs self-attention mechanisms to capture global dependencies between sentences and obtain a reliable representation of the dataset. Finally, Ptr-Net is used to generate the ordered sequence. Based on this model, several variations have been proposed by researchers. Wang et al. [55] combined a hierarchical attention-based encoder with a decoder based on masked self-attention. Yin et al. [28] utilized two paired ordering prediction modules (FUTURE and HISTORY) to enhance the Ptr-Net decoder. Yin et al. [56] employed graph recurrent networks (GRN) to model the relationships between sentences and entities. Lai et al. [57] developed the IRSEGraph ordering algorithm based on the GRN model, which combines pairwise ordering models with sequence models to achieve complementary advantages.
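The sketch below is a deliberately tiny, untrained illustration of this encoder-decoder pattern: a BiLSTM sentence encoder, self-attention over the resulting sentence vectors, and greedy pointer-style decoding. All dimensions, the mean pooling, the initial decoder state, and the GRU decoder cell are illustrative assumptions and do not reproduce the published ATTOrderNet configuration.

```python
# Tiny, untrained sketch of an ATTOrderNet-style model: BiLSTM sentence encoder,
# self-attention over sentence vectors, and greedy pointer-style decoding.
import torch
import torch.nn as nn

class TinyOrderNet(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hid_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.sent_enc = nn.LSTM(emb_dim, hid_dim // 2, bidirectional=True, batch_first=True)
        self.self_attn = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)
        self.dec_cell = nn.GRUCell(hid_dim, hid_dim)
        self.w_q = nn.Linear(hid_dim, hid_dim)
        self.w_k = nn.Linear(hid_dim, hid_dim)
        self.v = nn.Linear(hid_dim, 1)

    def encode(self, sents):                         # sents: (num_sent, max_len) token ids
        tok = self.embed(sents)                      # (num_sent, max_len, emb_dim)
        h, _ = self.sent_enc(tok)                    # (num_sent, max_len, hid_dim)
        sent_vecs = h.mean(dim=1).unsqueeze(0)       # mean-pool words -> (1, num_sent, hid_dim)
        ctx, _ = self.self_attn(sent_vecs, sent_vecs, sent_vecs)
        return ctx.squeeze(0)                        # contextual sentence vectors (num_sent, hid_dim)

    def greedy_order(self, sent_vecs):               # pointer-style greedy decoding
        n = sent_vecs.size(0)
        state = sent_vecs.mean(dim=0, keepdim=True)  # (1, hid_dim) initial decoder state
        mask = torch.zeros(n, dtype=torch.bool)
        order = []
        for _ in range(n):
            scores = self.v(torch.tanh(self.w_q(state) + self.w_k(sent_vecs))).squeeze(-1)
            scores = scores.masked_fill(mask, float("-inf"))
            pick = int(scores.argmax())              # point to the best unused sentence
            order.append(pick)
            mask[pick] = True
            state = self.dec_cell(sent_vecs[pick].unsqueeze(0), state)
        return order

model = TinyOrderNet()
fake_sents = torch.randint(0, 1000, (4, 7))          # 4 shuffled sentences, 7 tokens each
print(model.greedy_order(model.encode(fake_sents)))  # some permutation of [0, 1, 2, 3]
```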
Deep learning plays a crucial role in the development of sentence ordering tasks. Compared to traditional ordering algorithms, deep learning-based sentence ordering algorithms can learn the semantics and contextual information of sentences, enabling better ordering. They can capture the relationships between sentences more accurately, thus improving the accuracy of ordering. They also allow for end-to-end learning, making the training and usage of the algorithms simpler and more efficient. Additionally, deep learning-based sentence ordering algorithms can further enhance the performance through various techniques. For example, the use of attention mechanisms can focus on important parts, thereby improving the accuracy of ordering. Reinforcement learning can enable dynamic ordering decisions, leading to improved overall ordering performance.
2.2.2.2 Sentence ordering algorithms with pretrained language models
With the emergence of issues such as larger model sizes, increasing parameter counts, and expanding application domains, traditional deep learning algorithms face challenges in terms of poor generalization, long training times, and high data requirements.
In response to these challenges, researchers have proposed pretrained language models, with BERT being a prominent example. The introduction of pretrained language models has greatly facilitated advancements in the field of NLP. As a result, researchers have started incorporating pretrained language models into the task of sentence ordering.
Kumar et al. [35], based on the ATTOrderNet algorithm, employed BERT for pretraining. They incorporated transformer networks into both the sentence encoder and the paragraph encoder. Finally, they optimized the loss function using the ListMLE algorithm. Li et al. [59] proposed BERSON, a sentence ordering algorithm within paragraphs based on BERT. They developed a novel relational pointer decoder (RPD) and integrated the relative ordering information into Ptr-Net through a deep relational module (DRM), enhancing both local coherence and global dependencies among sentences. Manku et al. [60], building upon BERSON, utilized a pairwise model for prediction and replaced BERT with ALBERT [69]. This modification led to improved ordering performance.
Although the aforementioned algorithms have achieved excellent ordering performance, they still suffer from large model sizes and slow prediction speed. In order to address these concerns and improve the overall speed of the algorithm, Golestani et al. [61] proposed the Sentence Correlation Measure (SCM) algorithm. By employing SBERT-WK to obtain sentence embedding vectors, they calculated the similarity scores of sentence pairs using cosine similarity and then performed brute-force ordering. This approach aims to reduce model size and enhance algorithmic efficiency.
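A minimal sketch of this idea is shown below, with random vectors standing in for SBERT-WK sentence embeddings: every candidate order is scored by the summed cosine similarity of adjacent sentences, and the best order is found by brute force. The scoring and search here are a simplification of the published SCM procedure.

```python
# Minimal SCM-style sketch: cosine similarity between sentence embeddings, then
# brute-force search for the order maximizing the similarity of adjacent sentences.
# Random vectors stand in for real SBERT-WK embeddings.
import itertools
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_order(embeddings):
    n = len(embeddings)
    def coherence(order):
        # total similarity of adjacent sentence pairs under this candidate order
        return sum(cosine(embeddings[order[i]], embeddings[order[i + 1]]) for i in range(n - 1))
    return max(itertools.permutations(range(n)), key=coherence)

rng = np.random.default_rng(0)
toy_embeddings = [rng.normal(size=16) for _ in range(5)]   # 5 hypothetical sentence vectors
print(best_order(toy_embeddings))   # brute force is only feasible for short paragraphs
```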
Differing from the previous algorithms that individually encoded each sentence, Zhu et al. [62] introduced the BERT4SO algorithm by fine-tuning BERT. They connected all the sentences requiring ordering into a long sequence and utilized the [CLS] token to denote sentence boundaries, aiming to capture the interaction information between sentences. Furthermore, Zhu et al. [63] considered multi-granularity relative ordering information. They analyzed that sentence positions at different distances might rely on different types of information. To address this, they employed a graph neural network (GNN) to integrate the relative ordering information into sentence representations, which were then used for the final ordering prediction.
The sentence ordering algorithm based on SE-Graph [56] has shown promising results for long texts. However, its performance is not satisfactory for short texts. In order to address this issue, Golestani et al. [64], in addition to incorporating BERT as a pretrained language model, also applied pruning techniques to the obtained graph relation network. This approach aims to mitigate the problem of excessively high correlations between sentences in short texts.
In contrast to the aforementioned algorithms that predict the sentence ordering sequence unidirectionally from start to end, Bai et al. [65] proposed a bidirectional ordering algorithm based on Ptr-Net. This algorithm can simultaneously predict the sentence ordering in both forward and backward directions, allowing for mutual influence between the two directions. This approach helps alleviate the issue of error accumulation during the ordering process and has achieved SOTA (state-of-the-art) performance.
In addition to utilizing BERT and its variants [69, 70] for constructing sentence ordering algorithms, Chowdhury et al. [66] proposed the use of the pretrained language model BART (bidirectional and auto-regressive transformers) [71] for sentence ordering tasks, which has also shown impressive performance.
The importance of pretrained language models in sentence ordering tasks is evident. Introducing pretrained language models allows for the utilization of existing language knowledge and data through transfer learning and fine-tuning. This enables better learning of general features in the data and enhances the understanding of semantic and contextual information in the text. Not only does it help avoid overfitting on small-sized datasets, but it also improves ordering accuracy, provides better generalization performance, and facilitates faster training speed.

3 Datasets utilized for the Sentence Ordering Algorithms

For sentence ordering algorithms, obtaining high-quality datasets as a foundation is crucial for achieving desirable ordering results and improving both the training quality and prediction accuracy of the models. Existing sentence ordering datasets can be classified into two categories: public datasets and domain-specific datasets constructed based on specific task requirements. In order to assess the performance of algorithms and conduct comparative experiments, it is generally preferred to prioritize the use of public datasets.

3.1 Public Datasets of LTR

LTR-based sentence ordering algorithms commonly utilize three public datasets [72], as shown in Table 9.
Table 9
Public datasets for learning to rank, compared in terms of composition, number of queries, number of documents, number of features, dimension type, evaluation metrics, and release date

LETOR datasets
  • LETOR 1.0 [73]: released 2007.4
  • LETOR 2.0 [73]: 3 sub-datasets; evaluation metric P@k; released 2007.12
  • LETOR 3.0-Gov [73]: 7 sub-datasets; 575 queries; 568K documents; 64 features; dimension type 2; evaluation metric MAP; released 2008.12
  • LETOR 3.0-Ohsumed [73]: 7 sub-datasets; 106 queries; 16,140 documents; 45 features; dimension type 3; evaluation metric NDCG; released 2008.12
  • LETOR 4.0 [74]: 8 sub-datasets; 2476 queries; 85K documents; 46 features; dimension type 3; released 2009.1

Yahoo! Learning-to-Rank Challenge datasets [75]: dimension type 5; evaluation metric NDCG; released 2010.3
  • Set1 (519 features): Train 19,944 queries / 473,134 documents; Valid 2994 / 71,083; Test 6983 / 165,660
  • Set2 (596 features): Train 1266 / 34,815; Valid 1266 / 34,881; Test 3798 / 103,174

Microsoft Learning-to-Rank datasets: evaluation metrics NDCG, MAP; released 2010.5
  • MSLR-WEB30k: parts \(S_{1}-S_{5}\); 31,531 queries; 136 features; dimension type 5
  • MSLR-WEB10k: parts \(S_{1}-S_{5}\); 10,000 queries; 136 features
1. LETOR Datasets
Microsoft Research Asia has released LETOR versions 1.0 to 4.0, with LETOR 3.0 and 4.0 being the most commonly used public datasets in the field of learning to rank. The official LETOR website is http://research.microsoft.com/en-us/um/beijing/projects/letor/.
The LETOR 3.0 dataset [73] is formed by combining four additional datasets with the existing three datasets. Each dataset has three versions, and each version is further divided into five parts. From these, three datasets are randomly selected as the training set, one as the validation set, and another as the test set.
The LETOR 4.0 dataset [74] is divided into four categories, with each category consisting of two datasets, making a total of eight datasets. Each dataset is further divided into five parts, and each part contains three subsets: the training set, validation set, and test set. Fivefold cross-validation can be applied to this dataset.
 
2. Yahoo! Learning-to-Rank Challenge Datasets
This dataset [75] is sourced from the Learning-to-Rank Challenge organized by Yahoo Labs. It consists of two datasets, each divided into three groups: the training set, validation set, and test set. The official website for this dataset is http://learningtorankchallenge.yahoo.com/.
 
3. Microsoft Learning-to-Rank Datasets
This dataset is released by Microsoft Research Asia and consists of MSLR-WEB30k and the randomly sampled MSLR-WEB10k. Each dataset is divided into five parts, S1 to S5, which are used for fivefold cross-validation. More information is available on the official website at https://www.microsoft.com/en-us/research/project/mslr/.
 
The three public datasets commonly used in LTR differ not only in their sources but also in data complexity and application scenarios. The LETOR datasets are usually larger and more complex and are mostly used for academic research and algorithm development. The other two datasets are relatively smaller in size and complexity and are mostly used for practical business research, which is closer to real application scenarios.

3.2 Public Datasets of DL

There are eight commonly used public datasets for sentence ordering algorithms based on DL, as shown in Table 10.
Table 10
Public datasets for deep learning, compared in terms of train/validation/test splits, average number of sentences, average number of words, and vocabulary size

Accident [8]: Train 2095; Test 2087; avg. 11.5 sentences; vocabulary 4758
Earthquake [8]: Train 1896; Test 2056; avg. 10.4 sentences; vocabulary 3287
NIPS abstract [15]: Train 2248; Valid 409; Test 402; avg. 6 sentences; vocabulary 16,721
AAN abstract [15]: Train 8569; Valid 962; Test 2626; avg. 5 sentences; vocabulary 34,485
NSF abstract [15]: Train 96,017; Valid 10,185; Test 21,573; avg. 8.9 sentences; vocabulary 334,090
arXiv abstract [14]: Train 884,912; Valid 110,614; Test 110,615; avg. 5.38 sentences; avg. 134.65 words; vocabulary 64,557
SIND caption [76]: Train 40,155; Valid 4990; Test 5055; avg. 5 sentences; avg. 56.84 words; vocabulary 30,861
ROCStory [77]: Train 78,529; Valid 9816; Test 9817; avg. 5 sentences; avg. 50.00 words
1. Accident, Earthquake
The accident dataset [8] is sourced from the National Transportation Safety Board’s aviation accident database, while the earthquake dataset [8] is derived from Associated Press articles on the topic of earthquakes. Each dataset consists of 100 documents of comparable length, with average sentence counts of 11.5 and 10.4, respectively. Both datasets include training and testing sets. These two datasets cover different subject areas and have distinct data sources and content. Therefore, they are primarily used to validate and analyze the generalization performance of the models when in use.
 
2. NIPS abstract, AAN abstract, NSF abstract
The NIPS abstract dataset [15] is sourced from the abstracts of NIPS papers from 2005 to 2015. The dataset comprises 3,259 abstracts, with training data spanning from 2005 to 2013, validation data from 2014, and testing data from 2015. The dataset contains numerous high-quality paper abstracts in the fields of machine learning and computational neuroscience. The consistent language style and terminology facilitate the learning of sentence ordering models. However, the dataset’s subject matter is limited and cannot be applied to sentence ordering tasks in other fields.
The AAN abstract dataset [15] is sourced from the ACL Anthology Network (AAN), specifically from the selected papers of the ACL conferences from 2010 to 2013. The dataset consists of 12,157 abstracts, with training data from 2010, validation data from 2011, and testing data from 2012 to 2013. The dataset encompasses a broad spectrum of abstracts from research papers in the field of computational linguistics, making it suitable for various studies on sentence ordering tasks. However, it is important to note that the dataset may contain some noise and inconsistencies and therefore requires additional data cleaning and preprocessing.
The NSF abstract dataset [15] is sourced from the abstracts of research papers funded by the National Science Foundation (NSF) from 1990 to 2003. The dataset comprises 127,865 abstracts and differs from the previous datasets in that it covers various scientific disciplines and contains relatively longer abstracts. The training data span from 1990 to 1999, the validation data are from 2000, and the testing data cover the years 2001 to 2003. Compared to the previous two datasets, the NSF abstract dataset covers abstracts of research projects across multiple scientific domains and is relatively long, making it suitable for interdisciplinary research and for sentence ordering tasks in scientific text analysis. However, the texts in the dataset are more diverse, with significant differences between fields, and therefore require domain adaptation.
 
3. arXiv abstract
The arXiv abstract dataset [14] includes abstracts from papers published on the arXiv website in seven fields, including statistics, computer science, and mathematics. Each document consists of 2 to 20 sentences, with an average word count of 134.65 per document. The dataset is divided into training, validation, and testing sets: the validation set is the first 10% of the dataset, the testing set is the last 10%, and the remaining portion is used for training. The dataset is highly up to date and reflects the latest research progress, which helps the trained model capture current research trends. However, because the arXiv abstract dataset covers several subject areas, research in some areas may be more active and have more papers than others, which may weaken the model's ability to generalize to certain areas in the sentence ordering task.
 
4. SIND caption, ROCStory
The SIND caption dataset is sourced from the story texts in the Sequential Image Narrative Dataset (SIND) [76]. The dataset comprises 50,200 stories, with each story composed of 5 sentences and an average word count of approximately 57. Since stories generally exhibit coherence in terms of temporal sequence and causal relationships, they are suitable for sentence ordering algorithm datasets.
The ROCStory dataset [77] consists of common-sense stories and contains 98,162 stories. Each story is composed of 5 sentences, with an average word count of approximately 50. In comparison to the SIND caption dataset, ROCStory lacks additional images as supplementary information, resulting in stories that adhere more closely to logical coherence. Both the SIND caption and ROCStory datasets are randomly divided into training, validation, and testing sets in an 8:1:1 ratio.
 

4 Evaluation Metrics

The ultimate goal of sentence ordering tasks is to form readable and semantically coherent texts, so the ordering results need to be evaluated from multiple perspectives. As the sentence ordering task has developed, researchers have proposed various evaluation metrics to measure sentence ordering results. Researchers can select appropriate evaluation metrics based on the characteristics of their algorithms and datasets to judge the effectiveness of a sentence ordering algorithm. This section provides a detailed analysis and summary of the most commonly used evaluation metrics in LTR and DL-based sentence ordering algorithms.

4.1 Evaluation Metrics of LTR

The evaluation metrics of LTR-based sentence ordering algorithms mainly include MAP, NDCG and expected reciprocal rank (ERR). Among them, MAP focuses on the average ordering quality of the results, NDCG focuses on the relevance and positional relationships of the ordering results, and ERR places more emphasis on the top-ranked results, where sentences ranked higher have a greater impact on the overall evaluation. Their applications are shown in Table 9.

4.1.1 MAP

MAP [72] is computed by first calculating the average precision (AP) for each query and then taking the average of all the AP values across queries. It reflects the relationship between sentence relevance and ranking position, where higher MAP values indicate that relevant sentences are ranked higher. Relevance judgments are typically represented as either 0 (non-relevant) or 1 (relevant). AP is calculated as follows:
$$\begin{aligned} AP={\frac{\sum _{i=1}^{n}{P_{i}}\,y_{i}/R_{i}}{n}} \end{aligned}$$
(1)
where \(P_i\) represents the relative order of the relevant sentences in the list to be ranked, \(y_i\) represents the label value (0 or 1) of each sentence, \(R_i\) represents the order of all sentences in the list to be ranked, and n represents the total number of relevant sentences in the list. Finally, the AP values obtained for all queries are averaged to calculate the MAP:
$$\begin{aligned} MAP={\frac{\sum ^{m}_{j=1}{AP_j}}{m}} \end{aligned}$$
(2)
where m represents the total number of queries.
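The following toy example illustrates Eqs. (1)-(2) with made-up binary relevance labels, computing AP in the usual way as precision at each relevant rank averaged over the relevant items, and then averaging AP over queries to obtain MAP.

```python
# Toy illustration of AP/MAP: precision at each relevant rank, averaged over
# relevant items, then averaged over queries. Relevance labels are made up.
def average_precision(relevance):                 # relevance[i] is 0/1 for the sentence at rank i+1
    hits, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)        # precision at this relevant position
    return sum(precisions) / max(hits, 1)

queries = [[1, 0, 1, 0], [0, 1, 1]]               # ranked relevance lists for two queries
aps = [average_precision(q) for q in queries]     # [0.833..., 0.583...]
print(sum(aps) / len(queries))                    # MAP ~= 0.71
```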

4.1.2 DCG and NDCG

NDCG [72] is a comprehensive evaluation metric that takes into account the relationship between the algorithm’s ranking list and the true sequence. It is one of the most commonly used evaluation metrics in LTR. The basic idea is that the higher the relevance of a sentence to a query, the higher its rank should be in the ranking list, and the higher the rank, the higher its position in the ranking sequence. Therefore, a higher NDCG value indicates a better ranking. NDCG consists of four components: Normalized, Discounted, Cumulative, and Gain.
First, the gain values need to be calculated, where higher relevance generally corresponds to larger gain values. Taking the i-th sentence as an example, let \(y_i\) denote its relevance level, where higher relevance corresponds to a higher level value. The gain value \(G_i\) can then be represented as:
$$\begin{aligned} G_i=2^{y_{i}}-1 \end{aligned}$$
(3)
Next, the gain values of all sentences in the query are accumulated to obtain the CG (Cumulative Gain):
$$\begin{aligned} CG=\sum _{i=1}^{n}\left( 2^{y_{i}}-1\right) \end{aligned}$$
(4)
Because CG does not consider the ordering of sentences in the evaluation, its value only indicates the overall quality without judging the effectiveness of the ranking. Therefore, different weight values are assigned based on the position of sentences in the ranking, which forms the foundation of the evaluation metric for NDCG, known as DCG (discounted cumulative gain):
$$\begin{aligned} DCG=\sum _{i=1}^{n}{\frac{2^{y_{i}}-1}{\log _{2}\left( 1+i\right) }} \end{aligned}$$
(5)
Finally, in order to facilitate the comparison of ranking effectiveness between different queries, normalization is performed. The normalization factor is the most ideal ranking value for each query, referred to as the IDCG (Ideal DCG). Therefore, NDCG is calculated as follows:
$$\begin{aligned} NDCG={\frac{DCG}{IDCG}}={\frac{1}{IDCG}}\sum _{i=1}^{n}{\frac{2^{y_{i}}-1}{\log _{2}\left( 1+i\right) }} \end{aligned}$$
(6)
However, NDCG also has certain limitations. For example, it does not explicitly penalize irrelevant sentences.
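The small example below illustrates Eqs. (3)-(6) with made-up graded relevance labels: DCG is computed for a predicted ranking and normalized by the DCG of the ideal ranking of the same grades.

```python
# Toy illustration of DCG/NDCG (Eqs. 3-6) with made-up graded relevance labels.
import math

def dcg(relevances):                      # relevances[i] is the grade of the item at rank i+1
    return sum((2 ** y - 1) / math.log2(1 + i) for i, y in enumerate(relevances, start=1))

predicted = [2, 0, 3, 1]                  # grades in the order produced by the ranker
ideal = sorted(predicted, reverse=True)   # the ideal ordering of the same grades
print(dcg(predicted) / dcg(ideal))        # NDCG ~= 0.74
```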

4.1.3 ERR

In order to address the limitation of NDCG in not considering the influence of preceding sentences on the subsequent ones within the same ranking list, ERR [78] introduces the cascade model, which fully takes into account the positional dependencies among sentences in the same ranking list. Under the cascade model, the ranked sentences are examined from top to bottom: if the sentence at a position is satisfactory, the examination stops there; otherwise, it is skipped and the next one is examined. The probability of stopping at the i-th position can be expressed as \(P_i\):
$$\begin{aligned} P_{i}=\prod _{j=1}^{i-1}\left( 1-R_{j}\right) R_{i} \end{aligned}$$
(7)
where \(R_j\) represents the relevance level function for sentences, defined as \(R_{j}=R\left( g_{j}\right) =\frac{2^{g_{j}}-1}{2^{g_{max}}}\), \(g_{j}\in \left\{ 0,1,\ldots ,g_{max}\right\} \), where \(g_j\) is the relevance level of the j-th sentence and \(g_{max}\) is the maximum relevance level among all sentences. With the position weight \(\varphi (i)=1/i\), ERR can be expressed as:
$$\begin{aligned} ERR=\sum _{i=1}^{n}\varphi (i)\prod _{j=1}^{i-1}\left( 1-R_{j}\right) R_{i}=\sum _{i=1}^{n}\frac{1}{i}\prod _{j=1}^{i-1}\left( 1-R_{j}\right) R_{i} \end{aligned}$$
(8)
Unlike MAP, the ERR and NDCG metrics allow multiple relevance levels (more than two grades), whereas MAP distinguishes only two (relevant and non-relevant). Another advantage of NDCG and ERR is that they weight positional information more heavily, giving higher weight to sentences ranked near the top.
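A minimal sketch of the ERR computation under the cascade model (Eq. 8), assuming graded relevance labels; the function name expected_reciprocal_rank and its arguments are our own choices:
```python
from typing import List

def expected_reciprocal_rank(grades: List[int], g_max: int) -> float:
    """ERR: stop at position i with probability R_i * prod_{j<i}(1 - R_j),
    weighted by phi(i) = 1/i (Eq. 8)."""
    err, not_stopped = 0.0, 1.0
    for i, g in enumerate(grades, start=1):
        r = (2 ** g - 1) / (2 ** g_max)   # relevance-to-probability mapping R(g)
        err += not_stopped * r / i        # phi(i) = 1/i
        not_stopped *= (1 - r)
    return err

# Graded relevance of a ranked list, with maximum grade 3
print(expected_reciprocal_rank([3, 1, 2, 0], g_max=3))
```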

4.2 Evaluation Metrics of DL

The evaluation metrics of DL-based sentence ordering algorithms mainly include PMR, Accuracy, and the \(\tau \) coefficient. PMR measures the proportion of documents whose sentences are ordered completely correctly, Accuracy measures how often each sentence is placed at its correct absolute position, and the \(\tau \) coefficient measures the accuracy of the relative positions of sentences in the ordering result. Their applications in sentence ordering algorithms are shown in Table 8.

4.2.1 PMR

PMR [14] represents the proportion of documents whose predicted order exactly matches the correct order:
$$\begin{aligned} PMR={\frac{1}{Q}}\sum _{i=1}^{Q}{\mathbb {I}}{\left( {{\hat{o}}^i}=o^i\right) } \end{aligned}$$
(9)
where \(\mathbb {I}(\bullet )\) is an indicator function, \({\hat{o}}^i\) denotes the predicted ordering of the i-th document, \(o^i\) denotes the correct ordering of the i-th document, and Q represents the number of documents to be ordered. PMR is a strict metric: a document is counted as correct only if its ordering result is exactly the same as the target ordering. Therefore, it places high demands on the ordering accuracy of the algorithm.
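A minimal sketch of PMR as defined in Eq. (9), assuming each document's order is given as a list of sentence indices; the function name perfect_match_ratio is our own:
```python
from typing import List, Sequence

def perfect_match_ratio(predicted: List[Sequence[int]],
                        gold: List[Sequence[int]]) -> float:
    """Fraction of documents whose predicted order matches the gold order exactly (Eq. 9)."""
    assert len(predicted) == len(gold)
    exact = sum(1 for p, g in zip(predicted, gold) if list(p) == list(g))
    return exact / len(gold)

print(perfect_match_ratio([[0, 1, 2], [2, 0, 1]],
                          [[0, 1, 2], [0, 1, 2]]))   # -> 0.5
```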

4.2.2 Accuracy

Accuracy considers the rate at which the absolute position of a sentence in ordering is correctly predicted by the algorithm, which can be defined as:
$$\begin{aligned} Accuracy=\frac{N_{correct}}{N_{total}} \end{aligned}$$
(10)
where \(N_{correct}\) represents the number of sentences whose ranking positions are correctly predicted and \(N_{total}\) represents the total number of sentences to be ranked [29]. Compared to PMR, Accuracy is more tolerant of ordering errors: even if some sentences are misplaced, the score remains high as long as most positions are correct.
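A corresponding sketch of position-level Accuracy (Eq. 10); the function name position_accuracy is our own:
```python
from typing import Sequence

def position_accuracy(predicted: Sequence[int], gold: Sequence[int]) -> float:
    """Fraction of sentences placed at their correct absolute position (Eq. 10)."""
    correct = sum(1 for p, g in zip(predicted, gold) if p == g)
    return correct / len(gold)

print(position_accuracy([0, 2, 1, 3], [0, 1, 2, 3]))  # -> 0.5
```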

4.2.3 Kendall’s tau

Kendall’s Tau coefficient [79], also known as the \(\tau \) coefficient, compares the ranking list generated by an algorithm with the true ranking list. If the two lists are identical, \(\tau \) is 1; if they are unrelated, \(\tau \) is 0; if one is the exact reverse of the other, \(\tau \) is -1. It is defined as:
$$\begin{aligned} \tau =1-\frac{2S\left( \pi ,\sigma \right) }{N\left( N-1\right) /2} \end{aligned}$$
(11)
where N represents the number of sentences to be ranked, and \(S\left( \pi ,\sigma \right) \) represents the number of discordant pairs (inversions) between the predicted ranking \(\pi \) and the true ranking \(\sigma \). The \(\tau \) coefficient therefore captures the overall agreement between the two orders rather than only the accuracy of absolute positions.
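Finally, a minimal sketch of the \(\tau \) coefficient of Eq. (11), counting discordant pairs between the predicted and gold orders of sentence indices; the function name kendall_tau is our own:
```python
from itertools import combinations
from typing import Sequence

def kendall_tau(predicted: Sequence[int], gold: Sequence[int]) -> float:
    """Kendall's tau from the number of discordant sentence pairs (Eq. 11)."""
    n = len(gold)
    pred_pos = {s: i for i, s in enumerate(predicted)}
    gold_pos = {s: i for i, s in enumerate(gold)}
    discordant = sum(
        1 for a, b in combinations(gold, 2)
        if (pred_pos[a] - pred_pos[b]) * (gold_pos[a] - gold_pos[b]) < 0
    )
    return 1 - 2 * discordant / (n * (n - 1) / 2)

print(kendall_tau([1, 0, 2, 3], [0, 1, 2, 3]))  # one inversion -> 2/3
```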

5 Future Work

The sentence ordering algorithm has vast application prospects, ranging from early tasks such as multi-document summarization, collaborative filtering, and document retrieval to a growing number of domains that require ordering algorithms to produce an optimal ranked list. For example, in machine translation, the ListMLE algorithm [80] can be used to select the best translation results. The LambdaMART algorithm [81] can be applied to personalize the query suggestions shown below the search box as a user types. On shopping websites, the RankSVM algorithm [82] can be employed to analyze each user’s search data, compute personalized ranking lists, and select the top-k items as the final recommended products. In essay generation systems [83, 84], sentences can be ordered to produce a coherent and highly readable essay. In intelligent writing systems, subsequent writing material can be dynamically recommended based on the content the user has already entered. Despite these significant applications in information retrieval, machine translation, and automated writing, many issues in sentence ordering still merit further research and solutions, as outlined below.
1.
The utilization of deep neural network models has elevated the performance of ordering algorithms to new heights. However, these models also encounter challenges such as large model sizes, increased data processing requirements, and slower convergence speeds. A key research focus for future researchers is to explore methods that can reduce model sizes and accelerate prediction speeds while maintaining accuracy.
 
2.
One of the crucial factors determining the accuracy of the final ordering sequence in sentence ordering algorithms is the precise capture of semantic features and contextual relationships within sentences. It is an ongoing research challenge to effectively capture both the semantic features and contextual relationships of sentences simultaneously in order to enhance algorithm efficiency. Additionally, exploring methods to integrate multiple semantic features of sentences together to improve algorithm accuracy, and effectively utilizing these features to achieve better ordering results, are also areas that require further investigation.
 
3.
Sentence-level ordering algorithms currently achieve strong performance, so it is worth considering transferring them to larger-scale ordering tasks such as paragraph, chapter, and document ordering. However, research in this area is still relatively limited. Wan et al. [85] used BiLSTM and self-attention to construct a multi-paragraph ordering model, but its performance still lags behind that of humans. Transferring existing sentence ordering algorithms to larger-scale text ordering poses new challenges; for example, BERT-based ordering algorithms are suited only to sentence- and paragraph-level ordering and cannot handle document-level ordering. Therefore, many issues remain in transferring sentence ordering algorithms to larger-scale document ordering tasks. Research in this direction is a long and challenging journey, but it holds significant research value.
 
4.
Existing public datasets cannot meet all research and application requirements, so researchers often need to construct domain-specific datasets that match their needs. High-quality datasets are crucial for sentence ordering algorithms, and domain-specific datasets are currently built mainly through manual annotation, which is costly and involves subjective judgments, making reliable labels difficult to obtain. Addressing these issues and obtaining trustworthy labels are areas that researchers need to focus on.
 
5.
Currently, sentence ordering algorithms heavily rely on training with large-scale datasets to achieve good performance. However, for certain specific research domains, acquiring a sufficient number of training samples can be challenging. Therefore, future research in sentence ordering algorithms needs to explore methods to achieve good performance on small-sample datasets and improve the utilization efficiency of deep learning models with limited data.
 
The resolution of the aforementioned issues holds significant importance for the further advancement and practical application of sentence ordering algorithms.

6 Conclusion

The sentence ordering task, as an important research direction in the field of NLP, has been widely applied in various research domains, bringing great convenience to our daily lives. This article mainly analyzes and summarizes the different input data approaches and implementation techniques of sentence ordering algorithms. Through the analysis and comparison of these algorithms, it is found that sentence ordering algorithms based on deep learning can reduce the complexity of manual feature extraction. By modeling sentences with multi-layer neural networks, they can better capture the semantic information within sentences and thus learn more abstract and complex feature representations; this is currently also the most widely used approach. In the future, there will be growing demand and higher performance requirements for sentence ordering algorithms. Researchers can pursue the directions proposed in this article to further advance sentence ordering algorithms, as well as related algorithms for larger-scale ordering such as paragraphs and chapters.

Declarations

Conflict of interest

The authors have no conflicts of interest that are relevant to the content of this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Okazaki, N., Matsuo, Y., Ishizuka, M.: Improving chronological sentence ordering by precedence relation. In: COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pp. 750–756 (2004)
2. Paice, C.D.: Constructing literature abstracts by computer: techniques and prospects. Inf. Process. Manag. 26(1), 171–186 (1990)
4. Barzilay, R., Elhadad, N., McKeown, K.: Inferring strategies for sentence ordering in multidocument news summarization. J. Artif. Intell. Res. 17, 35–55 (2002)
8. Barzilay, R., Lapata, M.: Modeling local coherence: an entity-based approach. Comput. Linguist. 34(1), 1–34 (2008)
10. Liu, T.-Y., et al.: Learning to rank for information retrieval. Found. Trends Inf. Retr. 3(3), 225–331 (2009)
11. Rigutini, L., Papini, T., Maggini, M., Scarselli, F.: SortNet: learning to rank by a neural-based sorting algorithm. In: Proceedings of the SIGIR 2008 Workshop on Learning to Rank for Information Retrieval (LR4IR), vol. 42, pp. 76–79 (2008)
12. Freund, Y., Iyer, R., Schapire, R.E., Singer, Y.: An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res. 4(Nov), 933–969 (2003)
13. Burges, C., Shaked, T., Renshaw, E., Lazier, A., Deeds, M., Hamilton, N., Hullender, G.: Learning to rank using gradient descent. In: Proceedings of the 22nd International Conference on Machine Learning, pp. 89–96 (2005)
15. Logeswaran, L., Lee, H., Radev, D.: Sentence ordering and coherence modeling using recurrent neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)
16. Cui, B., Li, Y., Chen, M., Zhang, Z.: Deep attentive sentence ordering network. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4340–4349 (2018)
17. Cui, B., Li, Y., Zhang, Z.: BERT-enhanced relational sentence ordering network. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6310–6320 (2020)
18. Xiong, L., Chen, X., Zhong, M., Huang, X.: Survey on pairwise of the learning to rank. Sci. Technol. Eng. 17(21), 184–190 (2017)
19. Li, P., Wu, Q., Burges, C.: McRank: learning to rank using multiple classification and gradient boosting. Advances in Neural Information Processing Systems 20 (2007)
20. Cossock, D., Zhang, T.: Subset ranking using regression. In: Learning Theory: 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22–25, 2006, Proceedings 19, pp. 605–619. Springer (2006)
21. Crammer, K., Singer, Y.: Pranking with ranking. Advances in Neural Information Processing Systems 14 (2001)
22. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 133–142 (2002)
23. Chapelle, O., Keerthi, S.S.: Efficient algorithms for ranking with SVMs. Inf. Retr. 13, 201–215 (2010)
24. Suzuki, S.D., Ohue, M., Akiyama, Y.: PKRank: a novel learning-to-rank method for ligand-based virtual screening using pairwise kernel and RankSVM. Artif. Life Robot. 23, 205–212 (2018)
25. Burges, C., Ragno, R., Le, Q.: Learning to rank with nonsmooth cost functions. Advances in Neural Information Processing Systems 19 (2006)
26. Ahmad, D.T.: Compositional feature subset based ranking system (CFBRS) for learning to rank with user feedback. Int. J. Appl. Eng. Res. 13(1), 277–290 (2018)
27. Agrawal, H., Chandrasekaran, A., Batra, D., Parikh, D., Bansal, M.: Sort story: sorting jumbled images and captions into stories. ArXiv abs/1606.07493 (2016)
28. Yin, Y., Meng, F., Su, J., Ge, Y., Song, L., Zhou, J., Luo, J.: Enhancing pointer network for sentence ordering with pairwise ordering predictions. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 9482–9489 (2020)
29. Prabhumoye, S., Salakhutdinov, R., Black, A.W.: Topological sort for sentence ordering. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2783–2792 (2020)
30. Li, J., Jurafsky, D.: Neural net models for open-domain discourse coherence. ArXiv abs/1606.01545 (2016)
31. Cao, Z., Qin, T., Liu, T.-Y., Tsai, M.-F., Li, H.: Learning to rank: from pairwise approach to listwise approach. In: Proceedings of the 24th International Conference on Machine Learning, pp. 129–136 (2007)
32. Xia, F., Liu, T.-Y., Wang, J., Zhang, W., Li, H.: Listwise approach to learning to rank: theory and algorithm. In: Proceedings of the 25th International Conference on Machine Learning, pp. 1192–1199 (2008)
33. Huang, J.C., Frey, B.J.: Structured ranking learning using cumulative distribution networks. In: Proceedings of the 21st International Conference on Neural Information Processing Systems, pp. 697–704 (2008)
34. Volkovs, M.N., Zemel, R.S.: BoltzRank: learning to maximize expected ranking gain. In: Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1089–1096 (2009)
35. Kumar, P., Brahma, D., Karnick, H., Rai, P.: Deep attentive ranking networks for learning to order sentences. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 8115–8122 (2020)
36. Yue, Y., Finley, T., Radlinski, F., Joachims, T.: A support vector method for optimizing average precision. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 271–278 (2007)
37. Xu, J., Li, H.: AdaRank: a boosting algorithm for information retrieval. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 391–398 (2007)
38. Xu, J., Liu, T.-Y., Lu, M., Li, H., Ma, W.-Y.: Directly optimizing evaluation measures in learning to rank. In: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 107–114 (2008)
39. Taylor, M., Guiver, J., Robertson, S., Minka, T.: SoftRank: optimizing non-smooth rank metrics. In: Proceedings of the 2008 International Conference on Web Search and Data Mining, pp. 77–86 (2008)
40. Nallapati, R.: Discriminative models for information retrieval. In: Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 64–71 (2004)
41. Tsai, M.-F., Liu, T.-Y., Qin, T., Chen, H.-H., Ma, W.-Y.: FRank: a ranking method with fidelity loss. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 383–390 (2007)
42. Rigutini, L., Papini, T., Maggini, M., Scarselli, F.: SortNet: learning to rank by a neural preference function. IEEE Trans. Neural Netw. 22(9), 1368–1380 (2011)
43. Connamacher, H., Pancha, N., Liu, R., Ray, S.: RankBoost+: an improvement to RankBoost. Mach. Learn. 109(1), 51–78 (2020)
44. Zheng, Z., Chen, K., Sun, G., Zha, H.: A regression framework for learning ranking functions using relative relevance judgments. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 287–294 (2007)
45. Mohan, A., Chen, Z., Weinberger, K.: Web-search ranking with initialized gradient boosted regression trees. In: Proceedings of the Learning to Rank Challenge, pp. 77–89. PMLR (2011)
46. Wu, Q., Burges, C.J., Svore, K.M., Gao, J.: Adapting boosting for information retrieval measures. Inf. Retr. 13, 254–270 (2010)
47. Rendle, S., Freudenthaler, C., Gantner, Z., Schmidt-Thieme, L.: BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618 (2012)
48. Lerche, L., Jannach, D.: Using graded implicit feedback for Bayesian personalized ranking. In: Proceedings of the 8th ACM Conference on Recommender Systems, pp. 353–356 (2014)
49. Qiu, H., Liu, Y., Guo, G., Sun, Z., Zhang, J., Nguyen, H.T.: BPRH: Bayesian personalized ranking for heterogeneous implicit feedback. Inf. Sci. 453, 80–98 (2018)
50. Wang, J., Han, P.: Adversarial training-based mean Bayesian personalized ranking for recommender system. IEEE Access 8, 7958–7968 (2019)
51. Kenton, J.D.M.-W.C., Toutanova, L.K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT, vol. 1, p. 2 (2019)
52. Gong, J., Chen, X., Qiu, X., Huang, X.: End-to-end neural sentence ordering using pointer network. arXiv preprint arXiv:1611.04953 (2016)
53. Li, J., Jurafsky, D.: Neural net models of open-domain discourse coherence. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 198–209 (2017)
54. Narayan, S., Cohen, S.B., Lapata, M.: Ranking sentences for extractive summarization with reinforcement learning. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1747–1759 (2018)
55. Wang, T., Wan, X.: Hierarchical attention networks for sentence ordering. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 7184–7191 (2019)
56. Yin, Y., Song, L., Su, J., Zeng, J., Zhou, C., Luo, J.: Graph-based neural sentence ordering. arXiv preprint arXiv:1912.07225 (2019)
57. Lai, S., Wang, A., Meng, F., Zhou, J., Ge, Y., Zeng, J., Yao, J., Huang, D., Su, J.: Improving graph-based sentence ordering with iteratively predicted pairwise orderings. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 2407–2417 (2021)
58. Oh, B., Seo, S., Shin, C., Jo, E., Lee, K.-H.: Topic-guided coherence modeling for sentence ordering by preserving global and local information. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2273–2283 (2019)
59. Li, Y., Cui, B., Zhang, Z.: Efficient relational sentence ordering network. IEEE Trans. Pattern Anal. Mach. Intell. 44(10), 6169–6183 (2021)
60. Manku, R.R., Paul, A.J.: Local and global context-based pairwise models for sentence ordering. Knowl.-Based Syst. 243, 108453 (2022)
61. Golestani, M., Razavi, S.Z., Faili, H.: A new sentence ordering method using BERT pretrained model. arXiv preprint arXiv:2108.11994 (2021)
62. Zhu, Y., Nie, J.-Y., Zhou, K., Liu, S., Ling, Y., Du, P.: BERT4SO: neural sentence ordering by fine-tuning BERT. arXiv preprint arXiv:2103.13584 (2021)
63. Zhu, Y., Zhou, K., Nie, J.-Y., Liu, S., Dou, Z.: Neural sentence ordering based on constraint graphs. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 14656–14664 (2021)
64. Golestani, M., Borhanifard, Z., Tahmasebian, F., Faili, H.: Pruned graph neural network for short story ordering. In: International Conference of the Italian Association for Artificial Intelligence, pp. 213–227. Springer (2021)
65. Bai, G., He, S., Liu, K., Zhao, J.: Bidirectional sentence ordering with interactive decoding. ACM Trans. Asian Low-Resource Lang. Inf. Process. 22(2), 1–15 (2023)
66. Chowdhury, S.B.R., Brahman, F., Chaturvedi, S.: Is everything in order? A simple way to order sentences. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10769–10779 (2021)
67. Gulçehre, Ç., Ahn, S., Nallapati, R., Zhou, B., Bengio, Y.: Pointing the unknown words. In: Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics (ACL) (2016)
68. Zhou, J., Agichtein, E.: RLIRank: learning to rank with reinforcement learning for dynamic search. In: Proceedings of The Web Conference 2020, pp. 2842–2848 (2020)
69. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: a lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942 (2019)
70. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
71. Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L.: BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871–7880 (2020)
72. Li, H.: Learning to Rank for Information Retrieval and Natural Language Processing. Springer, Berlin (2022)
73. Qin, T., Liu, T.-Y., Xu, J., Li, H.: LETOR: a benchmark collection for research on learning to rank for information retrieval. Inf. Retr. 13, 346–374 (2010)
75. Chapelle, O., Chang, Y.: Yahoo! learning to rank challenge overview. In: Proceedings of the Learning to Rank Challenge, pp. 1–24. PMLR (2011)
76. Huang, T.-H., Ferraro, F., Mostafazadeh, N., Misra, I., Agrawal, A., Devlin, J., Girshick, R., He, X., Kohli, P., Batra, D., et al.: Visual storytelling. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1233–1239 (2016)
77. Mostafazadeh, N., Chambers, N., He, X., Parikh, D., Batra, D., Vanderwende, L., Kohli, P., Allen, J.: A corpus and cloze evaluation for deeper understanding of commonsense stories. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 839–849 (2016)
78. Chapelle, O., Metlzer, D., Zhang, Y., Grinspan, P.: Expected reciprocal rank for graded relevance. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management, pp. 621–630 (2009)
79. Lapata, M.: Automatic evaluation of information ordering: Kendall’s tau. Comput. Linguist. 32(4), 471–484 (2006)
80. Li, M., Wang, M.: Optimizing automatic evaluation of machine translation with the ListMLE approach. ACM Trans. Asian Low-Resour. Lang. Inf. Process. (TALLIP) 18(1), 1–18 (2018)
81. Shokouhi, M.: Learning to personalize query auto-completion. In: Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 103–112 (2013)
82. Das, M., De Francisci Morales, G., Gionis, A., Weber, I.: Learning to question: leveraging user preferences for shopping advice. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 203–211 (2013)
83. Feng, X., Liu, M., Liu, J., Qin, B., Sun, Y., Liu, T.: Topic-to-essay generation with neural networks. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 4078–4084 (2018)
84. Feng, X., Zhang, L., Feng, Z., Wu, J., Sun, C., Qin, B.: Research on sentence ordering for essay generation via argumentation identification. J. Chin. Inf. Process. 36(4), 156–165 (2022)
85. Wan, J., Guo, Y.: Machine reading comprehension based on multi-passage ranking. J. Beijing Univ. Chem. Technol. 46(3), 93 (2019)