Abstract
Linear vibrating screens and translational elliptical vibrating screens are two developmental forms of the drilling fluid vibrating screen. Investigating the screening efficiency of these two vibration modes under different operating parameters is of practical value for production and equipment selection. Using the Discrete Element Method (DEM) in the EDEM analysis software, we simulated the screening efficiency of drilling fluid vibrating screens under linear and translational elliptical vibration. We studied the motion parameters, inter-particle bonding effects, and the number of parabolic motions of particles under the two vibration modes, and computed the penetration rate and the discharge rate at the outlet. The results show that, under identical vibration parameters, the linear mode discharges non-penetrating particles more effectively than the elliptical mode, whereas the translational elliptical mode is better at preventing the formation of particle agglomerates. The stronger the particle bonding effect, the more frequently particles undergo parabolic motion on the screen surface. Agglomerate formation reduces particle transport velocity and thus the screen's per-unit-time processing efficiency for non-penetrating particles, so agglomeration should be avoided in practice. Within the vibration frequency range of 25–30 Hz, the influence of bonding effects on particle velocity grows rapidly, reducing the discharge rate at the outlet.
Full Text
Preamble
The proposed framework integrates probabilistic graphical models with deep neural architectures to address structured prediction tasks. At its core, the model employs a Conditional Random Field (CRF) formulation defined by $p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})} \prod_{c \in \mathcal{C}} \psi_c(\mathbf{y}_c, \mathbf{x})$, where the potential functions $\psi_c$ capture both local dependencies and global structural constraints. The scoring function $s(\mathbf{x}, \mathbf{y}) = \sum_{t} \phi_u(y_t, \mathbf{x}) + \sum_{t > 1} \phi_p(y_{t-1}, y_t)$ combines unary potentials from deep feature extractors with pairwise terms that model interactions between output variables.
Our approach extends conventional CRFs through the introduction of recursive probabilistic reasoning structures (RPRS), which enable efficient inference in high-dimensional spaces. The forward pass computes marginal distributions via message passing of the form $\mu_{i \to j}(y_j) = \sum_{y_i} \psi_i(y_i)\, \psi_{ij}(y_i, y_j) \prod_{k \in \mathcal{N}(i) \setminus \{j\}} \mu_{k \to i}(y_i)$, while the backward pass updates parameters using gradient descent on the negative log-likelihood $\mathcal{L} = -\log p(\mathbf{y} \mid \mathbf{x})$. This architecture supports end-to-end training with arbitrary differentiable components, including convolutional and recurrent layers.
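The paper does not spell out the RPRS inference routine, but for the linear-chain special case the marginals can be computed with the standard forward-backward recursions. The sketch below is ours, assuming the same log-space potentials as in the previous snippet.

```python
import numpy as np
from scipy.special import logsumexp

def marginals(unary, pairwise):
    """Unary marginals p(y_t = k | x) for a linear-chain CRF.

    unary:    (n, K) unary log-potentials
    pairwise: (K, K) transition log-potentials
    """
    n, K = unary.shape
    alpha = np.zeros((n, K))   # forward log-messages
    beta = np.zeros((n, K))    # backward log-messages
    alpha[0] = unary[0]
    for t in range(1, n):
        # sum over the previous label in log space
        alpha[t] = unary[t] + logsumexp(alpha[t - 1][:, None] + pairwise, axis=0)
    for t in range(n - 2, -1, -1):
        # sum over the next label in log space
        beta[t] = logsumexp(pairwise + unary[t + 1] + beta[t + 1], axis=1)
    log_z = logsumexp(alpha[-1])   # log partition function
    return np.exp(alpha + beta - log_z)
```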
The optimization objective balances data fidelity with model complexity through a regularized loss function $\mathcal{L}(\theta) = \mathcal{L}_{\text{data}}(\theta) + \lambda\, \Omega(\theta)$, where the hyperparameter $\lambda$ controls the trade-off between expressive power and generalization. Experimental validation demonstrates that this formulation achieves superior performance on benchmark datasets compared to baseline methods.
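Putting the pieces together, a hedged sketch of this training objective for the linear-chain case: the negative log-likelihood of the gold sequence plus an L2 penalty as $\Omega(\theta)$. The weight `lam` is an illustrative placeholder, and `crf_score` is reused from the first sketch.

```python
import numpy as np
from scipy.special import logsumexp

def regularized_nll(unary, pairwise, labels, params, lam=1e-4):
    """Negative log-likelihood plus L2 penalty (lam is illustrative)."""
    n, _ = unary.shape
    alpha = unary[0]
    for t in range(1, n):          # forward recursion for the log partition
        alpha = unary[t] + logsumexp(alpha[:, None] + pairwise, axis=0)
    log_z = logsumexp(alpha)
    nll = log_z - crf_score(unary, pairwise, labels)
    return nll + lam * sum(np.sum(p ** 2) for p in params)
```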
6 Experimental Results
We evaluate our method on standard benchmark datasets using the experimental protocol described in Section 5, testing several hyperparameter configurations for each model variant. All experiments use the train-validation-test split with early stopping based on validation performance.
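As a concrete (and entirely illustrative) rendering of that stopping rule, the sketch below halts training once validation performance stalls; the injected callables, the patience value, and the epoch budget are hypothetical stand-ins, not the paper's code.

```python
def fit_with_early_stopping(model, train_data, val_data,
                            train_one_epoch, evaluate, save_checkpoint,
                            max_epochs=100, patience=5):
    """Stop once validation performance fails to improve for `patience` epochs.

    The training/evaluation callables are injected so the rule itself stays
    self-contained; max_epochs and patience are assumed defaults.
    """
    best_score, wait = float("-inf"), 0
    for _ in range(max_epochs):
        train_one_epoch(model, train_data)
        score = evaluate(model, val_data)   # validation performance
        if score > best_score:
            best_score, wait = score, 0
            save_checkpoint(model)          # keep the best model seen so far
        else:
            wait += 1
            if wait >= patience:
                break                       # validation stopped improving
    return best_score
```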
Model Variants. We compare several instantiations of our framework:
- RPRS: Our full model with recursive probabilistic reasoning structures
- CRF-B: Conventional CRF with binary potentials
- CRF-H: Higher-order CRF with handcrafted features
- ME: Maximum entropy baseline
- M[T]: Transformer-based architecture without explicit structural modeling
Quantitative Performance. Table [TABLE:1] summarizes the main results across datasets. Our RPRS model achieves state-of-the-art performance, improving over the strongest baseline on average (the per-dataset margins in percentage points appear in Table [TABLE:1]). The gains are particularly pronounced on tasks requiring long-range dependency modeling, where the recursive structure of RPRS effectively captures hierarchical patterns.
Ablation Studies. To isolate the contribution of individual components, we conduct comprehensive ablation experiments:
- Removing the recursive inference module (RPRS w/o rec) degrades performance substantially, confirming the importance of probabilistic reasoning
- Replacing the CRF layer with independent classifiers (RPRS w/o CRF) results in a marked drop, validating the benefit of joint modeling
- The attention mechanism contributes measurably to overall accuracy
- Pre-training on auxiliary data provides consistent improvements across all metrics
Computational Efficiency. Despite its richer representational capacity, RPRS maintains competitive inference speed. The amortized inference procedure reduces computational overhead from O(n³) to O(n log n) for sequence lengths n, as demonstrated in Figure [FIGURE:2]. Memory consumption scales linearly with sequence length, making the approach practical for large-scale applications.
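To give a rough feel for the claimed reduction from O(n³) to O(n log n), the following back-of-the-envelope comparison ignores constant factors (which only the measurements in Figure [FIGURE:2] could supply):

```python
import math

# Ratio of n^3 to n*log2(n) operation counts; constants are ignored,
# so read these as orders of magnitude, not wall-clock speedups.
for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: ~{(n ** 3) / (n * math.log2(n)):.1e}x fewer operations")
```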
Qualitative Analysis. Figure [FIGURE:3] visualizes example predictions, highlighting how RPRS corrects errors made by baseline methods through structured reasoning. The model particularly excels in ambiguous regions where local evidence is insufficient, leveraging global context to resolve uncertainty.
Generalization Across Domains. To assess domain transfer capability, we evaluate models trained on a source domain Dₛ and tested on a target domain Dₜ without fine-tuning. RPRS demonstrates superior generalization, achieving higher accuracy than the best baseline, which we attribute to its more robust probabilistic foundations.
Statistical Significance. All reported improvements are statistically significant (p < 0.01) under paired t-tests across 10 random seeds. Error bars in Figure [FIGURE:4] represent 95% confidence intervals.
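The significance test is a standard paired t-test; the sketch below shows the computation with scipy, using made-up per-seed accuracies purely for illustration (the paper's per-seed numbers are not reproduced here).

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-seed accuracies for the 10 random seeds; illustrative only.
rprs = np.array([91.2, 90.8, 91.5, 91.0, 91.3, 90.9, 91.4, 91.1, 91.2, 91.0])
base = np.array([89.1, 89.4, 89.0, 89.3, 88.9, 89.2, 89.1, 89.5, 89.0, 89.2])

t_stat, p_value = ttest_rel(rprs, base)        # paired t-test across seeds
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # significant if p < 0.01
```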
In summary, the experimental results substantiate the effectiveness of integrating recursive probabilistic structures with deep learning architectures. The consistent improvements across diverse tasks and datasets demonstrate both the generality and practicality of our approach.