Numerical modeling of water hammer with unsteady wall friction and cavitation effects

Abstract
Water hammer models commonly used in hydraulic engineering often neglect cavitation induced by negative pressure, and most still compute friction with a steady model, namely the Darcy-Weisbach formula. As a result, their simulations underestimate the destructive potential of water hammer. This study combines four unsteady friction models with two cavitation models (the discrete vapor cavity model, DVCM, and the discrete gas cavity model, DGCM) in a water hammer model for pressurized pipelines built on the method of characteristics, validates the model against published experimental results, and analyzes the sensitivity of the results to the model's primary parameters. The results show that the DGCM accurately predicts cavitation, the collapse water hammer it induces, and the pressure-wave phase lag caused by cavitation. The DVCM predicts severe cavitation and the associated collapse water hammer with low accuracy and cannot reliably capture the cavitation-induced phase lag. Three unsteady friction models (the Zielke, Vardy & Brown, and Zarzycki models) remain accurate under all flow conditions tested here, whereas the Brunone model predicts the pressure waveform with insufficient accuracy. When cavitation and collapse water hammer occur, unsteady friction has little influence on the pressure-wave evolution after cavitation, so all four unsteady friction models are applicable to water hammer simulation involving cavitation.
1. Introduction
Water hammer, the pressure transient that follows a rapid change of flow velocity (for example, fast valve closure or pump trip), can generate surges large enough to damage pressurized pipeline systems. Engineering models of such transients commonly make two simplifications: wall friction is evaluated with a steady model, namely the Darcy-Weisbach formula applied to the instantaneous mean velocity, and cavitation caused by negative pressure is neglected. Both simplifications distort the predicted pressure history and, in particular, lead to an underestimate of the destructive potential of water hammer when vapor cavities form and subsequently collapse. This study therefore develops a water hammer model for pressurized pipelines using the method of characteristics (MOC) that incorporates four unsteady friction models and two cavitation models, the discrete vapor cavity model (DVCM) and the discrete gas cavity model (DGCM). The model is validated against published experimental results, and the influence of its primary parameters on the computed pressures is analyzed.
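For orientation, the magnitude of the initial surge can be estimated with the classical Joukowsky relation; the numbers below are illustrative and are not taken from this study:

$$ \Delta H = \frac{a\,\Delta V}{g} $$

For a typical wave speed $a = 1200$ m/s and an instantaneous velocity change $\Delta V = 1$ m/s, $\Delta H = 1200/9.81 \approx 122$ m of head. Even modest velocity changes therefore warrant transient analysis, and the collapse of a vapor cavity can produce pressures above this estimate.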
2. Related Work
Classical water hammer analysis evaluates wall friction quasi-steadily with the Darcy-Weisbach formula, which reproduces the first pressure peak well but underpredicts the damping and distortion of the subsequent oscillations. Unsteady friction models were developed to correct this: Zielke derived a weighting-function model for laminar transient flow that convolves past accelerations; Vardy & Brown extended the weighting-function approach to turbulent flow; Zarzycki proposed a further weighting-function variant; and Brunone proposed a one-coefficient model based on the instantaneous local and convective accelerations. For transients with column separation, the DVCM and the DGCM are the most widely used cavitation models; Godunov-type finite-volume schemes, the MacCormack method, and general CFD solvers (e.g., Fluent) have also been applied to cavitating transients in the literature. Most engineering studies, however, still combine steady friction with a cavitation-free formulation, which motivates the combined model examined here.
3. Methodology
3.1 Governing Equations and the Method of Characteristics
The transient flow of a slightly compressible liquid in an elastic pipe is described by the one-dimensional continuity and momentum equations, with pressure head H(x, t) and velocity V(x, t) as unknowns and the friction slope split into a steady part (Darcy-Weisbach) and an unsteady part supplied by one of the friction models of Section 3.2. The equations are solved with the method of characteristics on a fixed rectangular grid with $\Delta x = a\,\Delta t$, which converts the two partial differential equations into compatibility equations along the C+ and C- characteristics.
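A minimal statement of the standard formulation in common notation (cf. classical water hammer theory; the paper's own symbols may differ):

$$ \frac{\partial H}{\partial t} + \frac{a^2}{g}\frac{\partial V}{\partial x} = 0, \qquad \frac{\partial V}{\partial t} + g\frac{\partial H}{\partial x} + g\,(J_s + J_u) = 0, $$

where $J_s = f\,V|V|/(2gD)$ is the steady Darcy-Weisbach friction slope and $J_u$ the unsteady contribution. Integrating along the characteristics $dx/dt = \pm a$ gives, for node $P$ at the new time level (with $A$ and $B$ the upstream and downstream neighbors at the old level):

$$ C^+:\; H_P = H_A - \frac{a}{g}\,(V_P - V_A) - a\,\Delta t\,(J_s + J_u)\big|_A, \qquad C^-:\; H_P = H_B + \frac{a}{g}\,(V_P - V_B) + a\,\Delta t\,(J_s + J_u)\big|_B. $$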
3.2 Unsteady Friction Models
Four unsteady friction models are embedded in the MOC scheme; their formulations are sketched after this subsection:
- the Zielke model, a weighting-function model derived for laminar transient flow;
- the Vardy & Brown model, a weighting-function model extended to turbulent flow;
- the Zarzycki model, a further weighting-function variant;
- the Brunone model, a one-coefficient model based on instantaneous accelerations.
In the weighting-function models, the unsteady friction term is a convolution of the past local accelerations with a kernel W that encodes the frequency-dependent wall shear; the three models differ only in the form of W. The Brunone model instead adds a term proportional to the instantaneous local and convective accelerations through a single coefficient k.
3.3 Cavitation Models
When the computed pressure at a node falls to the vapor pressure of the liquid, a pure-liquid model loses validity; the two discrete cavity models handle this by lumping a cavity at the node while the liquid between nodes retains the liquid wave speed a. In the DVCM, a vapor cavity is allowed to form at a node whenever the pressure there reaches vapor pressure. The node pressure is then held at vapor pressure, the C+ and C- equations yield (generally different) inflow and outflow discharges, and the cavity volume follows from their imbalance; when the volume returns to zero the cavity collapses and a collapse water hammer is generated. The DGCM additionally assigns a small amount of free gas to each node, whose volume follows the ideal gas law, so the node pressure varies continuously instead of being clamped; this suppresses the spurious pressure spikes of the DVCM and reproduces the reduced wave speed near cavitating regions, which produces the observed phase lag of the pressure wave.
The main tunable parameters of the combined model, which in this class of models are typically the weighting factor of the cavity-volume update, the initial gas void fraction of the DGCM, and the Brunone coefficient k, are varied in Section 4.3 to assess their influence on the computed pressure histories.
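To make the cavitation bookkeeping concrete, the following is a compact Python sketch of a single interior-node update with a DVCM cavity, assuming the MOC compatibility constants defined above; the names (`Cp`, `Cm`, `B`) and the structure are illustrative, not the paper's code:

```python
def dvcm_node_update(Cp, Cm, B, H_vap, V_cav, dt, psi, dQ_old):
    """One MOC time step at an interior node with a discrete vapor cavity.

    Cp, Cm : C+ / C- compatibility constants, H = Cp - B*Qp and H = Cm + B*Qm,
             with B = a / (g * A) the characteristic impedance.
    H_vap  : vapor-pressure head at the node.
    V_cav  : current cavity volume (0 if no cavity).
    dQ_old : (Qout - Qin) from the previous step, for the psi-weighting.
    """
    # Trial solution assuming pure liquid (no cavity): Qp = Qm = Q.
    H = 0.5 * (Cp + Cm)
    if V_cav <= 0.0 and H > H_vap:
        Q = (Cp - Cm) / (2.0 * B)
        return H, Q, Q, 0.0          # no cavity at this node

    # Cavity present (or pressure reached vapor head): clamp H = H_vap
    # and solve the two characteristics separately for inflow/outflow.
    H = H_vap
    Q_in = (Cp - H) / B              # discharge arriving on the C+ line
    Q_out = (H - Cm) / B             # discharge leaving on the C- line
    dQ = Q_out - Q_in
    V_new = V_cav + dt * (psi * dQ + (1.0 - psi) * dQ_old)
    if V_new < 0.0:                  # cavity closes within this step
        V_new = 0.0                  # collapse water hammer follows
    return H, Q_in, Q_out, V_new
```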
4. Experiments
4.1 Experimental Setup
The model is validated against experimental pressure records published in the literature for a reservoir-pipeline-valve system in which rapid valve closure produces water hammer with and without column separation. Flow conditions ranging from no cavitation through moderate to severe cavitation are simulated with each combination of friction model and cavitation model, and the computed pressure histories at the valve are compared with the measurements. The geometric and hydraulic parameters of the test rig are summarized in Table 1.
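Setting up such a case requires the pressure wave speed; the sketch below evaluates the standard elastic-pipe formula with illustrative values, not the rig parameters of this study:

```python
import math

def wave_speed(K=2.1e9, rho=1000.0, E=200e9, D=0.02, e=0.001, c1=1.0):
    """Pressure wave speed in a thin-walled elastic pipe.

    K   : bulk modulus of the liquid [Pa]
    rho : liquid density [kg/m^3]
    E   : Young's modulus of the pipe wall [Pa]
    D   : inner diameter [m],  e : wall thickness [m]
    c1  : restraint coefficient (~1 for a pipe anchored against axial movement)
    """
    return math.sqrt(K / rho) / math.sqrt(1.0 + c1 * (K * D) / (E * e))

# Example: steel pipe, D = 20 mm, e = 1 mm  ->  a ~ 1.3 km/s
print(f"a = {wave_speed():.0f} m/s")
```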
4.2 Results
The simulations reproduce the measured pressure histories over the tested conditions. Key findings include:
- the DGCM tracks the measured pressure peaks, the collapse water hammer after column separation, and the cavitation-induced phase lag of the pressure wave across all tested conditions;
- the DVCM is acceptable for mild cavitation but loses accuracy for severe cavitation and the associated collapse water hammer, and it does not capture the phase lag reliably;
- the Zielke, Vardy & Brown, and Zarzycki friction models reproduce the damping of the pressure oscillations under all tested flow conditions, whereas the Brunone model predicts the pressure waveform with insufficient accuracy;
- once cavitation and collapse water hammer occur, the choice of unsteady friction model has little influence on the pressure history after the first cavity forms.
Quantitative results are summarized in Table 2 and Figure 1.
4.3 Analysis
A sensitivity analysis of the main model parameters shows the behavior expected of this model class. The collapse pressures computed by the DVCM are sensitive to the grid resolution and to the weighting factor of the cavity-volume update, while the DGCM results are chiefly governed by the assumed initial gas void fraction, which controls the local reduction of the wave speed and hence the predicted phase lag. When cavitation occurs, the unsteady friction model matters little after the first cavity forms, because the damping introduced by cavity growth and collapse dominates the friction damping; this is why all four friction models remain usable in cavitating simulations.
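A hypothetical harness for such a sensitivity sweep, assuming a solver entry point `run_moc(...)` exposing the parameters discussed above (the name and signature are illustrative):

```python
from itertools import product

def sweep(run_moc):
    """Sweep friction models and DGCM parameters; record the peak head at the valve.

    run_moc(friction, cavitation, k_brunone, alpha0, psi) -> list[float]
    is assumed to return the pressure-head history at the valve.
    """
    frictions = ["zielke", "vardy_brown", "zarzycki", "brunone"]
    alpha0s = [1e-7, 1e-6, 1e-5]   # DGCM initial gas void fractions
    psis = [0.5, 1.0]              # cavity-volume weighting factors
    results = {}
    for fric, a0, psi in product(frictions, alpha0s, psis):
        H = run_moc(friction=fric, cavitation="dgcm",
                    k_brunone=0.03, alpha0=a0, psi=psi)
        results[(fric, a0, psi)] = max(H)
    return results
```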
5. Discussion
The combined model offers a practical compromise: it retains the efficiency of one-dimensional MOC computation while capturing both unsteady friction and cavitation. However, limitations remain. The discrete cavity models lump all vapor or gas at computational nodes, so the predicted collapse pressures can depend on the grid; the model is one-dimensional and neglects fluid-structure interaction, which can matter when the pipe is free to move; and the Brunone coefficient and the initial gas void fraction must be calibrated. Future work will explore extensions to handle more complex scenarios.
6. Conclusion
This paper develops a method-of-characteristics water hammer model for pressurized pipelines that combines four unsteady friction models with two discrete cavitation models and validates it against published experiments. The DGCM accurately predicts cavitation, the resulting collapse water hammer, and the cavitation-induced phase lag of the pressure wave, whereas the DVCM is markedly less accurate for severe cavitation. The Zielke, Vardy & Brown, and Zarzycki friction models are accurate under all tested conditions, while the Brunone model is less so; because unsteady friction has little influence once cavities form, all four friction models are applicable to cavitating water hammer simulation. The model achieves this accuracy at the modest computational cost of a one-dimensional scheme and provides a basis for future work on transient cavitating pipe flows.