Large temperature spikes in shrinking solid objects
box_burn_away11.fds.txt
Run this case and then look at the .prof file. The case is a network of thin obstructions being heated and evaporating. The profile file shows one of several surfaces that spike up in temperature as their thickness nears zero. I understand the issue of conserving mass and energy of a shrinking solid, but I'm wondering if we can implement safeguards to avoid these spurious temperature spikes, both up and down.
I did some work on this case. The problem is that we have a very thin solid just before it burns away, and the heat of reaction can send the solid temperature to fictitious highs and lows. Some inputs mitigate the problem, like MINIMUM_LAYER_THICKNESS, which is 1 micron by default, or SUBSTEP_POWER, which is only 2 by default. We need to think of a way to smooth things out as a solid disappears. This may be the cause of instabilities when using BURN_AWAY.
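For illustration, here is a hedged sketch of where these parameters live in an input file. The MATL/SURF names and all values are made up for the example, not taken from the attached case:

&MATL ID='STUFF'
      DENSITY=500.
      CONDUCTIVITY=0.2
      SPECIFIC_HEAT=1.0
      N_REACTIONS=1
      SPEC_ID='PROPANE', NU_SPEC=1.
      REFERENCE_TEMPERATURE=300.
      HEAT_OF_REACTION=1000. /

&SURF ID='BOX'
      MATL_ID='STUFF'
      THICKNESS=0.005
      BURN_AWAY=.TRUE.
      MINIMUM_LAYER_THICKNESS=1.E-5
      SUBSTEP_POWER=3 /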
I think it comes down to not taking a small enough sub-timestep in the 1D routine. I changed the explicit bit to simply scale the timestep by the TMP_FACTOR, without doing the log and exponent lines. When I did that, there were no longer any temperature issues. The plots show the four wall node temperatures with my change, for the original dt=0.01 and for dt=0.0001.
What is the TMP_FACTOR?
Should have been TMP_RATIO.
TMP_RATIO = MAX(TWO_EPSILON_EB,MAXVAL(ABS(DELTA_TMP(1:NWP)))/SF%DELTA_TMP_MAX)
EXPON = MIN(MAX(0,CEILING(LOG(TMP_RATIO)/LOG(2._EB))),SF%SUBSTEP_POWER)
DT_BC_SUB = DT_BC/2._EB**EXPON
I replaced the 2nd and 3rd lines with just
DT_BC_SUB = DT_BC/TMP_RATIO
We would need a few min/max cushions, but I get the idea. Maybe part of the problem is that EXPON is capped at SUBSTEP_POWER, which defaults to 2. I'll play with it some more.
Need to limit TMP_RATIO to >= 1. We may not need or want a limit in the other direction. This is a pretty extreme exposure, with 1200 C radiating walls; the ratio hit ~100 for the last time step as the solid burned out, and for the rest of the run it was much smaller.
Something like:
DT_BC_SUB = DT_BC/REAL(MAX(1,NINT(TMP_RATIO)),EB)
That could work. Let's get the bundle out and then test this idea on more cases.
Actually, we don't need the MAX(1) on the divisor, as the next line stops the sub-timestep from being bigger than the current timestep.
I'm going to test this:
IF (NWP>1) THEN
   DELTA_TMP(1) = (DT_BC/ONE_D%RHO_C_S(1))*&
      (RDX_S(1)*(ONE_D%K_S(1)*RDXN_S(1)*(ONE_D%TMP(2)-ONE_D%TMP(1))+Q_NET_F) + Q_S(1))
   DELTA_TMP(NWP) = (DT_BC/ONE_D%RHO_C_S(NWP))*&
      (RDX_S(NWP)*(Q_NET_B-ONE_D%K_S(NWP-1)*RDXN_S(NWP-1)*(ONE_D%TMP(NWP)-ONE_D%TMP(NWP-1))) + Q_S(NWP))
ELSE
   DELTA_TMP(1) = (DT_BC/ONE_D%RHO_C_S(1))*(RDX_S(1)*(Q_NET_B+Q_NET_F) + Q_S(1))
ENDIF
TMP_RATIO = MAX(TWO_EPSILON_EB,MAXVAL(ABS(DELTA_TMP(1:NWP)))/SF%DELTA_TMP_MAX)
EXPON = MIN(MAX(0,CEILING(LOG(TMP_RATIO)/LOG(2._EB))),SF%SUBSTEP_POWER)
! DT_BC_SUB = DT_BC/2._EB**EXPON
DT_BC_SUB = DT_BC/REAL(MAX(1,NINT(TMP_RATIO)),EB)
DT_BC_SUB = MIN( DT_BC-T_BC_SUB , DT_BC_SUB )
Two problems: first, I think the sign of Q_NET_B is wrong; second, we need an IF statement for the case where NWP=1. I'm running the verification cases with this to see if there is anything glaringly wrong.
I agree with both those changes.
With that new code some cases grind on forever. I have to figure out why. We might have to limit the time step.
The cases that take forever to finish are particles with really fine internal noding. I added a lower bound for the solid phase solver's sub-timestep of 0.1 times the gas phase step. The verification cases were OK with that. I have not committed this; I'm waiting for the release.
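In code form that bound would be something like the line below; this is a sketch, assuming DT is the gas phase timestep visible in this routine:

! Don't let the solid phase sub-step shrink below one tenth of the gas phase step
DT_BC_SUB = MAX( DT_BC_SUB , 0.1_EB*DT )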
I ran the verification cases for the latest commit and they passed. However, the issue of large temperature changes as a surface shrinks is still open. All that has been done so far is to clean up code that had bugs; the underlying issue has not been addressed.
Busy today, so I can't help for real, but here is a thought. I understand we cannot go implicit with the source term. But it might be possible to calculate whether a given time step will flip the sign of the heat flux in the solid phase, and at that point we should do something special.
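One way that check might look, as a hypothetical sketch built from the variable names quoted earlier in the thread (not committed FDS code): estimate the front-node flux before and after the tentative explicit update, and cut the sub-timestep if the update would flip its sign.

! Front-node heat flux using the current temperatures
Q_F_OLD = ONE_D%K_S(1)*RDXN_S(1)*(ONE_D%TMP(2)-ONE_D%TMP(1)) + Q_NET_F
! Same flux evaluated with the tentative explicit update applied
Q_F_NEW = ONE_D%K_S(1)*RDXN_S(1)*((ONE_D%TMP(2)+DELTA_TMP(2))-(ONE_D%TMP(1)+DELTA_TMP(1))) + Q_NET_F
! A sign flip means the step overshot; halve the sub-timestep and retry
IF (Q_F_OLD*Q_F_NEW < 0._EB) DT_BC_SUB = 0.5_EB*DT_BC_SUB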
I've done some more work on this issue. I noticed that if I make the MINIMUM_LAYER_THICKNESS larger than 1 micron, the spurious surface temperatures are lessened, but mass is not conserved. When the obstruction is removed, the "cut-off" mass is not accounted for. I'm working now to fix that. I'd like the MINIMUM_LAYER_THICKNESS to be something like 0.1 mm rather than 1 micron.
We may want some logic that looks at THICKNESS. If someone was, for example, trying to model paint burning off a surface, we wouldn't want it to just disappear in the first time step.
Good point. At the moment I'm just trying to ensure mass conservation when the minimum thickness is a non-trivial fraction of the original thickness. The old value of 1e-6 m was so small that we would not have noticed the mass deficit.
I added logic such that the minimum layer thickness is the lesser of 1e-4 m and 0.1 times the layer thickness.
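As a sketch (the variable names here are stand-ins; the actual variables in the source may differ), the logic amounts to:

! Cap the minimum at 1e-4 m, but never more than 10% of the layer's thickness
MIN_LAYER_THICKNESS = MIN( 1.E-4_EB , 0.1_EB*LAYER_THICKNESS )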
Had a 2.5 mm resolution case with predicted pyrolysis (A+E) fail with a numerical instability. The time it failed was about the time I would expect it to reach the new MINIMUM_LAYER_THICKNESS. I suspect the issue is that with a 2.5 mm grid, suddenly converting 0.1 mm of solid material to gas adds roughly 40 times the mass of the gas in the grid cell in a single timestep. I am rerunning with the old value to see if it completes, but I think we may need an additional check that decreases the MINIMUM_LAYER_THICKNESS when the grid is very fine.
Are we holding up the maintenance release for this? We have to make the decision today. The cluster is going down Wednesday morning.
This is a somewhat extreme case with 2.5 mm grid resolution. I think the sprinkler issue is a far more likely thing to trip up a user than this last-minute discovery. We could make a note about this in the release notes and release announcement and not hold up the release.
Good idea. Would you mind editing the Release Notes to discuss the minimum thickness change? Thanks.
Edited the release notes.
Back of the napkin: a typical combustible material is ~1000 kg/m3, so a 1E-4 m layer being removed is 0.1 kg/m2 dumped into the gas phase in a single timestep. At larger grid sizes we have timesteps of 0.01 to 0.001 s, so that becomes 10 to 100 kg/m2/s. At ambient density (1 kg/m3) this would be a 10 to 100 m/s face velocity. With burning, the temperature will be higher, the density lower, and the speed higher still. 1E-4 m already seems to be pushing the low-speed assumption as it is. With smaller grids the timestep can be 1E-4 s or less, and now that face velocity is 1000 m/s or more.
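The same estimate as a few lines of code, using the numbers above; the variable names are just for the worked example:

RHO_S  = 1000._EB    ! typical solid density (kg/m3)
DELTA  = 1.E-4_EB    ! layer thickness removed in one step (m)
RHO_G  = 1._EB       ! ambient gas density (kg/m3)
DT_GAS = 1.E-3_EB    ! gas phase timestep (s)
! mass flux = RHO_S*DELTA/DT_GAS = 100 kg/m2/s
! face velocity = mass flux / RHO_G = 100 m/s
U_FACE = RHO_S*DELTA/(RHO_G*DT_GAS)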
I'm revisiting this issue. The above case demonstrates the point Jason is making about an excessive amount of pyrolyzate released too fast. I'm considering making MINIMUM_LAYER_THICKNESS equal to 1e-5 m. The PR https://github.com/firemodels/fds/pull/12871 just addresses a problem where the MINIMUM_LAYER_THICKNESS for an OBST layer in HT3D could not be controlled.
I came up with a solution to the problem of spiking normal velocities at disappearing solid boundaries. I just use the thin boundary approach to inject the final bit of mass from the solid once it has reached its MINIMUM_LAYER_THICKNESS. Thin obstruction boundaries cannot have a specified normal velocity. Instead, we just add the appropriate amount of mass to the near-wall grid cell and update the divergence (D_SOURCE) term. For a non-thin obstruction that burns away when its THICKNESS goes to zero, the thin-wall style mass injection only happens for the last bit of mass; before that, the mass is injected as usual.
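Schematically, the thin-wall style injection looks something like the sketch below. M_DOT_PPP and D_SOURCE are the FDS volumetric mass and divergence source arrays; the index names, M_LAST, and the simplified divergence factor are assumptions for illustration, not the committed code:

VC = DX(IIG)*DY(JJG)*DZ(KKG)   ! volume of the near-wall gas cell
! Deposit the remaining solid mass as a volumetric source over one timestep
M_DOT_PPP(IIG,JJG,KKG,N_FUEL) = M_DOT_PPP(IIG,JJG,KKG,N_FUEL) + M_LAST/(DT*VC)
! The added mass expands the cell; credit the divergence source accordingly
D_SOURCE(IIG,JJG,KKG) = D_SOURCE(IIG,JJG,KKG) + M_LAST/(DT*VC*RHOP(IIG,JJG,KKG))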
My solution did not work in all cases. I have to look at it more. I'm going to revert #12875.
I changed the default value of MINIMUM_LAYER_THICKNESS to 1e-5 m. It was originally 1e-6, then I changed it to 1e-4. The latter would, in some cases, produce too much fuel in a given grid cell, which could lead to instability.
I'm closing this issue. The original case has improved, although there will always be cases where shrinking solids show temperature swings at the end.